Developing Meta’s Orion AR Glasses
November 14, 2025
Transcript
Jinsong Yu: Today we’re going to talk about Orion. Before we talk about AR glasses, I want to go over some terminology, basic stuff. You may have heard of VR, virtual reality. You may have heard of MR, mixed reality, and AR, augmented reality. Let me just give you a rundown on what the terminologies are. Starting with VR, virtual reality. Many of you may have played VR games. You may be standing in a corridor shooting aliens. You may be in a submarine. When you play VR games, you actually do not want to see the physical environment, you want to be fully immersed in the virtual world. A VR device, by design, will block your senses from the physical environment. That’s VR. MR, mixed reality, is one step further from VR, so the device still blocks the light, but sometimes you want to see what’s going on in the environment.
What you do is basically put a couple cameras in front of the device, then the information just goes through the camera, goes through the electronics, goes on the display, and so that’s a passthrough. You’ll see the environment, but it’s actually going through the camera to the display. The physical device is still blocking the light. That’s mixed reality. AR, augmented reality, is something completely different. We’re talking about glasses. You actually see your physical environment without any electronics, the lenses are transparent, and you put augments on the display as well. Imagine you have a physical wall in your room, you can put a virtual TV there, or you have a physical sofa and you put a virtual pad on the sofa. That’s augmented reality. That’s what we’re talking about here.
Meta’s Orion (AR Glasses)
Before we dive into the technical details, I want to introduce the product. I’m not a product manager, I’m an engineer, so I figured we’d introduce the best product manager at Meta to talk about the product. Our best product manager is Mark Zuckerberg. I caught him on video. Even though he’s not here, he will do the product introduction for us.
Zuckerberg: This is Orion, our first fully functioning prototype. If I do say so, the most advanced glasses the world has ever seen. About a decade ago, I started putting together a team of the best people in the world to build these glasses. The requirements are actually pretty simple, but the technical challenges to make them are insane. They need to be glasses. They’re not a headset. No wires. Less than 100 grams. They need wide field of view, holographic displays. Sharp enough to pick up details. Bright enough to see in different lighting conditions. Large enough to display a cinema screen or multiple monitors for working wherever you go, whether you’re in a coffee shop or on a plane, or wherever you are. You need to be able to see through them.
People need to be able to see through them, too, and make eye contact with you. This isn’t passthrough. This is the physical world with holograms overlaid on it. If someone messages you, you will see that. Instead of having to pull out your phone, there will just be a little hologram, and with a few subtle gestures you can reply without getting pulled away from the moment. Or if you want to be with someone who is far away, they’re going to be able to teleport as a hologram into your living room as if they’re right there with you. You’re going to be able to tap your fingers and bring up a game of cards or chess or holographic ping pong, or whatever it is that you want to do together. You can work or play or whatever.
Jinsong Yu: That’s the product. It’s quite amazing. Everyone that’s tried the product is really amazed by the experience. A lot of you may have this question, what about Quest? Or maybe even, what about Apple Vision Pro? I really like Apple Vision Pro. I really like Meta Quest. They’re just different products. Because they’re different products, they have different goals. They make different design tradeoffs.
For example, if you look at Apple Vision Pro, the headset is almost 600 grams. You’re literally wearing a brick on your head. Even with 600 grams, there’s a wire coming out. Why? Because there’s a battery pack. The battery pack itself is another 350 grams. They cannot put it on the head. It’s too heavy. Whereas for this one, we make very different tradeoffs. If you put AVP on, you get the awesome display. Their goal is to get the best display, and they did. No question. This one, if you compare the display, it’s not as good as Apple Vision Pro. It’s not even as good as Meta Quest. The key here is 100 grams. You can actually wear it comfortably on your face. You actually see the physical world. That’s the product experience we want to get. They’re just very different product tradeoffs. We’re going to talk about three parts of the product.
Obviously, you have the glasses. That’s the highlight of everything. That’s the focus. We also have the EMG band, which we use for input; I’ll talk about it in detail. Finally, we have the compute puck. The reason we have the compute puck is we want to offload as much compute as possible from the glasses so that we can keep the glasses lightweight and comfortable to wear.
The Orion Glasses
Let’s turn our attention to the most exciting technology. The display here, actually, we spent years perfecting. It was extremely hard. When we started, it wasn’t just that Meta didn’t know how to do this; nobody in the world knew how to do this at all. What we ended up doing is we have Micro LED projectors. They’re on the temple arms, the front side of the temple arms. We actually did our own custom Micro LED projector. The reason we did our own was that we needed the tiniest pixels, the finest pixel density in the whole world. Nobody else had done this before. We had to do our own Micro LED projector. That was only the starting point of the journey of the photon. You can imagine the photon gets generated by the Micro LED projector, but then it needs to get into your eye.
The projector is not in front of you, it’s actually behind the lens, on the temple arm. What happens is, inside the lenses there is a waveguide. The way the waveguide works is a photon coming out of the Micro LED from the side here gets into the lens and gets reflected many times, back and forth, inside the lens until it reaches the right place to come out toward your eye.
Then it comes out of the waveguide into your eye. You can imagine how precisely all of this needs to be controlled. For people who wear glasses like me, you’re probably aware that for normal glasses, the refractive index is relatively low. To get a higher index, you go to high-index plastic, like what I’m wearing right now. Even that is not high enough. If we used high-index plastic like this to build the display, the display would be really big and really thick. You do not want to wear that on your face. We ended up using silicon carbide. Again, it’s something that nobody in the world knew how to do. Silicon carbide is not new, but using silicon carbide to build the lenses and the waveguide, that was completely new. We spent years perfecting the technique, the process, to make it actually work.
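To give a rough sense of why the refractive index matters so much for a waveguide, here is a back-of-the-envelope sketch. The index values are typical published figures for high-index plastic and silicon carbide, not Orion’s actual optical specs; the point is only that a higher index gives a smaller critical angle for total internal reflection, which leaves more angular room to guide a wide field of view.

```python
# Back-of-the-envelope: why a higher refractive index helps a waveguide.
# Light stays trapped in the lens by total internal reflection (TIR) only
# when it hits the surface at an angle steeper than the critical angle
# arcsin(1/n). A higher index n means a smaller critical angle, so a wider
# range of ray angles (and hence a wider field of view) can be guided.
# The index values below are typical published figures, not Orion's specs.
import math

def critical_angle_deg(n: float) -> float:
    """Critical angle for total internal reflection at a material/air boundary."""
    return math.degrees(math.asin(1.0 / n))

for name, n in [("high-index plastic", 1.7), ("silicon carbide", 2.6)]:
    theta_c = critical_angle_deg(n)
    guided = 90.0 - theta_c  # rough angular range that stays guided
    print(f"{name:18s} n={n:.1f}  critical angle ~ {theta_c:4.1f} deg  "
          f"guided range ~ {guided:4.1f} deg")
```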
The end result is we have this really big field of view, 70 degrees. Everyone who tries on the glasses is impressed; it actually feels like you’re putting augments on the live world. That is the wow factor. There are many cameras on the glasses. You may or may not be able to see them. There are four outward-facing cameras. Their purpose is to figure out the geometry of the environment and where the glasses are relative to that geometry. I’ll talk about that later when I talk about world-locked rendering. There are also inward-facing cameras for eye tracking. They can figure out what you’re looking at. What you’re looking at actually becomes an important input signal into the system. There are speakers and batteries. Like I said, everything is connected wirelessly. There is no wire coming out of this. It has a 2-hour battery life. It’s pretty amazing.
This is the same device. We built a handful of the devices with transparent casing, so you can see through and see what’s inside. There’s not a single millimeter of gap in there that’s not packed with something useful. Everything is packed with microelectronics. One thing I want to call out: usually, when you think about this kind of device, you make an analogy to a computer. When you think about a computer, you get a CPU. When you think about small devices like a phone or a watch, there’s an SoC, a system on chip. You pack a CPU in there. You pack a GPU in there. You may put your DRAM on there as well, and I/O controllers, a bunch of things. What we have here is different. We don’t have a single CPU. We don’t have a single SoC.
Instead, we had 11 microcontroller units. They’re custom-designed silicon. You may ask, why? Why go that far? The problem is thermal and heat dissipation. Imagine we have a single SoC like a phone or watch. There’s a lot of compute going on on these glasses. I just said there are four outward-facing cameras computing the geometry of the environment. There are two inward-facing cameras computing your eye positioning. There’s a lot of compute going on.
If we put all the compute load on a single CPU or SoC, then what happens is that tiny square will generate a lot of heat, and we do not know how to dissipate that heat. That will become a hot spot, and anything surrounding that square will melt. Instead, we put 11 microcontroller units in there. You can imagine the complexity it brings. It’s like on this headset, on these glasses, you’re working with a distributed computer cluster. Everything becomes complicated. You want to debug. You want to figure out what’s going on. You’re debugging distributed computing here. Even booting is complicated, just to make the 11 microcontrollers boot up and coordinate and start talking to each other. Everything is complex. Performance work is complex. Software update is complex. We spent a lot of time dealing with the issues there.
You may ask, “You’re lying here. Because you just said there are three parts. One of them is the compute puck. The whole reason why the compute puck exists is to offload the compute. Why do you have 11 microcontrollers on here? Why don’t you just send everything down to the compute puck? Let the compute puck do the work. Done”. There’s another problem. Processing the signals coming from the camera and doing the compute, that is expensive. It turns out if you want to transmit the data wirelessly to the compute puck, depending on the workload, it actually might be even more expensive to do the wireless transmission. Pretend for a moment that the compute puck is completely free. There’s no battery life concern. There’s no heat dissipation concern. Nothing. It’s completely free.
Just to optimize the thermals on the glasses themselves, you actually want to minimize the information transmitted from the glasses down to the compute puck. We did the math. There’s no comparison. We have to do the compute on the glasses. If we transmitted everything to the compute puck, the wireless transmission would consume far too much power. Even with all the compute happening on the glasses, and with the transmission between the two units already minimized, it turns out the chipset that does the wireless transmission is still the hottest hotspot on the glasses. Just to give you an intuition for what we’re talking about here.
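To make that tradeoff concrete, here is a purely illustrative calculation. Every number in it is made up for the sake of the arithmetic (Orion’s real figures aren’t public); it only shows why shipping raw camera data over a radio can dwarf the cost of processing it locally and sending a compact result instead.

```python
# Illustrative only: why shipping raw sensor data off the glasses can cost
# more energy than processing it locally. All numbers below are made up
# for the sake of the arithmetic; they are not Orion's actual figures.

RADIO_NJ_PER_BIT = 5.0        # hypothetical radio energy cost (nJ/bit)
COMPUTE_NJ_PER_PIXEL = 0.5    # hypothetical on-device vision cost (nJ/pixel)

frame_w, frame_h, bits_per_pixel, fps, cameras = 640, 480, 10, 30, 4

pixels_per_s = frame_w * frame_h * fps * cameras
bits_per_s = pixels_per_s * bits_per_pixel

radio_mw = bits_per_s * RADIO_NJ_PER_BIT * 1e-9 * 1e3      # nJ/s -> mW
compute_mw = pixels_per_s * COMPUTE_NJ_PER_PIXEL * 1e-9 * 1e3

# Alternative: keep the vision compute on the glasses and send only a
# 6-DoF pose at 60 Hz instead of raw frames.
pose_bits_per_s = 6 * 32 * 60
pose_radio_mw = pose_bits_per_s * RADIO_NJ_PER_BIT * 1e-9 * 1e3

print(f"stream raw frames : ~{radio_mw:7.1f} mW of radio power")
print(f"process on glasses: ~{compute_mw:7.1f} mW of compute power")
print(f"send only a pose  : ~{pose_radio_mw:9.5f} mW of radio power")
```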
The EMG Band
The next one, the EMG band, surface electromyography. The way electromyography, or EMG, works is very interesting. Let’s say you want to do some hand gesture, like tapping your index finger against your thumb, like this gesture. Your brain sends a signal, the command, through your nervous system. It travels to the arm and the hand. It activates the muscles. The contraction of the muscles actually generates tiny electrical signals. If you have sensors that are sensitive enough to pick up those signals, you can reverse engineer them and figure out what your hand is trying to do. I said surface electromyography. The surface part is the keyword here.
In order to use the product, we’re not inviting you to an operating room where there’s a nice doctor cutting you open to implant some electrodes. We’re not doing that. You just wear the band on your arm like a watch. Nothing invasive. Nothing protrudes into your skin. We just pick up the electrical signal from the surface of the skin. That’s it. The electrical signal on the skin is actually extremely faint. We’re talking about microvolt-level electrical signals. We have multiple sensors because we need to pick up signals from the different muscle groups. The sensors are super sensitive. We do amplification. After amplification, we do a lot of signal processing and feed it into a machine learning model.
The machine learning model figures out what gesture you’re trying to make. There’s another challenge here. This is a very small device, just like a watch. The battery in there is tiny. We want that tiny battery to last the whole day, an 18-hour battery life. Every single milliwatt of power consumption actually matters here. It’s not that hard to train a model in the data center that works really well. It’s really hard to distill the model to run on the band itself within that tiny power budget and still be able to do all the machine learning inference and classification. A lot of engineering and science work went into making the model small, making the model power efficient, but still having very high accuracy.
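As a rough illustration of the kind of pipeline being described, here is a minimal sketch: band-pass filter the raw signal, compute windowed RMS features per electrode, and feed them to a tiny classifier. The sample rate, channel count, filter band, and classifier are all assumptions for the toy example, not the band’s actual design.

```python
# Minimal sketch of a surface-EMG pipeline: band-pass filter the raw
# microvolt signal, extract windowed RMS features per electrode, and feed
# them to a tiny classifier. Toy illustration with synthetic data; not
# Meta's actual model or parameters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000            # assumed sample rate (Hz)
N_CHANNELS = 8       # assumed number of surface electrodes
WINDOW = 200         # 100 ms feature window

def bandpass(x, lo=20.0, hi=450.0, fs=FS):
    """Keep a typical surface-EMG band, reject drift and high-frequency noise."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def rms_features(x):
    """One RMS value per channel per window: a crude muscle-activation measure."""
    n_win = x.shape[-1] // WINDOW
    trimmed = x[..., : n_win * WINDOW].reshape(N_CHANNELS, n_win, WINDOW)
    return np.sqrt((trimmed ** 2).mean(axis=-1)).T   # shape (n_win, N_CHANNELS)

def classify(features, weights, bias):
    """Tiny linear classifier standing in for the real on-band model."""
    logits = features @ weights + bias
    return logits.argmax(axis=-1)    # e.g. 0 = rest, 1 = pinch, 2 = swipe

# Synthetic stand-in for one second of raw EMG (microvolts).
raw = np.random.randn(N_CHANNELS, FS) * 20.0
feats = rms_features(bandpass(raw))
gestures = classify(feats, np.random.randn(N_CHANNELS, 3), np.zeros(3))
print(gestures)
```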
The Compute Puck
This is the last part of the three-part device. This is the compute puck; the slide shows three different views of the same device. In a way, compared with the other two, this is the least sexy, most boring one. It’s a standard compute architecture. We just run Android without a display. You can think of it as an Android phone with no screen. That’s actually the wrong way to look at it, though. Because if you think about the user experience, this is the brain. You can actually think of the glasses and the band as peripherals. Everything that matters actually happens right here. All the business logic, all the presentation logic, everything runs on the compute puck. I’ll give you a funny anecdote.
A year ago, April 2024, we were still in the heat of software development and optimization. When we started, we thought the thermal problem on the glasses would be a major challenge, and we didn’t anticipate the thermals on the compute puck being that bad. This time last year, the compute puck was running extremely hot. When we needed to do demos, we actually had to put a cold soda can on top to stop it from melting. Of course, we spent several months doing very serious performance work: reducing memory copies, rethinking the component boundaries and APIs, optimizing scheduling and signaling, all that stuff. A lot of engineering grinding to get the thermals down. Then we could run it at a reasonable temperature.
Architecture Highlights
That was about the device. I want to now change gears and talk about two architectural highlights, how the three parts work together to build the AR experiences. The first part is world-locked rendering. This is very AR specific. Before we talk about the computing, before we talk about AR glasses, let’s first think about how the human eye and the human brain work. Imagine you’re looking at me, and then you move your head to the left. Imagine that when you move your head, you see me magically get transported off the stage in that direction. That’s super confusing, super surprising. It’s disorienting. With the naive way of implementing the display on the glasses, that is exactly what happens. The naive way is, I just show the image, whatever I need to show, on the glasses, and I’m done. If the glasses move, the image moves with the glasses. That’s called head-locked rendering.
If you do head-locked rendering, then when you move your head or your body, whatever content is rendered moves with it. That is very disorienting. World-locked rendering works very differently. You make the virtual object anchored in the physical world. Over millions of years of evolution, the human eye and brain have already worked out how to make this work. If you’re looking at me and you move your head and your body, you spend no effort, but your eyes keep looking at me; your eyeballs just move to compensate for the head and body movement. World-locked rendering is where we put the virtual content in the display anchored to the physical world.
In order to make this work, it’s actually pretty complex, if you think about it. We talk about SLAM. SLAM stands for Simultaneous Localization and Mapping. Mapping is how you figure out the geometry of the room. Like, there is a wall there. There’s a table there. There’s a sofa there. You build a point cloud to represent the wall, the sofa, the table, the furniture, the humans. Mapping is mapping out the geometry of the environment. Localization is figuring out where you are relative to that physical environment. VIO is very related to SLAM. VIO stands for Visual Inertial Odometry. What you do is you actually use multiple cameras. One camera is not enough. You need multiple cameras. You also use your IMU sensors. Together, you build VIO. VIO figures out, at runtime, in real time, where you are relative to the environment and what your orientation is.
From that, having that complete information, now you can do world-locked rendering. The way it works is you communicate the relative position in the physical environment down to the compute puck. The compute puck renders the content with that relative positioning in mind. It renders the 3D content and says, I’m rendering this virtual object or that virtual screen over there, but I’m assuming the camera position, the eye position, is right here. Then that information gets pushed to the glasses at a relatively high frequency, but not high enough. We actually figured out that in order for the motion to be very smooth, so you don’t get any discomfort when you look at it, we need to recalculate at 90 hertz. That’s a very high frequency. Remember, I just said pushing bits and bytes over the wireless link is expensive. We actually cannot afford to push that information at 90 hertz. What we end up doing is we push that information at a lower frequency. It’s still relatively high, but not 90 hertz.
Then we do another round of computation. The glasses do SLAM and VIO at high frequency, at 90 hertz. If your head moves a tiny bit, the glasses take the old frame computed by the compute puck and re-project it in real time at 90 hertz to readjust how your eyes should see the content. That way, the content stays where it is in a very smooth and non-surprising fashion. Remember, at the very beginning, I talked about the wide field of view, 70 degrees. Usually, we actually don’t use the entire 70-degree field of view. It’s actually a smaller field of view. The reason is world-locked rendering. Imagine you fill the entire view with content, and then you move your head a little bit.
Then what happens is when you move toward the left, the right-hand side gets cut off. When you move the other direction, the left-hand side gets cut off. That’s actually not a good experience. What you want to do is you want to put your content with some margin on the outside, so when you move, nothing gets cut off. This is also part of the reason you want a really wide field of view. Otherwise, when you move your head or body around, you get very surprising artifacts, and it’s not a good experience.
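To make the re-projection step concrete, here is a toy sketch of the idea: the puck renders a frame assuming one head pose, the glasses measure a fresher pose from VIO, and the display warps the already-rendered frame so world-locked content stays put. This version is rotation-only with a simple pinhole camera model, which is an assumption for illustration; the real pipeline is far more involved.

```python
# Toy sketch of late-stage re-projection: the puck rendered a frame assuming
# head pose R_render, the glasses now measure a fresher pose R_now via VIO,
# and the display warps the already-rendered frame so world-locked content
# stays put. Rotation-only pinhole model, purely for illustration.
import numpy as np

def intrinsics(fov_deg, width, height):
    """Pinhole camera matrix for a display with a given horizontal FOV."""
    f = (width / 2) / np.tan(np.radians(fov_deg) / 2)
    return np.array([[f, 0, width / 2],
                     [0, f, height / 2],
                     [0, 0, 1.0]])

def rot_y(deg):
    """Rotation about the vertical axis (head turning left/right)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def reproject(pixels, K, R_render, R_now):
    """Map pixels rendered under R_render to where they belong under R_now."""
    H = K @ R_now @ R_render.T @ np.linalg.inv(K)
    p = np.hstack([pixels, np.ones((len(pixels), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

K = intrinsics(fov_deg=70, width=1280, height=1280)
R_render = rot_y(0.0)     # pose the puck assumed when it rendered the frame
R_now = rot_y(1.5)        # fresher pose from on-glasses VIO, 1.5 degrees later

anchors = np.array([[640.0, 640.0], [200.0, 400.0]])   # pixels of a hologram
print(reproject(anchors, K, R_render, R_now))
```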
So far, I’ve been talking about the display, the visual aspect. I also want to talk about the audio aspect, spatial audio. Imagine someone calls you on a video call, and that person sits at some position in the room and is world-locked. You also want the audio to feel as if it’s coming from that person. It would be disorienting if the video shows the person in one direction and the audio feels like it comes from another direction, or, as with normal headphones, the audio sometimes feels like it comes from inside your head. It’s actually not a good experience.
The way spatial audio works is we do the computation and make fine adjustments to the timing and signal strength between the two ears, so it feels like the audio comes from a specific direction and a specific location. Then you can imagine, if you have multiple augments placed in the room and they all generate audio, you can actually identify the source of each sound. Spatial audio is also something extremely interesting and very important if you want the experience to feel natural.
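Here is a toy sketch of the basic trick: delay and attenuate a mono signal slightly differently for each ear so it appears to come from a given direction. Real spatial audio uses full head-related transfer functions and room acoustics; the head radius, gain curve, and Woodworth delay approximation below are just illustrative.

```python
# Toy sketch of the basic trick behind spatial audio: delay and attenuate the
# signal slightly differently for the two ears so the sound appears to come
# from a given direction. Real systems use full HRTFs and room modelling;
# this only shows interaural time/level differences with illustrative constants.
import numpy as np

FS = 48_000            # sample rate (Hz)
HEAD_RADIUS = 0.0875   # average head radius (m)
SPEED_OF_SOUND = 343.0

def spatialize(mono, azimuth_deg):
    """Return (left, right) channels for a source at the given azimuth.
    0 degrees is straight ahead, positive azimuth is to the listener's right."""
    theta = np.radians(azimuth_deg)
    # Woodworth approximation for the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))
    delay = int(round(abs(itd) * FS))
    # Crude level difference: the far ear is up to ~6 dB quieter.
    near_gain, far_gain = 1.0, 10 ** (-abs(azimuth_deg) / 90 * 6 / 20)

    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    if azimuth_deg >= 0:   # source on the right: left ear hears it later, quieter
        return far_gain * delayed, near_gain * mono
    return near_gain * mono, far_gain * delayed

t = np.arange(FS) / FS
left, right = spatialize(np.sin(2 * np.pi * 440 * t), azimuth_deg=45)
```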
That was world-locked rendering. It’s very specific to AR glasses. The other part is input. Usually when you think about computing, you think about a keyboard, a mouse, a touchscreen, and of course, with AR glasses, you don’t have any of that. You don’t have a keyboard to work with. You certainly don’t have a touchscreen like this. How do you do input? How do you interact with the device? We ended up using four different input modalities: eye tracking, hand tracking, EMG, and voice commands. It turns out no single input modality is the best. The best is actually when you fuse them together. They work in concert, and you get the best user experience. It’s the most natural. How eye tracking works is you have a bunch of infrared illuminators that shine infrared light on the eyeball.
Then you have inward-facing cameras that read the image and figure out the pattern of the infrared illumination. From the image, if you do enough computer vision and machine learning and just keep improving the model and algorithm for years, you actually get really good accuracy figuring out where your eye is looking. That’s great. From eye tracking, we can figure out what object you’re looking at in the physical world. If you have a virtual display, let’s say a panel with some buttons on it, you can actually figure out that the person is looking at a button. Therefore, the button can light up, showing the focus. Eye tracking is great for identifying what the user is paying attention to.
We already talked about EMG. Using EMG, we can figure out different gestures, like a tap, a tap and hold, or even finer-grained gestures like scrolling up and down or swiping sideways. Combining that with eye tracking is really good, because eye tracking gives you the targeting and focus information, but it’s hard to use that information alone to activate something. It knows you’re looking at the button, but when do you want to click the button? You don’t want to use blinking to click the button, trust me. You use EMG. The gesture can trigger the action. EMG is great because your hand can be behind your back or in your pocket. It’s very discreet. You can do it without your neighbors knowing what you’re doing.
Another input modality is hand tracking. Like I said, there are four outward-facing cameras that figure out the geometry of the environment. It turns out those cameras are also pretty good at figuring out where the user’s hands are, the finger positions, all that stuff. You may ask, you already have EMG, why do you need hand tracking? Hand tracking can give you very accurate positioning, and EMG can give you very accurate muscle movement.
The combination of them is actually very complementary. If you think about eye tracking, hand tracking, plus EMG, they’re all low-bandwidth communication channels. You’re probably communicating on the order of a few bits per second, whereas voice commands, especially with the recent advancements in AI, can communicate and issue commands of arbitrary complexity. You use hand tracking, eye tracking, and EMG for relatively mechanical interactions and use voice commands for open-ended and arbitrary interactions. All four together is how we make the AR glasses interact with the human in a natural way. That’s what we ended up doing.
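As a sketch of how such fusion might be wired together, here is a small illustrative example: gaze supplies the target, an EMG gesture supplies the activation, and anything open-ended falls through to a voice assistant. The event names and types are hypothetical, not Orion’s actual interfaces.

```python
# Illustrative sketch of input fusion: gaze picks the target, EMG supplies the
# activation, and open-ended requests fall through to a voice assistant.
# Event names and types here are hypothetical, not Orion's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    target_id: Optional[str]       # UI element the eye tracker says you look at

@dataclass
class EmgGesture:
    kind: str                      # e.g. "pinch", "swipe_left", "pinch_hold"

@dataclass
class VoiceCommand:
    utterance: str

class InputFusion:
    """Gaze targets, EMG activates, voice handles open-ended requests."""
    def __init__(self):
        self.focused: Optional[str] = None

    def on_gaze(self, sample: GazeSample) -> None:
        self.focused = sample.target_id        # gaze is targeting, not activation

    def on_emg(self, gesture: EmgGesture):
        if gesture.kind == "pinch" and self.focused:
            return ("activate", self.focused)  # discreet click on the gazed item
        if gesture.kind.startswith("swipe"):
            return ("scroll", gesture.kind)
        return None

    def on_voice(self, cmd: VoiceCommand):
        return ("assistant", cmd.utterance)    # arbitrary-complexity requests

fusion = InputFusion()
fusion.on_gaze(GazeSample(target_id="reply_button"))
print(fusion.on_emg(EmgGesture(kind="pinch")))      # ('activate', 'reply_button')
print(fusion.on_voice(VoiceCommand("send a thumbs up to Alex")))
```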
Where is AR Going?
Where is AR going? This is my personal interpretation. I have a crystal ball at home; I just peek into it and figure out what’s going on in the next few years. My guess is that the next few years are a critical time. Different form factors will occupy the market; it’s not like one form factor will remain. What I mean is there will be a spectrum of products. The simplest is audio-only glasses, where there’s a microphone, there’s a speaker, and pretty much nothing else. It doesn’t even have a camera. You may have seen some of these products on the market. Still, it’s useful. You can play music. You can interact with AI using voice. Such a device is great because you can build it really lightweight, reducing the weight to the bare minimum, and it’s very comfortable to use. You can get a long battery life because audio doesn’t consume that much battery power. From there, you can take a step up. You can add a camera. With the camera you can take a photo of the scenery, but you can also interact with AI in a richer way.
For example, when I walk around London there are a lot of historical buildings. I want to know what they are. I want to know their stories. If you have smart glasses with a camera, you can actually ask the AI, what is this building? Tell me the history. The AI will use the camera to take a photo, use the photo to identify the building, and it can actually tell the story of the building. That’s great. The next step up is to add a simple display. Not what we just talked about, 3D, fully holographic, but a simple display. It’s still useful. You can get notifications in a discreet fashion. You can get some information. For example, if you’re walking around and want pedestrian navigation directions, today you have to pull your phone out from time to time.
If you have glasses with a very simple display, then you don’t need to pull your phone out. You can get the directions right there. The final frontier is what we were just talking about, fully holographic 3D AR glasses. They’re expensive to make, but I think they’re the future. In the next few years, there will be many devices from all kinds of companies across the entire spectrum. We’ll see who wins the market. That’s my prediction.
Lessons Learned
I want to talk about the lessons learned from doing the project. I think the lessons here apply pretty broadly to any complex project, not just AR glasses. The first thing I learned is: be ambitious, but be ambitious selectively. Meaning, if you are doing work, running a project, producing a product, and you’re just doing the boring work, then the product will come out boring. If you have some ambition, something completely new, that’s what we all want to do. But you do want to minimize the new tech investment. You heard what Mark Zuckerberg said. Orion took 10 years to develop, a whole decade. The reason it took 10 years was there were many NTIs, new technology investigations.
If you have one, you have a reasonable chance. If you have two, the chances get smaller, and smaller, and smaller. It’s a geometric sequence. You don’t want to have too many. Gunpei Yokoi, a Japanese technical luminary, had a famous saying: “Lateral thinking with withered technology”. What does that mean? It means you find a piece of technology that’s well developed, well known, and super cheap to make, but you find surprising and novel applications of that technology, and you build your product that way. That is the best way to build products. When you build your product, focus on the challenges you must solve, not the challenges you want to solve. Also, run lean, run mean, and reuse as much as possible.
A common question I get after developing AR glasses is: did you really need a new operating system? I’m like, let me unpack that. What do you mean by operating system? If you’re talking about the kernel and the device drivers, then no, we actually didn’t. We’re using the Linux kernel. We’re actually running Android. If you think about the operating system as the set of APIs and contracts that application developers use to build their business logic, then the answer is yes. Why? Because there’s no standard API for eye tracking. There’s no standard API for hand tracking. There’s no standard API for EMG input. There’s no standard API to fuse all the inputs together and figure out how to trigger 3D holographic output placed in your world in a world-locked rendered format. We needed to build new APIs. We needed to build a new contract. In that sense, yes, we had to build a new operating system. That’s one.
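Purely as an illustration of what such a contract could look like from an app developer’s seat, here is a hypothetical sketch with invented names and signatures (this is not Orion’s real SDK): input callbacks for gaze, EMG, and voice, plus a way to attach content to world-locked anchors.

```python
# Hypothetical sketch of an app-facing AR contract: input callbacks plus
# world-locked output. All names and signatures are invented for illustration;
# this is not Orion's real SDK.
from abc import ABC, abstractmethod
from typing import Callable

class WorldAnchor(ABC):
    """A pose attached to the physical world (e.g. a spot on your real wall)."""
    @abstractmethod
    def pose(self) -> tuple: ...

class ArRuntime(ABC):
    """Hypothetical contract: input streams plus world-locked placement."""
    @abstractmethod
    def create_anchor_at_gaze(self) -> WorldAnchor: ...

    @abstractmethod
    def attach_panel(self, anchor: WorldAnchor, content: str) -> None: ...

    @abstractmethod
    def on_gaze_focus(self, callback: Callable[[str], None]) -> None: ...

    @abstractmethod
    def on_emg_gesture(self, callback: Callable[[str], None]) -> None: ...

    @abstractmethod
    def on_voice(self, callback: Callable[[str], None]) -> None: ...

def messaging_app(rt: ArRuntime) -> None:
    """Toy app: pin a message panel where the user is looking, reply on a pinch."""
    anchor = rt.create_anchor_at_gaze()
    rt.attach_panel(anchor, "New message from Alex")
    rt.on_emg_gesture(lambda g: print("sending reply") if g == "pinch" else None)
```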
Orion was super complex. We had hundreds of engineers working on it for a long time. Definitely one way to manage that complexity is through testing. You’re probably familiar with the separation of unit testing from integration testing from end-to-end testing. In many ways, that’s an oversimplification. Sometimes you actually want to cut the dependency chain at different layers, so you may end up with a spectrum of integration tests. Sometimes you actually want to do end-to-end testing in different fashions, so you end up with many different end-to-end test suites.
The reason I say manage complexity through testing is that if you can define the component boundaries and the units of work clearly, then you have a way to simplify, and the complexity can be managed. Testing is the only way. After two decades of experience, it’s the only way to really define those boundaries. All the APIs will get out of date. All the documentation will be deprecated and go out of date. Tests are the only source of truth. You can also run stress testing, fault injection, and fuzzing; they’re all very interesting and effective testing techniques.
Finally, if you do run hardware projects, then besides the actual hardware, it’s important to figure out how to use development boards, proxy devices that are not really your hardware but let you run some code and learn something, and emulators. Don’t go cheap and skip investment on this. They will save your life. Iteration speed is extremely important. You may start later than your competitor, but if your competitor iterates at a monthly cycle and you iterate at a weekly or even daily cycle, then you can catch up very quickly. In order to improve iteration speed, keep it simple. Solid infrastructure and engineering quality really help.
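One concrete pattern for the proxy-device and emulator point is to put a thin transport interface in front of the real hardware so most of the stack can run against a fake that replays recorded data. The sketch below uses invented names and a toy gesture detector purely for illustration.

```python
# Minimal sketch of the proxy-device / emulator idea: a thin interface in
# front of the real hardware lets the rest of the stack be exercised against
# a fake that replays recorded data. Names are invented for illustration.
from abc import ABC, abstractmethod
from typing import List

class EmgTransport(ABC):
    @abstractmethod
    def read_frame(self) -> List[float]:
        """Return one frame of raw EMG samples."""

class RealBand(EmgTransport):
    def read_frame(self) -> List[float]:
        raise NotImplementedError("talks to the physical band over the radio link")

class FakeBand(EmgTransport):
    """Replays a canned recording so tests and emulators need no hardware."""
    def __init__(self, recording: List[List[float]]):
        self._frames = iter(recording)

    def read_frame(self) -> List[float]:
        return next(self._frames)

def detect_pinch(transport: EmgTransport, threshold: float = 50.0) -> bool:
    """Toy gesture detector; it runs the same against real or fake hardware."""
    return max(transport.read_frame()) > threshold

def test_detect_pinch():
    assert detect_pinch(FakeBand([[60.0, 5.0, 3.0]]))
    assert not detect_pinch(FakeBand([[1.0, 2.0, 3.0]]))
```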
A lot of the time, the reason you cannot iterate fast is that we landed some code and now the product is broken. You have an engineering quality problem. Keep the engineering quality high. Do not land code until it’s actually high quality. Use testing as a guardrail. Talking about engineering quality, I can tell you, prototyping code always finds its way into production. Don’t pretend, “I’m just writing some prototyping code. Don’t worry about testing. Don’t worry about the code coverage. Don’t worry about API cleanliness. It’s fine”. No, it’s not. It will become production code and you will regret it, trust me. The decision to use more than one code repo or more than one build system is a big deal. Even if at the beginning you don’t feel that way, it will always come back and bite you.
Language choice really matters. I myself spent over two decades on C++, but no matter what experience you have with C++, you always run into problems with memory safety and thread safety. Always. No exception. I’m not saying don’t use C++. I’m just saying, if you do, plan accordingly. Plan to spend time debugging the nasty problems. If you use languages like Python, Java, JavaScript, or TypeScript, chances are you will have fewer problems with memory safety and thread safety, but performance might be an issue you need to pay attention to. Again, use them, just be aware of the strengths and weaknesses, and plan to spend time on performance. Rust is an interesting programming language. If you’re not familiar with Rust, I suggest you spend a little bit of time looking into it. You don’t necessarily have to use it, but just know what it’s capable of. It gives you bare-metal performance like C++, but at compile time it gets rid of a whole class of the safety problems C++ has.
The oversimplified summary is: assuming you’re not using the unsafe keyword, if the Rust code compiles, then you don’t have undefined behavior in your code. That’s great. Also, regardless of what you do, always invest in static analysis, linters, profilers, all of that. Hardware-software co-design is important, but it’s also very expensive. Think twice before you decide to introduce your own hardware component. If you can build an app, if you can stay software only, do it that way. It’s much cheaper. You can iterate much faster. Chances are you will build an even better product experience. If you have to introduce hardware, do that, but make sure your hardware team and your software team talk to each other every day. Doing software-hardware co-design does move the needle on some seemingly immovable barriers.
For example, with Orion, there’s no way we could have made this a hardware-only project. There’s no way we could have made this a software-only project. The only way was to grind on both hardware and software, co-design them so they benefit from each other’s progress, and make it work. Performance is a hard problem, always. Latency, throughput. What you want to do is systematically track performance in a dashboard. You want to know the performance of your system. You want to build strong tooling and infra to really understand the performance characteristics of your code and your system. If you see a performance problem, you need to put dedicated focus on it. You may actually have to pause everything else and let the team focus on performance for a while.
Technical leadership. Many of you are already in technical leadership positions. The younger people among you probably aspire to become technical leaders later. From my two decades of experience, these four points are the most important. If you set the direction wrong and let the team run in the wrong direction, the faster they run, the further they get from the goal. Set the direction right. Focus is extremely important, especially when you have a large team. Every person, every team, has their own favorite feature. It’s human nature for them to do their favorite stuff instead of doing what’s important.
If you are towards the end of the project, and your product crashes every hour, and there’s a team spending their weekends on the Easter egg they built in there, you have a problem. You need the whole team to focus on the most important features and work. Clarity, especially clarity in communication. We’re all experienced professionals, but still, sometimes we don’t communicate clearly: sometimes, or often, or almost always. Try to write short documents.
If you see a 30-page document, think about how to reduce it to three pages. If you see a 3-page document, think about how to reduce it to one. Make sure people who read the document know exactly what’s happening. Make sure decisions are communicated clearly. The last one: be willing to make tradeoffs. We’re always in a fog of war. There’s always incomplete information. There are always conflicting priorities. If you don’t make tradeoff decisions, your product becomes a mess. Be willing to make tradeoff choices.