Former Uber self-driving chief crashes his Tesla on FSD, exposes supervision problem

March 17, 2026

Raffi Krikorian, Mozilla’s CTO and the former head of Uber’s self-driving car division, totaled his Tesla Model X while using “Full Self-Driving” on a residential street. His kids were in the back seat.

In a new essay published in The Atlantic, Krikorian breaks down the accident and offers what might be the most informed critique yet of the fundamental problem with Tesla’s approach to “supervised” autonomy — from someone who literally built self-driving systems for a living.

Krikorian describes a Sunday drive he had done hundreds of times — taking his son to a Boy Scouts meeting through Bay Area residential streets. His Tesla was in FSD mode, and the system was driving without issue until it suddenly wasn’t.

As the Model X entered a turn, FSD appeared to lose its bearings. Krikorian describes the wheel jerking erratically and the car decelerating without warning. He grabbed the steering wheel but couldn't recover in time. The car slammed into a concrete wall and was totaled. Krikorian suffered a concussion, a stiff neck, and days of headaches. His children were unharmed.


What makes this account particularly striking is Krikorian’s background. At Uber’s Advanced Technologies Center, he ran the team building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails. During his two years leading the division, Uber’s early pilot programs had zero injuries.

Despite all of that expertise, FSD still got him.

He writes that he started using FSD on highways, where clear lane markings and predictable traffic play to the system's strengths. Then he tried it on local roads; it worked well, and it became habit. Before the crash, his hands were on the wheel. He was doing exactly what Tesla asks drivers to do: monitor, not steer. But as he puts it, the system had conditioned him to trust it.

After the crash, his name was on the insurance report. Not Tesla's. That is how every FSD crash works under the current legal framework: Tesla's system is classified as Level 2 driver assistance, meaning the driver is responsible at all times.

Krikorian also raises a troubling point about Tesla’s data practices. The car constantly logs the driver’s hand position, reaction time, and eye tracking, and Tesla has used this data after crashes to shift blame onto drivers. Meanwhile, drivers who request their own data say they have received only fragments. In the landmark Florida wrongful-death case that resulted in a $243 million verdict, plaintiffs had to hire a hacker to recover critical evidence from the crashed vehicle’s computer chip because Tesla claimed the data couldn’t be found.

The most valuable part of Krikorian’s essay is his analysis of why “supervised” self-driving is fundamentally broken, a topic we’ve covered extensively at Electrek.

His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel pointless. As he puts it, an unreliable machine keeps you alert and a perfect machine needs no oversight, but one that works almost perfectly creates a trap: drivers trust it just enough to stop paying attention.

The research backs this up. Psychologists call it the "vigilance decrement": monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need 5 to 8 seconds to mentally reengage after an automated system hands control back. Emergencies unfold faster than that.

Krikorian cites an Insurance Institute for Highway Safety study showing that after just one month of using adaptive cruise control, drivers were more than six times as likely to look at their phones. Tesla’s own website warns FSD users not to become complacent, but the system’s smooth performance actively trains that complacency.

He points to two well-known crashes to illustrate the impossible math. In the 2018 Mountain View accident that killed Apple engineer Walter Huang, the driver had six seconds before his Tesla steered into a concrete median. He never touched the wheel. In the 2018 Uber crash in Tempe, Arizona, sensors detected a pedestrian with 5.6 seconds of warning, but the safety driver looked up with less than a second remaining.

In Krikorian’s own case, he did take action, but he was asked to snap from passenger back to pilot in a fraction of a second, overriding months of conditioning. The logs show he turned the wheel. They don’t show the impossible math of that transition.

The pattern Krikorian describes should sound familiar to anyone who has followed Tesla’s FSD controversies: condition the driver to rely on the system, erode their vigilance through months of smooth performance, then point to the terms of service and blame them when something breaks. When FSD works, Tesla gets credit. When it doesn’t, the driver gets blamed.

Krikorian also contrasts Tesla’s approach with a notable example of accountability from a competitor. In July 2025, BYD announced it would pay for damage caused by crashes involving its autonomous parking feature — no insurance claim required, no impact on the driver’s record. It’s a limited example, but it demonstrates that shared liability between automaker and driver is a choice, not an impossibility.

We’ve been saying this for years: Tesla’s FSD is getting more dangerous as it gets better. The smoother it gets, the more it lulls drivers into a false sense of security — and the harder it becomes to snap back when the system inevitably makes a mistake.

What makes Krikorian’s account so compelling is that he’s not some random Tesla critic. He built self-driving cars at Uber. He trained safety drivers on intervention protocols. He understood the risk intellectually, and he still got conditioned into complacency. If someone with that level of expertise can get caught, the average Tesla owner doesn’t stand a chance.

The “supervised” label is a legal shield, not a safety solution. Tesla knows that humans cannot reliably supervise a system that works 99% of the time — the research is clear, and Krikorian lays it out plainly. Yet the company continues to sell “Full Self-Driving” while pointing to the fine print when things go wrong.

With NHTSA currently investigating 80+ FSD incidents covering 2.88 million vehicles, and a growing flood of lawsuits following the $243 million Florida verdict, the pressure on Tesla to actually share liability for its system’s failures is mounting. BYD showed it’s possible. The question is whether Tesla will ever choose accountability over blame-shifting.

  
