
What NASA Could Teach Tesla about Autopilot’s Limits

Decades of research have warned about the limits of human attention in automated cockpits

Tesla Model S.

Tesla Motors says the Autopilot system for its Model S sedan “relieves drivers of the most tedious and potentially dangerous aspects of road travel.” The second part of that promise was put in doubt by the fatal crash of a Model S earlier this year, when its Autopilot system failed to recognize a tractor-trailer turning in front of the vehicle. Tesla says the driver, Joshua Brown, also failed to notice the trailer in time to prevent a collision. The result? In Tesla’s own words, “the brake was not applied”—and the car plowed under the trailer at full speed, killing Brown.

Since news of Brown’s death broke in June, the public has been debating where the fault lies: with the driver, the company or the automation technology itself. But NASA has been studying the psychological effects of automation in cockpits for decades—and this body of research suggests that a combination of all three factors may be responsible. “If you think about the functionality of a cockpit, that could mean in an airplane, a space shuttle or a car,” says Danette Allen, director of NASA Langley Research Center’s Autonomy Incubator. “NASA, perhaps more than any other organization, has been thinking about autonomy and automation for a long time.”

Stephen Casner, a research psychologist in NASA’s Human Systems Integration Division, puts it more bluntly: “News flash: Cars in 2017 equal airplanes in 1983.”


Casner is not just referring to basic mechanisms that keep the nose of the plane level, similar to cruise control in a car. He means, in his words, “the full package”: true autonomous flight, from just after takeoff up to (and even including) landing. “The first Madonna album had not come out yet when we had this technology,” Casner says. “And we are, 33 years later, having this very same conversation about cars.”

Here are three things about how humans and automated vehicles behave together that NASA has known for years—and to which Tesla may need to pay more attention.

The Limits of Being “On the Loop”

People often use the phrase “in the loop” to describe how connected someone is (or is not) to a decision-making process. Fewer people know that this “control loop” has a specific name: Observe, Orient, Decide, Act (OODA). The framework was originally devised by U.S. Air Force colonel John Boyd, and being “in” or “out” of the OODA loop has a straightforward meaning. But as automation becomes more prevalent in everyday life, an understanding of how humans behave in an in-between state—known as “on the loop”—will become more important.

Missy Cummings, a former Navy fighter pilot and director of Duke University’s Humans and Autonomy Laboratory, defines “on the loop” as human supervisory control: "intermittent human operator interaction with a remote, automated system in order to manage a controlled process or task environment.” Air traffic controllers, for example, are on the loop of the commercial planes flying in their airspace. And thanks to increasingly sophisticated cockpit automation, most of the pilots are, too.

Tesla compares Autopilot with this kind of on-the-loop aviation, saying it “functions like the systems that airplane pilots use when conditions are clear.” But there’s a problem with that comparison, Casner says: “An airplane is eight miles high in the sky.” If anything goes wrong, a pilot usually has multiple minutes—not to mention emergency checklists, precharted hazards and the help of the crew—in which to transition back into the loop of control. (For more on this, see Steven Shladover’s article, “What ‘Self-Driving’ Cars Will Really Look Like,” from the June 2016 Scientific American.)

Automobile drivers, for obvious reasons, often have much less time to react. “When something pops up in front of your car, you have one second,” Casner says. “You think of a Top Gun pilot needing to have lightning-fast reflexes? Well, an ordinary driver needs to be even faster.”

In other words, the everyday driving environment affords so little margin for error that any distinction between “on” and “in” the loop can quickly become moot. Tesla acknowledges this by constraining the circumstances in which a driver can engage Autopilot: “clear lane lines, a relatively constant speed, a sense of the cars around you and a map of the area you’re traveling through,” according to MIT Technology Review. But Brown’s death suggests that, even within this seemingly conservative envelope, driving “on the loop” may be uniquely unforgiving.
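To make the on-the-loop distinction concrete, the toy Python sketch below contrasts an in-the-loop operator, who is already making every decision, with an on-the-loop supervisor, who must first re-enter the control loop before acting. It is purely illustrative: the reaction-time and re-engagement numbers are assumptions chosen to echo the contrast just described between a pilot’s multiple minutes and a driver’s single second, and nothing in it is drawn from Tesla’s or NASA’s systems.

# Illustrative sketch only: toy numbers, not data from any real system.
import random

REACTION_TIME = 1.0      # seconds an engaged, in-the-loop operator needs to act (assumed)
REENGAGEMENT_TIME = 4.0  # extra seconds an on-the-loop supervisor needs to rebuild awareness (assumed)
TIME_AVAILABLE = 1.0     # seconds before a sudden road hazard becomes a collision (assumed)

def response_time(mode):
    """Seconds the human needs to respond to a surprise hazard."""
    if mode == "in_the_loop":
        return REACTION_TIME
    # On the loop: first notice that the automation needs help, then re-orient,
    # then act; effectively running Observe-Orient-Decide-Act from scratch.
    return REACTION_TIME + REENGAGEMENT_TIME

def simulate(mode, trials=10_000):
    """Fraction of surprise hazards handled before the deadline."""
    handled = 0
    for _ in range(trials):
        needed = random.gauss(response_time(mode), 0.3)  # person-to-person variability
        if needed <= TIME_AVAILABLE:
            handled += 1
    return handled / trials

if __name__ == "__main__":
    for mode in ("in_the_loop", "on_the_loop"):
        print(f"{mode}: {simulate(mode):.1%} of hazards handled in time")

Under these assumed numbers, the engaged operator catches roughly half of the one-second hazards and the supervisor catches essentially none, which is the gap the aviation research keeps pointing to.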

The Limits of Attention

Of course, ordinary human negligence can turn even the safest automation deadly. That’s why Tesla says that Autopilot “makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected.”

But NASA has been down this road before, too. In studies of highly automated cockpits, NASA researchers documented a peculiar psychological pattern: The more foolproof the automation’s performance becomes, the harder it is for an on-the-loop supervisor to monitor it. “What we heard from pilots is that they had trouble following along [with the automation],” Casner says. “If you’re sitting there watching the system and it’s doing great, it’s very tiring.” In fact, it’s extremely difficult for humans to accurately monitor a repetitive process for long periods of time. This so-called “vigilance decrement” was first identified and measured in 1948 by psychologist Norman Mackworth, who asked British radar operators to spend two hours watching for errors in the sweep of a rigged analog clock. Mackworth found that the radar operators’ accuracy plummeted after 30 minutes; more recent versions of the experiment have documented similar vigilance decrements after just 15 minutes.
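A rough sense of what a vigilance decrement looks like can be conveyed with the small Python sketch below, which mimics a Mackworth-style watchkeeping session. The exponential decline in detection probability is an assumption chosen only to echo the reported pattern of a marked drop within the first 15 to 30 minutes; it is not Mackworth’s data or model.

# Illustrative sketch only: an assumed decay curve, not Mackworth's data.
import math
import random

SESSION_MINUTES = 120       # Mackworth's sessions ran about two hours
SIGNALS_PER_MINUTE = 0.2    # rate of rare "double jump" signals (assumed)
BASELINE_DETECTION = 0.95   # detection probability when fully fresh (assumed)
DECAY_PER_MINUTE = 0.02     # assumed vigilance-decay constant

def detection_probability(minute):
    """Chance the observer catches a signal after `minute` minutes on task."""
    return BASELINE_DETECTION * math.exp(-DECAY_PER_MINUTE * minute)

def simulate_session(seed=0):
    random.seed(seed)
    for start in range(0, SESSION_MINUTES, 30):
        hits = misses = 0
        for minute in range(start, start + 30):
            if random.random() < SIGNALS_PER_MINUTE:    # a signal occurs this minute
                if random.random() < detection_probability(minute):
                    hits += 1
                else:
                    misses += 1
        total = hits + misses
        rate = hits / total if total else float("nan")
        print(f"minutes {start:3d}-{start + 30:3d}: detection rate ~{rate:.0%}")

if __name__ == "__main__":
    simulate_session()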

These findings expose a contradiction in systems like Tesla’s Autopilot. The better they work, the more they may encourage us to zone out—but in order to ensure their safe operation they require continuous attention. Even if Joshua Brown was not watching Harry Potter behind the wheel, his own psychology may still have conspired against him.

According to some researchers, this potentially dangerous contradiction is baked into the demand for self-driving cars themselves. “No one is going to buy a partially-automated car [like Tesla’s Model S] just so they can monitor the automation,” says Edwin Hutchins, a MacArthur Fellow and cognitive scientist who recently co-authored a paper on self-driving cars with Casner and design expert Donald Norman. “People are already eating, applying makeup, talking on the phone and fiddling with the entertainment system when they should be paying attention to the road,” Hutchins explains. “They’re going to buy [self-driving cars] so that they can do more of that stuff, not less.”

Automation and Autonomy: Not the Same Thing

Tesla’s approach to developing self-driving cars relies on an assumption that incremental advances in automation will one day culminate in “fully driverless cars.” The National Highway Traffic Safety Administration (NHTSA) tacitly endorses this assumption in its four-level classification scheme for vehicle automation: Level 1 refers to “invisible” driver assistance like antilock brakes with electronic stability control. Level 2 applies to cars that combine two or more level 1 systems; a common example is adaptive cruise control combined with lane centering. Level 3 covers “Limited Self-Driving Automation” in cars like the Model S, where “the driver is expected to be available for occasional control but with sufficiently comfortable transition time.”

Level 3, warns Hutchins, “is where the problems are going to be”—but not because partial automation is inherently unsafe. Instead, he says, the danger lies in assuming that “Full Self-Driving Automation”—level 4 on NHTSA’s scale—is a logical extension of level 3. “The NHTSA automation levels encourage people to think these are steps on the same path,” Hutchins explains. “I think [level 3 automation] is actually going in a somewhat different direction.”

Technology disruptors like Google and traditional carmakers like Ford and Volvo seem to agree. Both groups appear determined to sidestep level 3 automation entirely, because of its potential for inviting “mode confusion” in ambiguous situations. Mode confusion was made tragically famous by the Air France 447 disaster, in which pilots were unaware that the plane’s fly-by-wire safety system had disengaged itself. (A less grim illustration of mode confusion can be seen in a scene from Anchorman 2, in which Ron Burgundy grossly misunderstands the capabilities of cruise control.)

Given the state of research into automated vehicle operation—and the ongoing NHTSA investigation of Brown’s crash—it is premature to fault either Tesla or Brown individually. And although any automated system that can log more than 200 million kilometers of driving without a fatality—as Autopilot has—is an amazing achievement, level 3 automation may simply possess properties that make it unsuitable for cars, even as it functions reliably in aviation and other contexts. But while understanding the psychological pitfalls of automation cannot bring Brown back, one hopes it might help prevent more deaths like his as self-driving cars continue to evolve.

John Pavlus is a writer and filmmaker focusing on science, technology and design. His work has appeared in Bloomberg Businessweek, MIT Technology Review, and The Best American Science and Nature Writing series. He lives in Portland, Ore.
