Driverless Cars Will Face Moral Dilemmas

Autonomous vehicles may put people in life-or-death situations. Will the outcomes be decided by ethics or data?

A self-driving car carrying a family of four on a rural two-lane highway spots a bouncing ball ahead. As the vehicle approaches, a child runs out to retrieve the ball. Should the car risk its passengers’ lives by swerving to the side—where the edge of the road meets a steep cliff? Or should the car continue on its path, ensuring its passengers’ safety at the child’s expense? This scenario and many others pose moral and ethical dilemmas that carmakers, car buyers and regulators must address before vehicles are given full autonomy, according to a study published Thursday in Science.

The study highlights paradoxes facing carmakers, car buyers and regulators as driverless technology accelerates. Most of the 1,928 research participants in the Science report indicated that they believed vehicles should be programmed to crash into something rather than run over pedestrians, even if that meant killing the vehicle’s passengers. “The algorithms that control [autonomous vehicles] will need to embed moral principles guiding their decisions in situations of unavoidable harm,” according to the researchers at Massachusetts Institute of Technology, the University of Oregon and France’s Toulouse School of Economics for the National Center for Scientific Research.

Yet many of the same study participants balked at the idea of buying such a vehicle, preferring to ride in a driverless car that prioritizes their own safety above that of pedestrians. The researchers concluded that if lawmakers were to prioritize pedestrians over passengers when regulating self-driving vehicles, people would be less likely to buy those vehicles. A shrinking market for driverless cars would slow their development despite research showing that autonomous vehicles could potentially reduce traffic, cut pollution and save thousands of lives each year—human error contributes to 90 percent of all traffic accidents.

The researchers based their survey queries largely on an ethics thought experiment known as “the trolley problem,” according to Azim Shariff, an assistant professor of psychology at the University of Oregon and director of the Culture and Morality Lab at the University of California, Irvine. There are several variations on the trolley problem but they mostly pose hypothetical scenarios in which a trolley is on course to run over a group of people. A person watching the events unfold must choose between an intervention that sacrifices one person for the good of the group or protects an individual at the expense of the group. Shariff conducted the research along with Jean-François Bonnefon, a Toulouse School of Economics psychological scientist, and Iyad Rahwan, an associate professor in the MIT Media Lab.

Some observers say a key flaw in the Science study is that it does not take into account how the artificial intelligence being developed to control driverless vehicles actually works. “This question of ethics has become a popular topic with people who don’t work on the technology,” says Ragunathan “Raj” Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University’s CyLab and a veteran of the university’s efforts to develop autonomous vehicles, including the Boss SUV that won the 2007 DARPA Urban Challenge. “AI does not have the same cognitive capabilities that we as humans have,” he adds. Rajkumar was not involved in the Science study.

Instead, autonomous vehicles make decisions based on speed, weather, road conditions, distance and other data gathered by a variety of sensors, including cameras, lidar and radar. A driverless car will calculate a course of action based on how fast it is traveling as well as the speed of an object in its path, for example. The main challenge is gathering and processing the necessary data quickly enough to avoid dangerous circumstances in the first place. Rajkumar acknowledges that this will not always be possible, but he is skeptical that in such cases it will come down to the vehicle essentially deciding who lives and who dies. “The bigger concern I have about autonomous vehicles is the ability to keep them protected from hackers who might want to take over their controls while someone is onboard,” he adds.
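To make the contrast with trolley-style reasoning concrete, the kind of decision Rajkumar describes is closer to a kinematic check than a moral calculation: given the car’s speed and the distance and speed of an object ahead, can braking alone resolve the situation? The sketch below is a hypothetical illustration of that idea only, not code from any production system; the deceleration rate, reaction time and function names are assumptions chosen to make the example run.

```python
# Illustrative sketch only -- not taken from any real vehicle's software.
# Decides whether braking alone can avoid an object ahead, using the kind
# of speed-and-distance data the article describes.

def stopping_distance(speed_mps: float, reaction_time_s: float = 0.1,
                      deceleration_mps2: float = 7.0) -> float:
    """Distance covered while reacting plus distance needed to brake to a stop.

    The reaction time and deceleration are assumed values for illustration.
    """
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * deceleration_mps2)


def choose_action(own_speed_mps: float, object_distance_m: float,
                  object_speed_mps: float) -> str:
    """Pick a course of action from the closing speed and the available gap."""
    closing_speed = own_speed_mps - object_speed_mps
    if closing_speed <= 0:
        return "maintain"          # object is moving away; no conflict
    if stopping_distance(closing_speed) < object_distance_m:
        return "brake"             # braking alone closes out the hazard
    return "brake_and_evade"       # gap too short; an evasive maneuver is also needed


if __name__ == "__main__":
    # Car traveling 25 m/s (about 56 mph) with a stationary object 40 m ahead.
    print(choose_action(25.0, 40.0, 0.0))
```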

Shariff and his colleagues likewise acknowledge that their discussion of driverless vehicle moral dilemmas is a work in progress. They launched a Web site on Thursday called Moral Machine to help gather more information about how people would prefer autonomous cars to react in different scenarios where passenger and pedestrian safety are at odds. The site lets participants compare their responses and even offers the ability to construct new scenarios by tinkering with the number and type of people involved and whether they are obeying traffic laws at the time of the accident.