
Intelligent to a Fault: When AI Screws Up, You Might Still Be to Blame

Interactions between people and artificially intelligent machines pose tricky questions about liability and accountability, according to a legal expert


Artificial intelligence is already making significant inroads in taking over mundane, time-consuming tasks many humans would rather not do. The responsibilities and consequences of handing over work to AI vary greatly, though; some autonomous systems recommend music or movies; others recommend sentences in court. Even more advanced AI systems will increasingly control vehicles on crowded city streets, raising questions about safety—and about liability, when the inevitable accidents occur.

But philosophical arguments over AI’s existential threats to humanity are often far removed from the reality of actually building and using the technology in question. Deep learning, machine vision, natural language processing—despite all that has been written and discussed about these and other aspects of artificial intelligence, AI is still at a relatively early stage in its development. Pundits argue about the dangers of autonomous, self-aware robots run amok, even as computer scientists puzzle over how to write machine-vision algorithms that can tell the difference between an image of a turtle and that of a rifle.

Still, it is obviously important to think through how society will manage AI before it becomes a truly pervasive force in modern life. Researchers, students and alumni at Harvard University’s Kennedy School of Government launched The Future Society for that very purpose in 2014, with the goal of stimulating international conversation about how to govern emerging technologies—especially AI. Scientific American spoke with Nicolas Economou, a senior advisor to The Future Society’s Artificial Intelligence Initiative and CEO of H5, a company that makes software to aid law firms with pretrial analysis of electronic documents, e-mails and databases—also known as electronic discovery. Economou talked about how humans might be considered liable (even if a machine is calling the shots), and about what history tells us regarding society’s obligation to make use of new technologies once they have been proved to deliver benefits such as improved safety.


[An edited transcript of the conversation follows.]

What are your main concerns about AI?

I’m a political scientist by training as well as an entrepreneur who has advocated for the progressive adoption of AI in the legal system. So I’m a big believer that AI can be a force for good. But I think that it needs to be governed, because it does pose risks. People talk about risk in different ways, but I’m most interested in the risk that involves surrendering to machines decisions that affect our rights, liberty or freedom of opportunity. We make decisions not just based on rational thinking but also values, ethics, morality, empathy and a sense of right and wrong—all things that machines don’t inherently have. In addition, people can be held accountable for their decisions in ways that machines cannot.

Who should be held accountable for AI’s decisions?

That gets at the issue of competence. In entirely autonomous AI systems, you could say: the manufacturer. But most AI today is not autonomous; it relies on a human operator. If that person is not knowledgeable enough to use AI properly when making important decisions in, say, medicine, the law or financial services, should the person be held accountable for errors? Consider the [2016] case State v. Loomis, where a judge relied in part on a black-box, secret algorithm to assess whether a defendant was at risk for recidivism. The algorithm assessed [Loomis] to be a high risk, but the methodology the AI used to produce that assessment was not disclosed to the court or the defendant. The judge factored that recommendation into a six-year prison sentence. The U.S. Supreme Court declined to hear that case, so it is now the law of the land. You can now be given a long prison sentence based, in part, on an AI algorithm’s assessment, without having much recourse. Was the judge competent to understand whether the algorithmic assessment was adequate and backed by sound empirical evidence? The answer is probably no, because judges and lawyers are not usually trained scientists.

How does one develop a transparent AI system that most people can understand?

It’s true that AI could have all the transparency in the world and we as citizens still couldn’t make heads or tails of what it is doing. Maybe scientists can understand it, but to be empowered as citizens we need to know whether something is going to work in the real world. One model to consider is the automobile. The average person could watch a car being built from beginning to end and still not know whether the car is safe to drive. Instead, you trust that a car is safe because you consult the ratings provided by the Insurance Institute for Highway Safety, which crashes cars every day to determine how safe they are. [Therefore], as a citizen I now have information that I can use to assess a very complicated sociotechnical system that involves technology and human intelligence. I have very simple metrics that tell me whether a car is safe. While transparency into algorithms is helpful, the key is knowing whether they are effective in the real-world applications for which they are intended, often in the hands of human operators.

When people are informed about AI risks, does that shift accountability or liability to the person using the AI?

Liability is a huge legal question. With self-driving vehicles, for example, you can look at how much control the driver has over the car. If the driver has none, then you’d expect the manufacturer or the other companies involved in putting together the car to have more liability and responsibility. It gets more complicated when the driver has more control, and you might look at who made the decision that led to a crash. There are a couple of interesting questions related to liability. Sticking with cars as the example, let’s presume [hypothetically] that if everyone were being driven in an autonomous car, we’d reduce traffic-related deaths by 20,000 per year. If that were the case, then the public policy goal would be to encourage people to use autonomous cars. But at the same time people are scared of the technology. So you could imagine a couple of ways of supporting your policy goal in order to save those 20,000 lives. One might be to design autonomous vehicles to prioritize the safety of their occupants over pedestrians and other drivers. In other words, you’re safer inside the car than outside of it. This would encourage people to overcome their fear of autonomous cars, thus supporting your policy goal. But society would then be assigning higher value to some lives—those inside those vehicles—than to others.

Another way to encourage people to use self-driving cars is, essentially, to argue that it’s irresponsible to drive a conventional car if you know that an AI-driven vehicle is safer. There’s a case from the 1930s known as the T. J. Hooper case, in which two barges were lost in a storm in part because the tugboats towing them were not equipped with radios. The court held that when a manifestly effective new technology becomes available, there is an obligation to adopt it as a precaution. Eventually, could a driver be held more liable for choosing to drive rather than ride in a statistically safer self-driving car?

How can public policy be developed for AI when the technology itself is still evolving and being defined?

I’m not sure it’s very helpful to define AI. We still don’t have a universally accepted definition of what intelligence is, so it would be hard to do that for artificial intelligence. The norms that govern AI use should be broad enough to accommodate innovations wherever they come from, but narrow enough to provide meaningful constraints on how AI is used and how it affects people. An effective process would have four layers. You start with values: What do we want AI to do for us? From there you go to ethical principles: How should AI go about doing its work? Then you can form public policy recommendations, and finally you look at the actual technical controls needed to implement that policy.