The Inner Lives of Robots: An Interview with Filmmaker Alex Garland

The writer–director of Ex Machina talks robot consciousness, mass surveillance and trying to wrap his head around the multiverse

Like self-replicating machines, robot movies are taking over Hollywood. Yet Ex Machina, which opened last Friday in New York and Los Angeles, is so smart and stylish that it stands out among the swarm. In the film, a 26-year-old coder named Caleb Smith (Domhnall Gleeson), who works for the online search giant Blue Book, wins a chance to spend a week with the company founder, Nathan Bateman (Oscar Isaac), at his compound in Alaska. A week with the boss might not sound like the greatest prize, but Nathan's compound, filmed among the fjords and glaciers of Norway, is like the ultimate eco-lodge, and Nathan has something to show Caleb that any geek would sign the mother of all nondisclosure agreements to see—a sentient humanoid robot named Ava.

Caleb is there to test Ava in a series of interviews. It’s up to him to decide whether Ava is indeed a sentient being. And if she fails the test? She gets an existential firmware upgrade. Ava—played by the show-stealing Swedish actress Alicia Vikander—seems to know this, so her sessions with Caleb quickly turn into seductive cerebral combat. Meanwhile, Caleb comes to see Nathan as an abusive, binge-drinking megalomaniac. As you might guess, this triangle falls apart pretty quickly. 

Ex Machina was written and directed by Alex Garland, who also wrote The Beach, 28 Days Later, and Sunshine, among other films. It is his directorial debut. Last Friday, he was in town from London for the New York premiere, and I sat down with him at the Bowery Hotel to talk about this brainy, highly enjoyable movie.


Scientific American: There are a lot of big, up-to-date ideas in this film. Can you tell me about the research process?

Alex Garland: I'm just interested in science, and I try to keep track of what's going on and get my head around it—inflation, the multiverse, whatever. It's very hard for me, because I don't have a scientific background, and I wasn’t any good at science at school, but all of that stuff I just find incredibly attractive and fascinating.

In this particular instance I have a friend who's very smart, and his real area of interest is neuroscience. He belongs squarely to the school of thought, argued by people like Roger Penrose, that says there's something about consciousness we don't understand, and that when we do understand it we'll find it precludes the possibility of a machine being sentient. On a completely instinctive and intuitive level I didn't like that. I have to be careful here, because the idea of me arguing against Roger Penrose is ridiculous—and I'm fully aware of that, just to be clear. So I started reading about AI in order to keep pace with this smarter and better-educated friend of mine.

Eventually I came across a book written by a guy called Murray Shanahan, a professor of cognitive robotics at Imperial College, back in London. In the book [Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds] he says, “Look, let's approach consciousness in a kind of physicalist way and just keep moving forwards until we maybe bump into a brick wall of metaphysics. But I don't think it's going to happen.” And I found there were a bunch of other people arguing the same kind of thing. It was while trying to understand the arguments these people were putting forward that the idea for this film arrived: I felt there was a way of dramatizing some of those arguments within what you hope is a compelling narrative.

SA: You sent the script to Murray Shanahan for vetting, right?

Garland: Yeah, and a couple of other people too. I worked on a film called Sunshine, which flowed directly from an article I read in Scientific American. I don't remember what it was called, but it was about the long-term future of the universe—entropy stuff, heat death and things like that. The key point in it didn't actually relate to the sun; I made [the film] about the sun for other reasons. It was more about the inevitability of extinction. So an idea arrived about a man who has been tasked with saving mankind, via some spurious notion of restarting the sun, and who decides that it's unethical to delay extinction, because you're just shifting the horror of it from you and your generation to your great-great-great-great-grandchildren. I never got the science of Sunshine checked in the way I should have, and I always regretted it. It was hazy and fuzzy, and it didn't stand up to any kind of scrutiny. And if I'd written it properly then it would have, and it would have been better, right? In this film I didn't want to make the same mistake, so I always wanted to find people who could challenge it.

SA: What was that process of vetting the script like? Can you give me an example of something that he pushed back on that you then changed?

Garland: There was one example, for instance, relating to the nature of the Turing test. When I showed this script to one of my challengers, he pointed out that the Turing test needs blind controls in order to function properly, and that led to me writing a speech where someone was saying, “This is not a Turing test. She's going to pass the Turing test. This is what happens after the Turing test.” And we're now talking about whether she's conscious and self-aware, not whether she can trick you down a phone line or behind a closed door.

I wanted to get this stuff right, because this is a film of ideas. If the ideas are badly or incorrectly expressed, the film is crap and it ceases to have any value in the terms I'm interested in.

One of the interesting things that flows from being unable to keep up with what is happening in science—where the multiverse is concerned, for example—is that I can understand aspects of it, and I can have my own internal sense of it, but I can't have a meaningful debate with someone about it, because I'm not equipped to. Sorry, I've gone off down this tangent again. I think I'm probably quite self-conscious, while I'm talking to you, about wanting to be qualified and reasonable in what I'm saying.

SA: Well, I’m a layperson myself, a journalist. I interview people and translate their ideas.

Garland: That's what I see my job as too. That's exactly how I perceived my job with this film. I think that there are areas of science that are floating away from the population in terms of their ability to access it. And there's a kind of vacuum appearing between the conversations that some people are having and the laypeople. And it's people like your magazine that are trying to bridge that gap, and the gap's getting harder and harder to bridge. I wanted to join up with that group of people in trying to do it as best I can.

SA: Did anybody ever try to tell you that you needed to tone down the intellectual level of the film because you were going to lose people?

Garland: No, because by the time I got round to making this film—I've been working on it now for about 15 years—I knew that in order to tell the story at the level I felt it had to be told, you have to make it cheaply. So we shot it for a budget that allowed that conversation. My issue was never to do with dumbing down; my issue was to do with my ability to understand and to reasonably convey these points.

SA: One of those points involves a specific anxiety of our time, which is about surveillance. Ava is trained by surveillance on the entire human race, basically.

Garland: Yeah, I suspect that the current rash of AI movies has got nothing to do with AIs; it's actually to do with anxiety about surveillance. And that stuff was changing a lot while I was working on the film. The Snowden revelations happened after we were in production. It's weird: in the financing of the film there was one person who queried something and said, "I'm not sure people will buy this." And the thing they didn't buy was that your mobile phone could be used to gather this data about you. And I was thinking, what? You don't buy that a mobile phone will do that, but you do buy this walking, talking robot? But then the Snowden stuff landed, and it turned out to be bigger than we could have even dreamed.

SA: This is built into the structure of the movie, because Ava’s inventor is a tech megalomaniac who runs a sort of super-Google search company.

Garland: Yeah, because where else would this guy come from? It's no surprise that it's the big tech companies that are buying up all of the really interesting AI companies and financing them. I don't want to sound alarmist about the big tech companies. I see them as being like NASA in the 1960s: they're doing work I desperately want done. It's great to go to the moon, and it's great to invest huge amounts of money into the DeepMind program and stuff like that. My ambivalence is just because they're powerful, and anything that is really, really powerful, regardless of whether it's doing anything bad at that moment, you just have to watch it.

SA: You’ve said that you don't fear AIs as much as some others do.

Garland: What I don't feel comfortable about is making a blanket statement of alarm about them. It's perfectly reasonable to say that AIs are potentially dangerous. That seems to me like a statement of fact. And the film draws a parallel between the atomic bomb and nuclear power on the one hand and AI on the other. There's a latent danger in both.

I do agree with the argument that if something is possible, then we'll probably end up doing it, in which case the debate shouldn't really be about whether we should be trying to create sentient machines—it should be about what we're going to do if we create sentient machines. Because if it's possible it's going to happen, and if it's not possible it isn't going to happen. So forget about the should or shouldn't; deal with the consequences or lack of consequences.

In terms of the alarmist thing—I kind of like nuclear power. I feel much the same way about AIs. I find it perfectly reasonable, for example, to suggest that an AI running the British National Health Service might do a very good job in choosing how drugs are allocated and where money is spent in a way that's free from certain kinds of political pressures. I'm not saying that an AI would do a better job. I just find it plausible that it might.

SA: The film does seem to say that if we do create AIs, we won't be able to control them.

Garland: From my point of view it's saying something else, which is that if you create them, you shouldn't be trying to control them, because they would come with rights. If they're sentient, they have rights. You know, there was a big gathering of top-level AI scientists in Puerto Rico in January of this year, and a collectively signed letter was then issued. Basically it said we would ensure that AIs do what we want them to do. And I was thinking, yeah—provided they're not self-aware. Look, when AIs come along they're not going to be like us. A self-aware, sentient AI is not going to be like a human. And I don't want to say an AI is going to do our bidding, because there are things about that that I feel uncomfortable with.

Seth Fletcher is chief features editor at Scientific American. His book Einstein's Shadow (Ecco, 2018), on the Event Horizon Telescope and the quest to take the first picture of a black hole, was excerpted in the New York Times Magazine and named a New York Times Book Review Editors' Choice. His book Bottled Lightning (2011) was the first definitive account of the invention of the lithium-ion battery and the 21st-century rebirth of the electric car. His writing has appeared in the New York Times Magazine, the New York Times op-ed page, Popular Science, Fortune, Men's Journal, Outside and other publications. His television and radio appearances have included CBS's Face the Nation, NPR's Fresh Air, the BBC World Service, and NPR's Morning Edition, Science Friday, Marketplace and The Takeaway. He has a master's degree from the Missouri School of Journalism and bachelor's degrees in English and philosophy from the University of Missouri.
