
Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding

All of us, even physicists, often process information without really knowing what we’re doing

Like great art, great thought experiments have implications unintended by their creators. Take philosopher John Searle’s Chinese room experiment. Searle concocted it to convince us that computers don’t really “think” as we do; they manipulate symbols mindlessly, without understanding what they are doing.

Searle meant to make a point about the limits of machine cognition. Recently, however, the Chinese room experiment has goaded me into dwelling on the limits of human cognition. We humans can be pretty mindless too, even when engaged in a pursuit as lofty as quantum physics.

Some background. Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine “thinks.”


Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine’s answers from the human’s, then we must grant that the machine does indeed think. Thinking, after all, is just the manipulation of symbols, such as numbers or words, toward a certain end.

Some AI enthusiasts insisted that “thinking,” whether carried out by neurons or transistors, entails conscious understanding. Marvin Minsky espoused this “strong AI” viewpoint when I interviewed him in 1993. After defining consciousness as a record-keeping system, Minsky asserted that LISP software, which tracks its own computations, is “extremely conscious,” much more so than humans. When I expressed skepticism, Minsky called me “racist.”

Back to Searle, who found strong AI annoying and wanted to rebut it. He asks us to imagine a man who doesn’t understand Chinese sitting in a room. The room contains a manual that tells the man how to respond to a string of Chinese characters with another string of characters. Someone outside the room slips a sheet of paper with Chinese characters on it under the door. The man finds the right response in the manual, copies it onto a sheet of paper and slips it back under the door.

Unknown to the man, he is replying to a question, like “What is your favorite color?,” with an appropriate answer, like “Blue.” In this way, he mimics someone who understands Chinese even though he doesn’t know a word. That’s what computers do, too, according to Searle. They process symbols in ways that simulate human thinking, but they are actually mindless automatons.
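In computational terms, the man and his manual amount to nothing more than a lookup table. Here is a minimal sketch in Python (my toy illustration, not Searle’s; the phrases in the table are my own) of how the room’s competence could be implemented with zero comprehension:

    # A toy Chinese room: the "manual" is a lookup table mapping
    # input strings (questions) to output strings (answers).
    # The program matches symbols; it understands none of them.
    MANUAL = {
        "你最喜欢什么颜色？": "蓝色。",   # "What is your favorite color?" -> "Blue."
        "你好吗？": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    }

    def chinese_room(slip_of_paper: str) -> str:
        """Return the manual's listed response, or a stock reply if none exists."""
        return MANUAL.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你最喜欢什么颜色？"))  # prints 蓝色。 without understanding a word

The table could be made arbitrarily large and the matching arbitrarily sophisticated; Searle’s point is that nothing in this machinery requires the system to understand Chinese.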

Searle’s thought experiment has provoked countless objections. Here’s mine. The Chinese room experiment is a splendid case of begging the question (not in the sense of raising a question, which is what most people mean by the phrase nowadays, but in the original sense of circular reasoning). The meta-question posed by the Chinese room experiment is this: How do we know whether any entity, biological or non-biological, has a subjective, conscious experience?

When you ask this question, you are bumping into what I call the solipsism problem. No conscious being has direct access to the conscious experience of any other conscious being. I cannot be absolutely sure that you or any other person is conscious, let alone that a jellyfish or smartphone is conscious. I can only make inferences based on the behavior of the person, jellyfish or smartphone.

Now, I assume that most humans, including those of you reading these words, are conscious, as I am. I also suspect that Searle is probably right, and that an “intelligent” program like Siri only mimics understanding of English. It doesn’t feel like anything to be Siri, which manipulates bits mindlessly. That’s my guess, but I can’t know for sure, because of the solipsism problem.

Nor can I know what it’s like to be the man in the Chinese room. He may or may not understand Chinese; he may or may not be conscious. There is no way of knowing, again, because of the solipsism problem. Searle’s argument assumes that we can know what’s going on, or not going on, in the man’s mind, and hence, by implication, what’s going on or not in a machine. His flawed initial assumption leads to his flawed, question-begging conclusion.

That doesn’t mean the Chinese room experiment has no value. Far from it. The Stanford Encyclopedia of Philosophy calls it “the most widely discussed philosophical argument in cognitive science to appear since the Turing Test.” Searle’s thought experiment continues to pop up in my thoughts. Recently, for example, it nudged me toward a disturbing conclusion about quantum mechanics, which I’ve been struggling to learn over the last year or so.

Physicists emphasize that you cannot understand quantum mechanics without understanding its underlying mathematics. You should have, at a minimum, a grounding in logarithms, trigonometry, calculus (differential and integral) and linear algebra. Knowing Fourier transforms wouldn’t hurt.

That’s a lot of math, especially for a geezer and former literature major like me. I was thus relieved to discover Q Is for Quantum by physicist Terry Rudolph. He explains superposition, entanglement and other key quantum concepts with a relatively simple mathematical system, which involves arithmetic, a little algebra and lots of diagrams with black and white balls falling into and out of boxes.
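To give a flavor of Rudolph’s scheme: a ball’s state can be written as a pair of numbers, and the box that creates and destroys superpositions acts like a simple two-by-two transformation. Here is a minimal sketch in Python of how I read the system; the representation and names are mine, assuming the book’s “PETE” box behaves like the standard Hadamard transform:

    from math import sqrt

    # A "misty" state assigns an amplitude to each ball color:
    # 'W' (white, like a qubit's 0) and 'B' (black, like a qubit's 1).
    WHITE_BALL = {"W": 1.0, "B": 0.0}

    def pete_box(state):
        """Drop a misty state through a PETE box: white becomes an equal
        mist of white and black; black becomes white minus black
        (a Hadamard-like transform)."""
        w, b = state["W"], state["B"]
        return {"W": (w + b) / sqrt(2), "B": (w - b) / sqrt(2)}

    once = pete_box(WHITE_BALL)   # an equal mist: the ball is "both" colors
    twice = pete_box(once)        # the black parts cancel out
    print(once)                   # {'W': 0.707..., 'B': 0.707...}
    print(twice)                  # {'W': 1.0, 'B': 0.0} (up to rounding)

Two boxes in a row return the white ball with certainty; the cancellation of the “minus black” piece is the interference that Rudolph’s diagrams make visible.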

Rudolph emphasizes, however, that some math is essential. Trying to grasp quantum mechanics without any math, he says, is like “having van Gogh’s ‘Starry Night’ described in words to you by someone who has only seen a black and white photograph. One that a dog chewed.”

But here’s the irony. Mastering the mathematics of quantum mechanics doesn’t make the theory easier to understand and might even make it harder. Rudolph, who teaches quantum mechanics and co-founded a quantum-computer company, says he feels “cognitive dissonance” when he tries to connect quantum formulas to sensible physical phenomena.

Indeed, some physicists and philosophers worry that physics education focuses too narrowly on formulas and not enough on what they mean. Philosopher Tim Maudlin complains in Philosophy of Physics: Quantum Theory that most physics textbooks and courses do not present quantum mechanics as a theory, that is, a description of the world; instead, they present it as a “recipe,” or set of mathematical procedures, for accomplishing certain tasks.

Learning the recipe can help you predict the results of experiments and design microchips, Maudlin acknowledges. But if a physics student “happens to be unsatisfied with just learning these mathematical techniques for making predictions and asks instead what the theory claims about the physical world, she or he is likely to be met with a canonical response: Shut up and calculate!”

In his book, Maudlin presents several attempts to make sense of quantum mechanics, including the pilot-wave and many-worlds models. His goal is to show that we can translate the Schrödinger equation and other formulas into intelligible accounts of what’s happening in, say, the double-slit experiment. But to my mind, Maudlin’s ruthless examination of the quantum models subverts his intention. Each model seems preposterous in its own way.
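For concreteness, the formula at the center of the dispute is the time-dependent Schrödinger equation, which for a single particle moving in one dimension reads

    i\hbar \frac{\partial \Psi(x,t)}{\partial t} = -\frac{\hbar^{2}}{2m} \frac{\partial^{2} \Psi(x,t)}{\partial x^{2}} + V(x)\,\Psi(x,t)

The mathematics is uncontroversial; what the wave function Ψ refers to in the physical world is exactly what the pilot-wave, many-worlds and other models disagree about.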

Pondering the plight of physicists, I’m reminded of an argument advanced by philosopher Daniel Dennett in From Bacteria to Bach and Back: The Evolution of Minds. Dennett elaborates on his long-standing claim that consciousness is overrated, at least when it comes to doing what we need to do to get through a typical day. We carry out most tasks with little or no conscious attention.

Dennett calls this “competence without comprehension.” Adding insult to injury, Dennett suggests that we are virtual “zombies.” When philosophers refer to zombies, they mean not the clumsy, grunting cannibals of The Walking Dead but creatures that walk and talk like sentient humans but lack inner awareness.

When I reviewed Dennett’s book, I slammed him for downplaying consciousness and overstating the significance of unconscious cognition. Competence without comprehension may apply to menial tasks like brushing your teeth or driving a car but certainly not to science and other lofty intellectual pursuits. Maybe Dennett is a zombie, but I’m not! That, more or less, was my reaction.

But lately I’ve been haunted by the ubiquity of competence without comprehension. Quantum physicists, for example, manipulate differential equations and matrices with impressive competence—enough to build quantum computers!—but no real understanding of what the math means. If even physicists end up as information-processing automatons, what hope is there for the rest of us? After all, our minds are habituation machines, designed to turn even complex tasks—like being a parent, husband or teacher—into routines that we perform by rote, with minimal cognitive effort.

The Chinese room experiment serves as a metaphor not only for physics but also for the human condition. Each of us sits alone within the cell of our subjective awareness. Now and then we receive cryptic messages from the outside world. Only dimly comprehending what we are doing, we compose responses, which we slip under the door. In this way, we manage to survive, even though we never really know what the hell is happening.

Further Reading:

Is the Schrödinger Equation True?

Will Artificial Intelligence Ever Live Up to Its Hype?

Can Science Illuminate Our Inner Dark Matter?