Mind-Reading Computers That Can Translate Thoughts into Words

In his latest book, Adam Piore explores how bioengineers are harnessing the latest technologies to unlock untapped abilities in the human body and mind, such as translating the neural patterns of thought into written words

Excerpted from The Body Builders: Inside the Science of the Engineered Human by Adam Piore. Copyright © 2017 Adam Piore. With permission of the publisher, HarperCollins. All rights reserved.

It’s a frigid February afternoon, and I’m sitting in a hospital room in downtown Albany, New York, as a team of white-jacketed technicians bustles about the bed of a 40-year-old single mother from Schenectady named Cathy. They are getting ready to push the outer bounds of computer-aided “mind reading.” They are attempting to decode “imagined speech.”

I have been led here by Gerwin Schalk, a gregarious, Austrian-born neuroscientist, who has promised to show me just how far he and other neurological codebreakers have traveled since that day decades ago when David Hubel and Torsten Wiesel made history by listening in on—and decoding—the patterns of neurons firing in a cat’s visual cortex.


Cathy is epileptic and plans to undergo brain surgery to try to remove the portion of her brain that is the source of her seizures. Three days ago, doctors lifted off the top of Cathy’s skull and placed 117 tiny electrodes directly onto the right side of her naked cortex so they could monitor her brain activity and map the target area. While she waits, she has volunteered to participate in Schalk’s research.

Now, next to my chair, Cathy is propped up in a motorized bed. The top of her head is swathed in a stiff, plaster-like mold of bandages and surgical tape. And a thick jumble of mesh-covered wires protrudes from the opening at the top of her skull. It flops over the back of her hospital bed, drops down to the ground and snakes over to a cart holding $250,000 worth of boxes, amplifiers, splitters and computers.

An attendant gives a signal, and Cathy focuses on a monitor sitting on the table in front of her as a series of single words emanates in a female monotone from a pair of nearby speakers.

“Spoon…”

“Python…”

“Battlefield…”

After each word, a colored plus sign flashes on Cathy’s monitor, her cue to repeat the word silently in her head. Cathy’s face is inscrutable. But as she imagines each word, the 117 electrodes sitting atop her cortex record the unique combination of electrical activity emanating from hundreds of millions of individual neurons in an area of her brain called the temporal lobe. Those patterns shoot through the wires, into a box that amplifies them, and then into the computer, where they are represented in the peaks and valleys of stacked, horizontal lines scrolling across the screen in front of the technician. Buried somewhere in that mass of squiggly lines, so thick and impenetrable it resembles a handful of hair pulled taut with a brush, is a logical pattern, a code that can be read if one understands the mysterious language of the brain.

Later, Schalk’s team at the Wadsworth Center, a public health laboratory of the New York State Department of Health, along with collaborators at UC Berkeley, will pore over the data. Each of Cathy’s electrodes records the status of roughly 1 million neurons, roughly 10 times a second, creating a dizzying blizzard of numbers, combinations and possible meanings.

Yet Schalk insists he and his team can solve the puzzle and, using modern computing power, extract from that mass of data the words that Cathy has imagined.

It’s an effort Schalk has been pursuing for more than a decade. As part of a project originally funded by the Army Research Office, Schalk and others found evidence that when we speak, the auditory cortex receives a copy of how every word should sound, perhaps as an error-correction reference. That holds true even when we simply imagine saying a word.

Since that discovery, Schalk and his collaborators have demonstrated that they can tell the difference between imagined vowels and consonants about 45 percent of the time; chance is 25 percent. Rather than attempt to push those numbers up toward 100 percent, Schalk has focused on showing he can differentiate between vowels and consonants embedded in words. Then individual phonemes. And that’s not all.

From Cathy’s bedside, I follow Schalk to his office. On a large screen, Schalk pulls up a mass of brain signals, squiggly lines and different kinds of charts. Then he flips on some speakers. Over the course of many months, Schalk explains, he carried speakers into hospital rooms and played the same segment of a Pink Floyd song for about a dozen brain surgery patients like Cathy. Then Schalk handed the file of their recorded brain activity over to the UC Berkeley lab of Robert Knight for processing, to see if they might decode it.

Schalk presses a button. From a nearby speaker, a bass begins to thump urgently, like the furious beating of a human heart. It’s slightly muffled, as though heard from underwater, but it’s clearly a bass. A plaintive guitar echoes through an effects pedal, its notes accelerating with each new phrase. I recognize the song immediately—it is the mesmerizing, haunting tones of “Another Brick in the Wall,” from Pink Floyd’s The Wall. Aside from the slight muffling, the song is identical to the one I used to listen to in high school. But this version comes from brainwaves, not music.

“Is it perfect?” Schalk asks. “No. But we not only know that he’s hearing music, you know the song. It used to be science fiction, but it’s not anymore.”

This feat is possible thanks to the discovery that different groups of neurons in the auditory cortex fire more robustly in response to specific tones and amplitudes. Hit an individual neuron’s sweet spot in the auditory cortex by playing the right tone, and it fires robustly. Move away from a neuron’s preferred tone, and its firing rate will slow. By training pattern recognition algorithms, Schalk and his collaborators have taught computers to “translate” the neural firing patterns in the auditory cortex back into sound.
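
The article does not describe the actual algorithms Schalk’s and Knight’s teams use, but the general idea can be sketched as a regression problem: treat each electrode’s activity as a feature and learn a mapping from those features back to the sound’s time-frequency representation. The short Python sketch below uses entirely synthetic data and hypothetical dimensions; it illustrates only this general approach, not the researchers’ pipeline.

    # A minimal, hypothetical sketch of "spectrogram decoding" from neural
    # features. All data here are synthetic stand-ins; the real studies use
    # recorded ECoG signals and far more elaborate processing.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_windows, n_electrodes, n_freq_bins = 5000, 117, 32

    # Pretend "true" spectrogram of the stimulus, plus neural activity that
    # mixes it linearly with noise (a crude stand-in for tone-tuned neurons).
    spectrogram = rng.random((n_windows, n_freq_bins))
    mixing = rng.normal(size=(n_freq_bins, n_electrodes))
    neural = spectrogram @ mixing + 0.1 * rng.normal(size=(n_windows, n_electrodes))

    # Fit a linear decoder on the first half of the data, then reconstruct
    # the spectrogram of the second half from neural activity alone.
    split = n_windows // 2
    decoder = Ridge(alpha=1.0).fit(neural[:split], spectrogram[:split])
    reconstructed = decoder.predict(neural[split:])

    # Correlation between true and reconstructed spectrograms gives a rough
    # measure of how recognizable the "brainwave" version of the sound is.
    corr = np.corrcoef(reconstructed.ravel(), spectrogram[split:].ravel())[0, 1]
    print(f"reconstruction correlation: {corr:.2f}")

In this toy setting the decoder recovers the stimulus almost perfectly because the simulated relationship is linear and clean; with real cortical recordings the reconstruction is noisier, which is why the song Schalk plays back sounds muffled rather than pristine.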

Schalk and his Berkeley collaborators are now attempting to discern whether patients are imagining reciting the Gettysburg Address, JFK’s inaugural address or the nursery rhyme “Humpty Dumpty” just by looking at brain data—and attempting to reproduce it artificially using the same techniques. Eventually they hope to use these methods to decode the imagined speech of volunteers like Cathy—and, one day, of patients who are fully locked in and have lost the ability to speak.

Adam Piore is a freelance journalist. His last article for Scientific American examined the movement to bring evolution back to the classroom.
