Facebook Launches "Moon Shot" Effort to Decode Speech Direct from the Brain

Can the social media giant’s bold claims live up to the hype?

As if Facebook wasn’t already pervasive enough in everyday life, the company’s newly formed Building 8 “moon shot” factory is working on a device it says would let people type out words via a brain–computer interface (BCI). If all goes according to plan—and that’s a big if—Building 8’s neural prosthetic would strap onto a person’s head, use an optical technique to decode intended speech and then type those thoughts on a computer or smartphone at up to 100 words per minute. That would be an order of magnitude faster than today’s state-of-the-art speech decoders.

The use of light waves to quickly and accurately read brain waves is a tall order, especially when today’s most sophisticated BCIs, which are surgically implanted in the brain, can translate neural impulses into binary actions—yes/no, click/don’t click—at only a fraction of that speed. Still, Facebook has positioned Building 8 as an advanced research and development laboratory modeled on Google’s X, the lab behind the Waymo self-driving car and the Glass augmented-reality headset. So it is no surprise that Building 8’s first project out of the gate proposes a far-fetched technology to tackle a problem neuroscientists have been chipping away at for decades.

Here’s how the proposed device would work: the BCI would use optical fibers to direct photons from a laser source through a person’s skull into the cerebral cortex, specifically into the areas involved in speech production. The BCI would “sample groups of neurons [in the brain’s speech center] and analyze the instantaneous changes in optical properties as they fire,” says Regina Dugan, head of Building 8 and a former executive at both Google and the Defense Advanced Research Projects Agency (DARPA). Light scattering through the neurons would reveal changes in their shape and configuration as the brain cells and their components—mitochondria, ribosomes and cell nuclei, for example—move.
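
To make the physics concrete, here is a toy numerical sketch of the kind of signal such a device would have to pick out: a sub-percent change in detected photon counts riding on photon shot noise. Apart from the roughly 300-samples-per-second rate Dugan cites below, every number is an illustrative assumption, not a Building 8 specification.

```python
# Toy model of a fast optical signal: firing neurons transiently change how
# tissue scatters light, appearing as a tiny modulation in detected photon
# counts. All magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

fs = 300                     # samples per second (rate cited in the article)
t = np.arange(0, 2, 1 / fs)  # two seconds of recording
baseline = 1e5               # assumed mean photons per sample at the detector

# Hypothetical scattering change from a burst of firing at t = 0.5 s.
burst = np.exp(-((t - 0.5) ** 2) / (2 * 0.05**2))  # Gaussian envelope
modulation = 1 + 0.002 * burst                     # assumed 0.2% effect

# Photon counting is shot-noise limited, so detected counts are Poisson.
counts = rng.poisson(baseline * modulation)

# With ~1e5 photons per sample, shot noise is ~0.3% of baseline, on par with
# the signal itself, which is why heavy averaging would be essential.
snr = (0.002 * baseline) / np.sqrt(baseline)
print(f"peak counts: {counts.max()}, single-sample SNR: {snr:.2f}")
```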


Building 8’s BCI would measure the number and type of photons bounced off the neurons in the cortex and send that information—wirelessly or via a cable—to a computer that uses machine-learning software to interpret the results. That interpretation would then be typed as text on the screen of a computer, smartphone or some other gadget. The speech production network in your brain executes a series of planning steps before you speak, says Mark Chevillet, Building 8’s technical lead on the BCI project. “In this system we’re looking to decode neural signals from the stage just before you actually articulate what you want to say.”
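
The article does not describe Building 8’s decoding software, but the shape of such a pipeline—windows of optical samples in, machine-learning model, words out—can be sketched. The mean/variance features and the nearest-centroid classifier below are stand-ins chosen for brevity, not anything Facebook has disclosed.

```python
# Minimal sketch of an optical-samples-to-words pipeline. The feature choice
# and the nearest-centroid "model" are illustrative stand-ins.
import numpy as np

def featurize(window: np.ndarray) -> np.ndarray:
    """Reduce a (channels x samples) window of photon counts to features:
    here, simply each channel's mean and variance."""
    return np.concatenate([window.mean(axis=1), window.var(axis=1)])

class NearestCentroidDecoder:
    """Maps a feature vector to the word label with the closest centroid."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.stack(
            [X[np.array(y) == lab].mean(axis=0) for lab in self.labels])
        return self

    def predict_word(self, features):
        dists = np.linalg.norm(self.centroids - features, axis=1)
        return self.labels[int(np.argmin(dists))]

# Usage with fabricated data: 20 one-second windows (2 channels x 300 samples).
rng = np.random.default_rng(1)
windows = rng.poisson(100, size=(20, 2, 300)).astype(float)
words = ["yes"] * 10 + ["no"] * 10
X = np.stack([featurize(w) for w in windows])
decoder = NearestCentroidDecoder().fit(X, words)
print(decoder.predict_word(featurize(windows[0])))
```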

Because the researchers are focusing on a very specific application—speech—they know the prosthetic’s sensors must have millimeter-level spatial resolution and must sample neural activity about 300 times per second in order to measure the brain’s speech signals with high fidelity, Dugan says. “This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech [production] center of your brain,” she says. The term “speech centers” usually refers to Wernicke’s area (involved in language comprehension) and Broca’s area (speech production); the latter sends its output to the motor cortex, which drives the muscle movements that produce audible speech.
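
Those two specifications already imply some useful back-of-envelope numbers: sampling 300 times per second resolves, by the Nyquist criterion, neural rhythms up to 150 Hz, which spans the high-gamma band that speech-decoding studies with implanted electrodes typically rely on. The array size and bit depth in the sketch below are assumptions for illustration only.

```python
# Back-of-envelope implications of the quoted specs: millimeter-scale
# sensors sampled about 300 times per second.
sample_rate_hz = 300                 # figure quoted in the article
nyquist_hz = sample_rate_hz / 2      # highest resolvable frequency
print(f"resolvable neural rhythms: up to {nyquist_hz:.0f} Hz")

# Assumed for illustration: a 10 cm x 10 cm sensor patch at 1 mm pitch,
# digitized at 16 bits per sample.
channels = 100 * 100
bits_per_sample = 16
rate_mb_per_s = channels * sample_rate_hz * bits_per_sample / 8 / 1e6
print(f"raw data rate for such an array: ~{rate_mb_per_s:.0f} MB/s")  # ~6
```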

Chevillet and Dugan position the project as a potential communication option for the large numbers of people who suffer from amyotrophic lateral sclerosis (ALS) and other conditions that prevent them from being able to type or even speak. Furthermore, Dugan points out that the interface would also offer a “more fluid human–computer interface” that supports Facebook’s efforts to promote augmented reality (AR). “Even a very simple capability to do something like a yes/no brain click would be foundational for advances in AR,” Dugan says. “In that respect it becomes a bit like the mouse was in the early computer interface days. Think of it like a ‘brain mouse.’”

For all of that to happen, Building 8 must develop a BCI that fits over the head while also producing the high-quality signals needed to decode neural activity into speech, says Chevillet, a former program manager of applied neuroscience at Johns Hopkins University. He and his team want to build a modified version of the functional near-infrared spectroscopy (fNIRS) systems used today for neuroimaging. Whereas conventional fNIRS systems bounce light off tissue and analyze all of the returning photons, no matter how diffuse, Building 8’s prosthetic would detect only those photons that have scattered a small number of times—so-called quasi-ballistic photons—in order to achieve the necessary spatial resolution.
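
The article leaves open how those quasi-ballistic photons would be singled out. One established approach in diffuse optical imaging is time gating: photons that scatter only a few times travel shorter paths and arrive earlier, so accepting only early arrivals keeps the well-localized light. The sketch below illustrates that principle with entirely made-up numbers.

```python
# Time-gating sketch: fewer scattering events -> shorter path -> earlier
# arrival, so an early time gate preferentially keeps quasi-ballistic photons.
# All distributions and constants are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)

n_photons = 100_000
# Assumed distribution of scattering events per detected photon.
n_scatters = rng.geometric(p=0.05, size=n_photons)
# Toy mapping from scattering count to arrival time (picoseconds).
arrival_ps = 50 + 5.0 * n_scatters + rng.normal(0, 2, n_photons)

gate_ps = 80                           # accept only the earliest arrivals
kept = arrival_ps < gate_ps
print(f"kept {kept.mean():.1%} of photons; "
      f"mean scatters {n_scatters[kept].mean():.1f} (kept) "
      f"vs {n_scatters.mean():.1f} (all)")
```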

Additional challenges will remain even if Chevillet’s team can deliver its proposed prosthetic. One is whether the changes in the returning light will create patterns distinct enough to represent each of the letters, words and phrases needed to translate brain activity into words on a screen, says Stephen Boppart, director of the University of Illinois at Urbana–Champaign’s Center for Optical Molecular Imaging. If that is possible, it might be feasible to train a person over time to generate distinct thought patterns that correspond to particular words or phrases, “but that hasn’t really been demonstrated,” he says.

Dugan and Chevillet acknowledge the obstacles but say they intend to build on key research related to their work. One recent study, for example, demonstrated that several paralyzed individuals could communicate using signals recorded directly from parts of the motor cortex that control arm movements, achieving some of the fastest brain-typing speeds to date (three to eight words per minute). Another study showed that machine learning can successfully decode information from neural signals. Both projects, however, relied on electrodes placed in or on the surface of the brain.

Chevillet’s team hopes to have a good idea of the technology needed to create their new optical prosthetic within two years, although it is unclear when they might build a working prototype. To meet these ambitious goals Building 8 has, over the past six months, recruited at least 60 scientists and engineers from the University of California, San Francisco; U.C. Berkeley; Johns Hopkins University’s Applied Physics Laboratory; Johns Hopkins Medicine; and Washington University School of Medicine in Saint Louis who specialize in machine-learning methods for decoding speech and language, optical neuroimaging systems and advanced neural prosthetics, Dugan says.

Regardless of whether Building 8 succeeds in delivering its BCI prosthetic, Facebook’s investment in the project is a big win for science, says Adam Gazzaley, founder and executive director of U.C. San Francisco’s Neuroscape translational neuroscience center. “We have increasing struggles to squeeze money out of the National Institutes of Health, especially to do high-risk, high-reward projects like what Facebook is describing,” says Gazzaley, who is not involved in the Building 8 research. “It’s a great sign and should be encouraged and applauded if large companies in the consumer space are taking such serious efforts to be innovative in neuroscience.”