
A Challenge to the Textbooks on How We Learn about Our Surroundings

New experiments provide an alternative to a long-reigning theory of the way we form memories of experiences

[Image: Neuron in the hippocampus.]

Donald Hebb was a famed Canadian scientist who produced key findings that ranged across the field of psychology, providing insights into perception, intelligence and emotion. He is perhaps best known, though, for his theory of learning and memory, which appears as an entry in most basic texts on neuroscience. But now an alternative theory—along with accompanying experimental evidence—fundamentally challenges some central tenets of Hebb’s thinking. It provides a detailed account of how cells and the electrical and molecular signals that activate them are involved in forming memories of a series of related events.

Put forward in 1949, Hebb’s theory holds that when electrical activity in one neuron—perhaps triggered by observing one’s surroundings—repeatedly induces a neighboring “target cell” to fire electrical impulses, a process of conditioning occurs and strengthens the connection between the two neurons. This is a bit like doing arm curls with a weight; after repeated lifts the arm muscle grows stronger and the barbell gets easier to hoist. At the cellular level, repeated stimulation of one neuron by another enables the target cell to respond more readily the next time it is activated. In basic textbooks, this boils down to a simple adage to describe the physiology of learning and memory: “Cells that fire together, wire together.”
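
Hebb’s adage amounts to a simple weight-update rule. The Python sketch below is purely illustrative (the learning rate and variable names are assumptions for this article, not anything from Hebb or the studies discussed here): a synaptic weight grows whenever presynaptic and postsynaptic activity coincide.

```python
def hebbian_update(w, pre, post, lr=0.01):
    """Strengthen a synapse when the presynaptic and postsynaptic
    neurons are active at (nearly) the same moment.

    w    -- current synaptic weight
    pre  -- presynaptic activity (e.g., 1.0 if the neuron fired)
    post -- postsynaptic activity
    lr   -- learning rate (illustrative value, not a measurement)
    """
    return w + lr * pre * post

# Toy example: repeated coincident firing strengthens the connection,
# echoing "cells that fire together, wire together."
w = 0.1
for _ in range(100):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(f"weight after repeated pairing: {w:.2f}")  # grows from 0.1 to 1.1
```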

Every theory requires experimental evidence, and scientists have toiled for years to validate Hebb’s idea in the laboratory. Many research findings have shown that when a neuron repeatedly fires off an electrical impulse (called an “action potential”) at virtually the same time as an adjacent neuron, their connection does indeed grow more efficient. The target cell fires more easily, and the signal transmitted is stronger. This process, known as long-term potentiation (LTP), apparently induces physiological change, or “plasticity,” in target cells. LTP is routinely cited as a possible explanation for how the brain learns and forms memories at the cellular level.

But long-term potentiation leaves a few open questions. When we encounter something new, the experience often unfolds as a sequence of events over at least a matter of seconds, not the tiny fractions of a second postulated for LTP, and somehow a memory still forms. Nor does learning necessarily require many repeated exposures to an event: a child sees the alluring blue and yellow flame on the stove a few feet away. She approaches the stove, slowly raises a finger and then quickly pulls her hand away. Once is enough to learn this lesson for life.

A new paper published in Science on September 8 provides evidence for what Jeffrey Magee and other researchers at Howard Hughes Medical Institute’s Janelia Research Campus contend is a more plausible explanation for how a sequence of events may form a memory of a place. In their experiments, a mouse running down a track created a memory of a particular spot along the track (a “place field,” in neurospeak) over a period of five seconds. The place field was implanted in an area of the brain called the hippocampus after as little as a single traversal of the track.

The action took place in synapses, the tiny junctions between neurons where a signal passes from one cell to another. Visual, tactile or other inputs from another part of the mouse’s brain traveled along long neuron fibers called axons, crossing over to a target cell in the hippocampus. The inputs triggered the production of a set of signals that persisted for several seconds in branching extensions, called dendrites, on the hippocampal target cell.

In this form of plasticity, the key signal in the hippocampal cell was not a sub-millisecond action potential. Rather, it was an electrical signal in the dendrites of the target cell, called a “plateau potential,” that lasts up to hundreds of times longer. The plateau potential caused a relatively large influx of calcium into the target neuron, and this set off a chain of events that led to molecular and structural changes within the cell itself. After a mouse made just a few runs of the track, sometimes only one, the hippocampal neuron underwent this biochemical learning process, and a place field formed that became active when the mouse passed over the spot again. Thus the animal “knew” this defined location along the track whenever the place field activated.

This newly discovered learning process differs in basic ways from the LTP concept long found in textbooks. LTP requires (as Hebb had predicted) that one neuron repeatedly send an input signal that causes a nearby neuron to fire off sub-millisecond pulses. Magee and colleagues’ discovery, dubbed “behavioral timescale synaptic plasticity,” does not require such a cause-and-effect relationship: one neuron does not have to induce the firing of another.

Instead, input signals from elsewhere in the brain arrive at the hippocampal neuron several seconds before the plateau potential begins in the dendrites, and the same input signals persist for several seconds after the plateau potential has ended. The entire time course of roughly five seconds (the initial inputs, followed by a plateau potential and then the inputs that continue afterward) corresponds to the interval over which a set of actions may occur: the child sees the stove, approaches it, touches the flame and pulls back her hand.
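
To make the timing contrast concrete, here is a schematic Python sketch of a behavioral-timescale update. It is a simplification under assumed parameters (the seconds-wide window, learning rate and kernel shape are illustrative choices, not values from the Science paper): inputs arriving within seconds of a plateau potential are strengthened, rather than only inputs coinciding within milliseconds.

```python
import numpy as np

def btsp_like_update(w, input_times, plateau_time,
                     window=4.0, lr=0.5):
    """Schematic behavioral-timescale update: any input arriving
    within a seconds-wide window around the plateau potential is
    strengthened, with closer inputs strengthened more.

    All parameter values are illustrative, not measurements
    from the study.
    """
    dt = np.abs(np.asarray(input_times) - plateau_time)
    # Eligibility decays over seconds, not milliseconds.
    eligibility = np.exp(-dt / window) * (dt < 2 * window)
    return w + lr * eligibility

# Inputs arriving 3 s before and 2 s after a plateau at t = 5 s
# are still potentiated -- a window far wider than LTP's.
weights = np.zeros(3)
weights = btsp_like_update(weights, input_times=[2.0, 5.0, 7.0],
                           plateau_time=5.0)
print(weights.round(2))  # e.g., [0.24 0.5  0.3 ]
```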

What’s more, a new memory of what happened at a particular place is cemented in the brain after a mouse makes one or only a few runs along the track. The researchers also found that when a mouse returns to the track after this learning process has taken place, the now-trained neuron fires before the animal actually arrives at the spot it has learned, suggesting the memory helps the brain anticipate what lies in the physical path ahead.

Magee, the senior author on the study who is now at Baylor College of Medicine, says this new type of plasticity probably will not supplant long-term potentiation in the textbooks. But it may provide a more suitable explanation of how memories are formed from a connected series of events. It may also account for how the brain remembers important places: where a squirrel stores acorns for winter or where a hiker saw a snake on a trail, for instance. “There was always this nagging suspicion that something wasn't quite right about long-term potentiation, and that something was the timing requirement,” Magee says. “When you use it to evoke synaptic plasticity, you had to have this really tight timing window. But behavior actually occurs on these much longer timescales—even very simple behaviors.” Magee says his group’s findings still need to be replicated. And key questions remain, such as where in the brain the signals that serve as inputs to the dendrites originate.

If the work from Magee and his team is further confirmed, LTP may come to be thought of as a process that assists in keeping intact the memories formed by the new type of plasticity discovered by Magee’s group—or it may be found to be involved in simpler sensory detection processes that do not require the piecing-together of multiple events. Alcino Silva, a neuroscientist at University of California, Los Angeles, who was not involved with the research, calls the work “a groundbreaking study,” and says it “promises to change the way we think about how space is learned and remembered.” He adds that the study “is just a provocative beginning.” He notes the need for further research to ensure that this finding is “actually key to learning and memory. For example, it will be important to explore this form of plasticity, and then show that manipulating it can both interfere with and enhance specific forms of learning.”

Another researcher, György Buzsáki, a neuroscientist at New York University who was also not involved in the study, says: “Overall, this is a significant step forward in our understanding of the mechanisms involved in place field generation in the hippocampus.” He adds that the neuroscience literature includes examples of various mechanisms for creating such place markers in an animal’s brain, including a study from his own laboratory that conforms more closely to Hebb’s model.

The hippocampus, he says, can also store an internal sequence of events without any sensory inputs of physical surroundings—mental imagery of moving about a place one has never visited, for example—a situation that the behavioral timescale plasticity model discovered by Magee and team may not account for. Whichever model prevails, the new Science study provides another example of the constant flux in the brain sciences. A close look at the details of any given process assumed to underlie a long-standing theory can call into question the theory itself, and open up an entirely new avenue of research.

Gary Stix, Scientific American’s neuroscience and psychology editor, commissions, edits and reports on emerging advances and technologies that have propelled brain science to the forefront of the biological sciences. Developments chronicled in dozens of cover stories, feature articles and news stories document groundbreaking neuroimaging techniques that reveal what happens in the brain while you are immersed in thought; the arrival of brain implants that alleviate mood disorders like depression; lab-made brains; psychological resilience; meditation; the intricacies of sleep; the new era for psychedelic drugs; artificial intelligence; and growing insights into our conscious selves. Before taking over the neuroscience beat, Stix, as Scientific American’s special projects editor, oversaw the magazine’s annual single-topic special issues, conceiving of and producing issues on Einstein, Darwin, climate change, nanotechnology and the nature of time. The issue he edited on time won a National Magazine Award. Besides mind and brain coverage, Stix has edited or written cover stories on Wall Street quants, building the world’s tallest building, Olympic training methods, molecular electronics, what makes us human and the things you should and should not eat. Stix started a monthly column, Working Knowledge, that gave readers a peek at the design and function of common technologies, from polygraph machines to Velcro; it eventually became the magazine’s Graphic Science column. He also initiated a column on patents and intellectual property and another on the genesis of the ingenious ideas underlying new technologies in fields such as electronics and biotechnology. Stix is the author, with his wife, Miriam Lacob, of a technology primer called Who Gives a Gigabyte: A Survival Guide to the Technologically Perplexed (John Wiley & Sons, 1999).
