Digital Heads Help Eyewitnesses Identify Suspects

Witnesses were more accurate when they interacted with 3-D models than when they looked at still photographs. And the models were less expensive than an in-person lineup

During some investigations, eyewitnesses try to select the perpetrator of a crime from a set of people or photographs—or, in a new study, from 3-D digital models of faces.

When a crime witness mistakenly identifies the wrong person, the error can destroy an innocent life and let the real perpetrator walk free. Eyewitnesses are often tasked with selecting the face they remember from a group of both potential suspects and “fillers.” This traditionally involves looking at a line of people standing behind one-way glass or at an array of photographs. But a new study suggests that interacting with digital, three-dimensional models—a set of virtual heads that can be manipulated with a computer mouse—could make eyewitness evidence more accurate.

“We’ve developed a new interactive lineup procedure that allows witnesses to rotate the faces into any position desired,” says Heather Flowe, a professor of forensic psychology at the University of Birmingham in England and a co-author of the paper, which was published in Scientific Reports. “There’s something about transferring control, letting the witness explore the faces in their own way, that helps aid memory.”

Eyewitness identification is widely used in prosecutions, says John DeCarlo, an associate professor of criminal justice at the University of New Haven, who was not involved in the new study. One reason for its popularity is its impact in court. “Someone gets on the stand, looks at someone in the courtroom, points to them and says, ‘That’s the person.’ That’s a very powerful indication to a jury,” DeCarlo explains, adding that this can make “eyewitness identification maybe [seem] more accurate than it actually is.”

One problem is that humans have trouble forming accurate memories during fraught situations. “We usually see crime happen once, very rapidly, under emotionally stressful and environmentally unfriendly conditions—which makes eyewitness identification maybe the worst form of identification,” DeCarlo says. “All eyewitness identification is prone to a relatively high error rate, or false positives.” This problem can lead to the wrong person being punished, and it can shut down an investigation without catching the guilty individual: according to the nonprofit Innocence Project, eyewitness misidentification has been involved in 69 percent of the cases in which people were wrongfully convicted and later exonerated through DNA evidence.

Researchers have been trying for decades to understand which techniques might improve eyewitness accuracy, but adjustments to lineup procedures have had only a slight impact. Flowe’s team thought digital technology could help. “Can we bring in some of the technological advances that have become relatively inexpensive—with people’s ability, now, to take high-quality images on simple camera phones—and even render them into 3-D objects using their phones?” she asks. The researchers developed their own low-cost software and shared it with other researchers for free. It transforms a video clip or several images of a face into a 3-D digital model, which is placed in an interactive lineup, where it can be manipulated with a mouse or (on a tablet) with a finger.

To test their digital lineup, the researchers recruited about 1,400 participants through online crowdsourcing. These “witnesses” were shown a video clip of a nonviolent crime being committed and then spent up to a few minutes performing distracting tasks to take their minds off what they had just seen. Finally, they received either a set of photographs or a set of digital models and were asked to identify the “perpetrator.” Those who used the interactive lineup were much better at choosing the correct face.

“They’re between 18 and 22 percent more accurate,” Flowe says. “That is absolutely fantastic,” compared with many other attempts to improve suspect-identification procedures. Some previous approaches have boosted accuracy but made witnesses less confident in their choices. For example, if they were warned in advance that a lineup might be all “filler” and not contain the actual suspect, they became less likely to choose anyone at all. But that reduction in confidence did not happen in this study, Flowe says. She notes that the improvement in accuracy took place for both the eager witnesses who were more likely to guess and the conservative ones who only made a selection when they felt sure.

What made the moveable 3-D digital models so effective? “We think it’s through matching the pose in which people encoded or studied the perpetrator at the time of the crime—that then they remember that information, and they seek it out in the lineup so as to cue their memory for the face,” Flowe says. In another set of tests that were also described in the new study, subjects saw a lineup of still photographs with the heads either in the same position the perpetrator predominantly had in the video or in a different position. Witnesses were more accurate when the orientation matched that shown in the video. This, Flowe says, “makes it more likely that they’ll be able to correctly distinguish the guilty from the innocent.”

“The research that they did looked [like] it had a high amount of validity, and it had a big [sample size], so it was theoretically generalizable,” DeCarlo says. “I think that it will certainly ...  give people more to talk about and more to research.” He adds that reality differs a great deal from this kind of artificial situation, however. Watching a video online is vastly different from witnessing a crime in person. And performing a distracting task for a few minutes soon afterward is a pale imitation of waiting up to weeks for a police lineup. But DeCarlo says there is no way around this when testing various identification scenarios on a large group. “Most eyewitness research, including [this], does not necessarily mirror the real world,” he notes. “But does its best to model it.”

As a next step, DeCarlo suggests trying the software in the field. For Flowe and her co-author Melissa Colloff, an assistant professor of forensic psychology at the University of Birmingham, the priority is to continue experimenting and analyzing the data they have collected. But they are also keeping an eye out for real-world testing opportunities. “Let’s see if we can make some changes happen, particularly in those early-adopter jurisdictions in the U.S.,” Flowe says.

Another consideration is that some police departments lack the resources to use such technology. Flowe and Colloff work in the U.K., where departments routinely record video “mug shots” of potential suspects turning their head from side to side—an ideal basis for a digital model. But in the U.S., policy varies more among police departments: some may only have still images, making it difficult to produce a detailed dynamic model. Still, Colloff suggests, it would be relatively easy for departments to start collecting more information. “Video lineups have been implemented in the U.K. for quite some time now,” she says. “So it can be done, because they do it on a national scale here.”

DeCarlo is a little less certain, given the operational and resource constraints in some smaller U.S. departments. Nevertheless, as Flowe points out, the cost of such improvements to lineup technology would be much less than the cost of mistaken eyewitnesses. “We could afford it if it has the benefit of increasing the detection of guilty suspects and decreasing erroneous identifications that lead to these wrongful convictions,” she says. “And that costs on so many different levels—societally, economically and personally, of course.”

Sophie Bushwick is tech editor at Scientific American. She runs the daily technology news coverage for the website, writes about everything from artificial intelligence to jumping robots for both digital and print publication, records YouTube and TikTok videos and hosts the podcast Tech, Quickly. Bushwick also makes frequent appearances on radio shows such as Science Friday and television networks, including CBS, MSNBC and National Geographic. She has more than a decade of experience as a science journalist based in New York City and previously worked at outlets such as Popular Science, Discover and Gizmodo. Follow Bushwick on X (formerly Twitter) @sophiebushwick

More by Sophie Bushwick