Quick-Thinking AI Camera Mimics the Human Brain

The device will use artificial neurons and synapses to improve self-driving vehicle and drone performance 

A Google self-driving SUV on the streets of Mountain View, California.

Researchers in Europe are developing a camera that will literally have a mind of its own, with brainlike algorithms that process images and light sensors that mimic the human retina. Its makers hope it will prove that artificial intelligence—which today requires large, sophisticated computers—can soon be packed into small consumer electronics. But as much as an AI camera would make a nifty smartphone feature, the technology’s biggest impact may actually be speeding up the way self-driving cars and autonomous flying drones sense and react to their surroundings.

The conventional digital cameras used in self-driving and computer-assisted cars and drones, as well as in surveillance devices, capture a lot of extraneous information that eats up precious memory space and battery life. Much of that data is repetitive because the scene the camera is watching does not change much from frame to frame. The new AI camera, called an ultralow-power event-based camera, or ULPEC, will have pixel sensors that come to life only when there is a new image or event to record. That memory- and power-saving feature will not slow performance—the camera will also have new electrical components that allow it to react to changing light or movement in a scene within microseconds (millionths of a second), compared with milliseconds (thousandths of a second) in today’s digital cameras, says Ryad Benosman, a professor at Pierre and Marie Curie University who leads the Vision and Natural Computation group at the Paris-based Vision Institute. “It records only when the light striking the pixel sensors crosses a preset threshold amount,” says Benosman, whose team is developing the learning algorithms for the artificial neural network that serves as the camera’s brain.

An artificial neural network is a group of interconnected computers configured to work like a system of flesh-and-blood neurons in the human brain. The interconnections among the computers enable the network to find patterns in data fed into the system and to filter out extraneous information via a process called machine learning. Such a network “does away with not only acquiring but also processing irrelevant information, thus making the camera faster and requiring lower power for computation,” Benosman says.
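That thresholded, change-driven readout can be sketched in a few lines of Python. The sketch below is illustrative only: the contrast threshold, the log-intensity comparison and the event format are assumptions chosen for the example, not specifications of the ULPEC sensor.

```python
import math

CONTRAST_THRESHOLD = 0.15  # assumed value; the ULPEC's actual threshold is not published

class EventPixel:
    """Toy model of an event-based pixel: it outputs nothing while the scene
    is static and fires a single event when brightness changes enough."""

    def __init__(self, intensity):
        self.ref = math.log(intensity)  # log-brightness at the last event

    def sense(self, intensity, timestamp):
        """Return (timestamp, polarity) if the change crosses the threshold, else None."""
        delta = math.log(intensity) - self.ref
        if abs(delta) < CONTRAST_THRESHOLD:
            return None  # unchanged scene: no data stored, no redundant frame
        self.ref += delta  # re-arm at the new brightness level
        return (timestamp, +1 if delta > 0 else -1)

pixel = EventPixel(100.0)
print(pixel.sense(102.0, 1))  # None: a 2 percent change stays below the threshold
print(pixel.sense(140.0, 2))  # (2, 1): the pixel fires on a genuine change
```

A frame-based camera would have stored both frames in full; the event pixel records only the one change, which is where the memory and power savings come from.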

The AI camera's photo sensors—its “eyes”—will consist of tiny pieces of semiconductors and circuitry on silicon that turn changes in light into electrical signals sent to the neural network. Integrated circuits and a new type of electronic component called a memory resistor, or “memristor,” acting as the equivalent of synaptic connections, will process the information in those signals, says Sören Boyn, a researcher at the Swiss Federal Institute of Technology in Zurich who worked with the CNRS-Thales joint research unit now collaborating with Benosman’s team. One of the biggest challenges to that approach is that memristor technology—first theorized in 1971 by University of California, Berkeley, professor emeritus Leon Chua and first demonstrated in a physical device by Hewlett-Packard Labs researchers in 2008—is still largely in the development stage, which would explain why the ULPEC project is not expected to have a working device until 2020.


The AI camera’s memristors will consist of a thin layer of a ferroelectric material—bismuth ferrite—sandwiched between two electrodes, says Vincent Garcia, a research scientist at CNRS/Thales, the joint research unit of French national research agency CNRS and electronics company Thales that is developing the ULPEC memristor. Ferroelectric materials are electrically polarized, with positive and negative sides, and applying a voltage reverses that polarization. Thus, “the resistance of memristors can be tuned using voltage,” Garcia explains. “Similar to our brain’s learning ability that is dependent on the stimulation of synapses, which serve as connections between our neurons, this tunable resistance helps in making the network learn.” The more a synapse is stimulated, the more the connection is reinforced and the better the network learns.
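Garcia’s voltage-tunable resistance is what lets a memristor stand in for a synaptic weight. Here is a minimal sketch of that idea, assuming an invented linear update rule and normalized conductance bounds—real ferroelectric devices such as the bismuth ferrite memristor switch nonlinearly, and none of these numbers come from the CNRS/Thales hardware.

```python
class MemristorSynapse:
    """Toy model of a memristor acting as a synaptic weight: voltage pulses
    of one polarity raise its conductance, the opposite polarity lowers it."""

    def __init__(self, conductance=0.5):
        self.g = conductance  # normalized conductance, playing the role of the weight

    def apply_pulse(self, voltage):
        # Assumed linear update rule; real ferroelectric memristors switch nonlinearly.
        self.g = min(1.0, max(0.0, self.g + 0.1 * voltage))

    def transmit(self, signal):
        # The current passed along scales with conductance, like a weighted connection.
        return self.g * signal

synapse = MemristorSynapse()
for _ in range(3):
    synapse.apply_pulse(+1.0)          # repeated stimulation reinforces the connection
print(round(synapse.transmit(1.0), 2))  # 0.8: the "learned" weight has grown from 0.5
```

Repeated pulses of the same polarity nudge the conductance upward, mirroring the way repeated stimulation strengthens a biological synapse.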

The combination of bio-inspired optical sensors and neural networks will make the camera an especially good fit for self-driving cars and autonomous drones, says Christoph Posch, chief technology officer of the Paris-based start-up Chronocam, which is designing the camera’s optical sensors. “In self-driving cars the onboard computer must react to changes very quickly while navigating through traffic or determining the movement of pedestrians,” Posch explains. “The ULPEC can detect and process these changes rapidly.” German automotive equipment manufacturer Bosch—also involved in the project—will investigate how the camera might be used as part of its autonomous and computer-aided driving technology.

The researchers plan to place 20,000 memristors on the AI camera’s microchip, says Sylvain Saighi, an associate professor of electronics at the University of Bordeaux and head of the $5.57-million ULPEC project.

Getting all of the components of a memristor neural network onto a single microchip would be a big step, says Yoeri van de Burgt, an assistant professor of microsystems at Eindhoven University of Technology in the Netherlands, whose research includes building artificial synapses. “Since it is performing the computation locally, it will be more secure and can be dedicated for specific tasks like cameras in drones and self-driving cars,” adds van de Burgt, who was not involved in the ULPEC project.

Assuming the researchers can pull it off, such a chip would be useful well beyond smart cameras because it would be able to perform a variety of complicated computations itself rather than off-loading that work to a supercomputer via the cloud. In this way, Posch says, the camera is an important step toward determining whether the underlying memristors and other technology will work, and how they might be integrated into future consumer devices. The camera, with its innovative sensors and memristor neural network, could demonstrate that AI can be built into a device, making it both smarter and more energy efficient.