From pixels to people

AI shakes hands with the human brain



Date: November 19, 2019
Source: Weizmann homepage

Artificial intelligence systems that gather visual cues from the environment and learn from them can recognize human faces more accurately than we can. But how do such systems make the leap from pixels to people?

Weizmann Institute neuroscientists have now revealed part of the secret: the most advanced AI vision systems evolve as they learn, spontaneously creating connections that bear a surprising resemblance to how neural networks function in the human brain.

The research, published in Nature Communications, was performed by Prof. Rafi Malach of the Department of Neurobiology, together with Shany Grossman, a graduate student in the Malach lab.

Today’s most advanced systems for artificial vision are based on an AI approach called deep convolutional neural networks (DCNNs). A form of machine learning, DCNNs are bio-inspired programs that enable computers to learn on their own from observational data. This learning process is something of a “black box,” and scientists still don’t fully understand how it works. But the underlying design of DCNNs gives such networks a clear advantage in computer vision, partly because it is inspired by the architecture of the human brain.
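To make the idea concrete, here is a minimal sketch of a DCNN written in PyTorch. Everything in it, from the layer sizes to the ten-identity output, is an illustrative assumption rather than the architecture used in the study; it shows only the general recipe of stacked convolutional layers whose filters are tuned by learning from observational data.

    import torch
    import torch.nn as nn

    class TinyDCNN(nn.Module):
        # A toy convolutional network: stacked convolution + pooling layers
        # extract visual features; a final linear layer maps them to identities.
        def __init__(self, num_identities=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level, edge-like filters
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level feature detectors
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_identities)

        def forward(self, x):
            h = self.features(x)             # (batch, 32, 16, 16) for 64x64 input
            return self.classifier(h.flatten(1))

    # Training (not shown) adjusts the filters by gradient descent so that
    # face images map to the correct identity labels.
    model = TinyDCNN()
    faces = torch.randn(4, 3, 64, 64)        # stand-in for a batch of face images
    print(model(faces).shape)                # torch.Size([4, 10])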

Prof. Malach and Ms. Grossman, collaborating with Guy Gaziv, a PhD candidate in the laboratory of Prof. Michal Irani in the Department of Computer Science and Applied Mathematics, compared DCNNs and the human brain, examining how images of faces elicit specific signaling patterns. In humans, such signals are electrical pulses that travel between neurons in the brain; in artificial networks, they are numerical activation values passed between the network’s artificial neurons.

Data for the brain-based studies came from the lab of Prof. Ashesh Mehta, Prof. Malach’s colleague at the Feinstein Institute for Medical Research in Manhasset, New York. In the Mehta lab, 33 epilepsy patients who had electrodes implanted in different areas of their brains in the course of a diagnostic procedure volunteered to be monitored while viewing a series of images of faces, celebrities as well as unfamiliar individuals.

What humans and machines have in common

Recordings from the electrodes revealed that each image elicited a unique “signature” of neuronal activity, with specific groups of neurons active at various intensities. Interestingly, some pairs of faces elicited similar-looking brain activity signatures while other pairs elicited activation patterns that differed greatly from one another.
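A simple way to picture these signatures, using made-up numbers in place of real electrode recordings, is to treat each image’s response as a vector and correlate the vectors pairwise; the resulting matrix shows which faces evoke similar activity. This is a sketch of the general idea, not the paper’s analysis pipeline.

    import numpy as np

    # Hypothetical data: rows are face images, columns are recording sites.
    # A real analysis would use electrode signals; these numbers are random.
    rng = np.random.default_rng(0)
    responses = rng.normal(size=(20, 50))          # 20 faces x 50 sites

    # Each row is one image's activity "signature"; correlating rows pairwise
    # shows which faces evoke similar patterns and which differ greatly.
    signature_similarity = np.corrcoef(responses)  # 20 x 20 matrix
    print(signature_similarity.shape)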

Reasoning that these neural signatures were critical for allowing face recognition, Ms. Grossman set out to determine whether similar signatures appeared in the DCNN. To test this hypothesis, she presented the AI network with the same series of images shown to the human volunteers. The results were striking: the activation patterns generated in the network bore a strong structural resemblance to the signatures recorded from the human brain.
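One standard way to quantify such a resemblance, known as representational similarity analysis, is to compare the two systems’ pairwise-similarity matrices. The sketch below uses random stand-in data and a Spearman correlation, which may differ from the exact metric used in the paper.

    import numpy as np
    from scipy.stats import spearmanr

    def upper_triangle(m):
        # Off-diagonal entries of a square similarity matrix, flattened.
        i, j = np.triu_indices(m.shape[0], k=1)
        return m[i, j]

    rng = np.random.default_rng(1)
    brain_sim = np.corrcoef(rng.normal(size=(20, 50)))   # stand-in for electrode signatures
    net_sim = np.corrcoef(rng.normal(size=(20, 512)))    # stand-in for DCNN layer activations

    # If brain and network organize the faces similarly, their patterns of
    # pairwise similarities should agree even though the features differ.
    rho, p = spearmanr(upper_triangle(brain_sim), upper_triangle(net_sim))
    print(f"similarity-structure correlation: rho={rho:.2f} (p={p:.3f})")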

So what does this mean for science? Much as early airplane designers incorporated bird-like wings into their machines to make them fly, this study demonstrates the common features that enable both biological and artificial networks to achieve their remarkable recognition abilities. By identifying the similar approaches that machines and humans take to visual face perception, Prof. Malach and his team have set out on an intriguing new path, one capable of bringing both neuroscience and AI to greater heights.

Prof. Rafi Malach’s research is supported by the Barbara and Morris L. Levinson Professorial Chair in Brain Research; the Dr. Lou Siminovitch Laboratory for Research in Neurobiology; and the estate of Florence and Charles Cuevas.

Prof. Rafi Malach