The rules of recall
In remembering, our brains see the forest for the trees
When you first see a person, your brain extracts his or her visual features: face shape, eye color, and so on. Your brain then builds a holistic perception of the person’s face by integrating these little details. Scientists have long assumed that perceptual integration works the same way during recall: bottom up, with small details building up to a full picture.
But a new study conducted by Prof. Michail Tsodyks of the Department of Neurobiology at the Weizmann Institute of Science, together with Dr. Nian Qian, an experimental neuroscientist at Columbia University, suggests the opposite: the brain uses higher-level concepts (e.g., “He was an elderly man”) to reconstruct the details (the wrinkles on the man’s face or the gray of his hair), filling in the gaps from the top down.
The study, recently published in the Proceedings of the National Academy of Sciences, challenges established assumptions about the process of recall, offering a new framework for understanding perception, one in which memory plays a crucial role.
“We tend to consider perception and memory as two separate cognitive processes, but they are interconnected, with one influencing the other,” says Prof. Tsodyks, a theoretical neuroscientist.
Prof. Tsodyks’s interest in the interconnectedness of perception and memory was piqued when he saw Dr. Qian’s recent experimental data on the stability and fallibility of visual memories (why some memories are reliable and others false) and their implications for theories of perception.
As an illustration, Prof. Tsodyks asks us to consider Alice, who is thinking about her friend Bob, whom she has just met. She believes her recall of Bob is instantaneous, but in reality it is a multi-step process in which Alice’s brain integrates stored knowledge of Bob’s face (its shape, color, and so on). In fact, the first thing she remembers isn’t the details but the gist, such as “Bob was happy.” Alice may not be consciously aware of how she knew Bob was happy when she saw him (crinkled eye corners, an upturned mouth), but this higher-level memory of his emotional state is inherently more stable, and therefore what Alice’s brain recalls first.
By comparison, the brain’s ability to store the small details (the size of a nose, for instance) is far less reliable, as stories of discredited eyewitness testimony have shown. Errors begin to accumulate the moment Alice looks away.
Prof. Tsodyks theorized that the brain relies more on stable memories than on error-prone ones, and that representations of higher-order features constrain those of lower-order features. As in: “Alice isn’t sure whether Bob was smiling, but she remembers that he was happy, so he couldn’t have been frowning.”
To test this theory, Prof. Tsodyks, Dr. Qian and their teams probed participants’ memories of the exact angles of two lines displayed sequentially on a computer screen. Participants’ memories of the exact angles were typically imprecise; nevertheless, the researchers showed that participants reliably reported the relationship between the two lines correctly. This gave Prof. Tsodyks and Dr. Qian clear evidence that participants used a “reverse decoding” method in recall: from higher-level features down to lower-level ones.
The researchers then constructed a computational model of this cognitive process in which the higher-order information, rather than the lower-level information, serves as the prior assumption. The model’s predictions matched the human behavioral data.
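The reverse-decoding idea behind the line-angle experiment can be sketched in a toy simulation. This is only an illustration of the general principle, not the paper’s actual model: it assumes (hypothetically) Gaussian noise on the low-level angle memories and a perfectly remembered ordinal relation between the two angles, and all names and parameter values are made up for the example.

```python
import random
from statistics import mean

random.seed(0)

def recall_angles(theta1, theta2, noise_sd=8.0, n_samples=20000):
    """Toy 'reverse decoding' sketch: noisy low-level memory traces of two
    line angles (in degrees) are filtered by a reliably remembered
    higher-order feature -- here, which line was tilted further.
    All parameters are illustrative, not taken from the study."""
    # Noisy low-level memory traces for each angle
    s1 = [theta1 + random.gauss(0.0, noise_sd) for _ in range(n_samples)]
    s2 = [theta2 + random.gauss(0.0, noise_sd) for _ in range(n_samples)]

    # Higher-order feature, assumed to be stored reliably:
    # the ordinal relation between the two angles
    first_is_smaller = theta1 < theta2

    # Keep only joint samples consistent with the remembered relation,
    # i.e., use the higher-order feature as the prior constraint
    pairs = [(a, b) for a, b in zip(s1, s2) if (a < b) == first_is_smaller]
    return mean(a for a, _ in pairs), mean(b for _, b in pairs)

theta1, theta2 = 40.0, 50.0
est1, est2 = recall_angles(theta1, theta2)
```

In this sketch the recalled relation between the two angles always comes out correct, even though each individual angle trace is noisy, which mirrors the behavioral pattern described above: imprecise low-level details, reliable higher-order structure.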
In other words, our brains are wired to see the forest for the trees.
Prof. Tsodyks is funded by the Irving B. Harris Fund for New Directions in Brain Research. He is the incumbent of the Gerald and Hedy Oliven Professorial Chair in Brain Research.