Elad Schneidman’s Lab
Weizmann Institute of Science   

 


We are grateful for the support of:

European Research Council (ERC)
Human Frontier Science Program (HFSP)
Simons Foundation Autism Research Initiative
The Israeli Science Foundation
US-Israel Bi-national Science Foundation
The Minerva-Weizmann Foundation
EU/FP7 EraSysBio+ program
Center for Positive Neuroscience
Horowitz Foundation, Center for Complexity Science
The Peter and Patricia Gruber Foundation
The Clore Center for Biological Physics


We are interested in the design and function of networks of neurons and other biological networks, asking how they represent and process information, develop, learn, and make decisions. To answer these questions we combine theoretical work, modeling, analysis of experimental data, and behavioral experiments, and use tools from statistical physics, machine learning, information theory, and more.
We study the nature of the “neural code,” focusing on how large populations of neurons work collectively; the nature and implications of noise for neural function; animal swarming and collective decision making; learning and inference; neural adaptation; sensory information processing; and the functional architecture of other biological and non-biological networks. Examples of current projects:

READING THE CODE OF LARGE POPULATIONS OF NEURONS

While most of our understanding of the brain comes from single-neuron studies, almost all of what we regard as interesting brain function arises from the joint activity of large groups of neurons. We therefore analyze and model the joint activity patterns of large populations of neurons responding to naturalistic and artificial stimuli, and of populations directing decisions. Uncovering the functional architecture of neural networks and understanding the neural population code require appropriate mathematical tools; we use maximum entropy models [Schneidman+al_03], or Ising models, to model large networks and to characterize the underlying interactions between cells [Schneidman+al_06]. Extending these models to very large networks [Ganmor+al_11a], we have shown that a sparse network of high-order interactions underlies the neural population code under natural stimuli [Ganmor+al_11b]. We are currently studying how these networks may change when neural systems adapt and learn, how well we can reconstruct stimuli from population activity [Tkacik+al_13], and the effects of network noise on information encoding [Granot-Atedgi+al_13]. Theoretically, we study how neural networks may be designed to overcome neural noise and to optimize information coding and computation [Tkacik+al_10].
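As a concrete illustration of this modeling approach, the sketch below fits a pairwise maximum entropy (Ising) model to binary population activity by matching the measured firing rates and pairwise correlations. It is a toy Python example under our own assumptions (exact enumeration, simulated spike patterns, illustrative function names), not the fitting code used in the papers cited above, which handle far larger populations.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(spikes, n_iter=2000, lr=0.1):
    """Fit a pairwise maximum entropy (Ising) model
    P(x) ~ exp(h.x + sum_{i<j} J_ij x_i x_j) to binary population
    patterns by gradient ascent on the log-likelihood.
    Uses exact enumeration of all 2^n patterns, so it is only
    practical for small populations (roughly n <= 15 cells)."""
    n_samples, n_cells = spikes.shape
    # empirical moments the model must reproduce
    emp_mean = spikes.mean(axis=0)
    emp_corr = spikes.T @ spikes / n_samples

    # enumerate every binary pattern once
    patterns = np.array(list(itertools.product([0, 1], repeat=n_cells)), dtype=float)

    h = np.zeros(n_cells)              # single-cell fields
    J = np.zeros((n_cells, n_cells))   # pairwise couplings (upper triangle used)

    for _ in range(n_iter):
        # model distribution over all patterns
        energies = patterns @ h + np.einsum('ki,ij,kj->k', patterns, np.triu(J, 1), patterns)
        p = np.exp(energies - energies.max())
        p /= p.sum()
        # model moments
        mod_mean = p @ patterns
        mod_corr = (patterns * p[:, None]).T @ patterns
        # gradient step: move the model moments toward the empirical ones
        h += lr * (emp_mean - mod_mean)
        J += lr * np.triu(emp_corr - mod_corr, 1)
    return h, J

# toy usage: 8 simulated cells, 5000 binary activity patterns
rng = np.random.default_rng(0)
toy_spikes = (rng.random((5000, 8)) < 0.1).astype(float)
h, J = fit_pairwise_maxent(toy_spikes)
```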

COLLECTIVE DECISION MAKING & LEARNING IN (ANIMAL) GROUPS


We study the effects of learning, communication, and memory on the efficiency and accuracy of collective behavior and decisions in (animal) groups. We simulate the behavior of groups looking for “food,” or escaping from a predator, using simple interaction models in which each individual combines its own sensory information with the observed trajectories of its neighbors. We extend current models of interacting particles by adding a simple learning ability to individuals, and ask what the individual and collective design principles of group behavior might be [Shklarsh+al_11]. Studying real animal groups, we have found that mice living in a naturalistic environment show high-order interactions that go beyond pairwise-based models [Shemesh+al_13]; we are currently studying collective behavior and learning in groups of fish.
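The sketch below conveys the flavor of such an interacting-particle simulation: each agent blends a noisy sensory estimate of the direction toward a “food” source with the average heading of its nearby neighbors. The weights, the sensory field, and the function name are our own illustrative assumptions, not the specific model of [Shklarsh+al_11].

```python
import numpy as np

def simulate_group(n_agents=50, n_steps=500, w_sense=0.6, w_social=0.4,
                   neighbor_radius=2.0, step=0.05, noise=0.05, seed=0):
    """Minimal interacting-particle simulation: each agent moves along a
    noisy estimate of the direction to a food source, blended with the
    mean heading of its neighbors (alignment)."""
    rng = np.random.default_rng(seed)
    food_peak = np.array([5.0, 5.0])                # location of the food source
    pos = rng.normal(0.0, 1.0, size=(n_agents, 2))  # initial positions
    vel = rng.normal(0.0, 1.0, size=(n_agents, 2))
    vel /= np.linalg.norm(vel, axis=1, keepdims=True)

    for _ in range(n_steps):
        # sensory term: noisy direction toward the food source
        to_food = food_peak - pos
        sense = to_food / (np.linalg.norm(to_food, axis=1, keepdims=True) + 1e-9)
        sense += noise * rng.normal(size=sense.shape)

        # social term: mean heading of all neighbors within neighbor_radius
        dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
        neighbors = dists < neighbor_radius          # each agent counts as its own neighbor
        social = (neighbors[:, :, None] * vel[None, :, :]).sum(axis=1)
        social /= neighbors.sum(axis=1, keepdims=True)

        # blend sensory and social information, normalize, and take a step
        vel = w_sense * sense + w_social * social
        vel /= np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9
        pos += step * vel
    return pos

final_pos = simulate_group()
print("mean distance to food:", np.linalg.norm(final_pos - [5.0, 5.0], axis=1).mean())
```

In this setup, varying the weights w_sense and w_social is one simple way to ask how much an individual should rely on its own (noisy) sensing versus on its neighbors.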


COMPUTATIONAL MODELS OF PATTERN LEARNING


Humans and other animals commonly learn from examples, and many psychophysical studies have explored what kinds of rules or relations in patterns can be learned. To understand the individual traits of human learning, we ask how well we can capture the learning process itself at the level of the individual. We use psychophysical learning tasks to characterize individual differences in human performance, and seek the “algorithms” that people use in learning such rules. We found that we can fit accurate models to the learning curves and specific responses of individuals and predict their answers, while also identifying the computational motifs and the priors that subjects use. We then use the individual model fit to each subject to improve their performance, thus providing personalized teaching [Cohen+Schneidman_13].
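A minimal sketch of this kind of trial-by-trial model fitting is shown below, assuming a toy hypothesis space of single-feature rules and a per-subject lapse (guessing) rate; the function names and the specific learner are illustrative assumptions, not the models of [Cohen+Schneidman_13].

```python
import numpy as np

def predict_responses(stimuli, labels, lapse=0.05):
    """Toy trial-by-trial learner. 'stimuli' is a binary array of shape
    (n_trials, n_features) and 'labels' holds each trial's binary label.
    The learner keeps a posterior over a small hypothesis space of rules
    ("the label equals feature i"), predicts each response from the
    posterior before seeing that trial's feedback, then updates on the
    revealed label. 'lapse' is a per-subject guessing rate."""
    n_trials, n_features = stimuli.shape
    posterior = np.ones(n_features) / n_features     # uniform prior over candidate rules
    p_correct = np.zeros(n_trials)

    for t in range(n_trials):
        # posterior-predictive probability that this trial's label is 1
        p_label1 = posterior @ stimuli[t]
        p_resp1 = (1 - lapse) * p_label1 + lapse * 0.5   # mix in random guessing
        p_correct[t] = p_resp1 if labels[t] == 1 else 1 - p_resp1
        # soft Bayesian update: favor rules consistent with the revealed label
        likelihood = np.where(stimuli[t] == labels[t], 0.95, 0.05)
        posterior = posterior * likelihood
        posterior /= posterior.sum()
    return p_correct

def fit_lapse(stimuli, labels, subject_correct, grid=np.linspace(0.01, 0.5, 50)):
    """Fit the lapse rate of one subject by maximizing the likelihood of
    their observed sequence of correct/incorrect responses."""
    def loglik(lapse):
        p = predict_responses(stimuli, labels, lapse)
        return np.sum(np.log(np.where(subject_correct, p, 1 - p)))
    return grid[np.argmax([loglik(g) for g in grid])]
```

Once such a model is fit to a subject, its trial-by-trial predictions can be used, for example, to pick the next training example on which the model (and presumably the subject) is most likely to err.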
