Selected publications
-
(2022) Physical Review X. 12, p. 021051 Abstract
The mapping of the wiring diagrams of neural circuits promises to allow us to link the structure and function of neural networks. Current approaches to analyzing such connectomes rely mainly on graph-theoretical tools, but these may downplay the complex nonlinear dynamics of single neurons and the way networks respond to their inputs. Here, we measure the functional similarity of simulated networks of neurons, by quantifying the similitude of their spiking patterns in response to the same stimuli. We find that common graph-theory metrics convey little information about the similarity of networks’ responses. Instead, we learn a functional metric between networks based on their synaptic differences and show that it accurately predicts the similarity of novel networks, for a wide range of stimuli. We then show that a sparse set of architectural features—the sum of synaptic inputs that each neuron receives and the sum of each neuron’s synaptic outputs—predicts the functional similarity of networks of up to 1000 neurons, with high accuracy. We thus suggest new architectural design principles that shape the function of neural networks. These architectural features conform with experimental evidence of homeostatic synaptic mechanisms.
-
(2021) Neuron. 109, p. 1-13 Abstract
Learning new rules and adopting novel behavioral policies is a prominent adaptive behavior of primates. We studied the dynamics of single neurons in the dorsal anterior cingulate cortex and putamen of monkeys while they learned new classification tasks every few days over a fixed set of multi-cue patterns. Representing the rules and the neuronal selectivity as vectors in the space spanned by a set of stimulus features allowed us to characterize neuronal dynamics in geometrical terms. We found that neurons in the cingulate cortex mainly rotated toward the rule, implying a policy search, whereas neurons in the putamen showed a magnitude increase that followed the rotation of cortical neurons, implying strengthening of confidence in the newly acquired rule-based policy. Further, the neural representation at the end of a session predicted next-day behavior, reflecting overnight retention. The novel framework for characterization of neural dynamics suggests complementary roles for the putamen and the anterior cingulate cortex.
-
(2020) Proceedings of the National Academy of Sciences. 117, 40, p. 25066-25073 Abstract
We present a theory of neural circuits’ design and function, inspired by the random connectivity of real neural circuits and the mathematical power of random projections. Specifically, we introduce a family of statistical models for large neural population codes, a straightforward neural circuit architecture that would implement these models, and a biologically plausible learning rule for such circuits. The resulting neural architecture suggests a design principle for neural circuits: namely, that they learn to compute the mathematical surprise of their inputs, given past inputs, without an explicit teaching signal. We applied these models to recordings from large neural populations in monkeys’ visual and prefrontal cortices and found them to be highly accurate, efficient, and scalable.
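The circuit architecture described above can be caricatured in a few lines. This is a hedged sketch, not the paper's code: the names `W`, `theta`, and `lam` are ours, the projections are sparse random threshold functions, and the coefficients are stand-in placeholders rather than fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 50                                 # input neurons, random projections

# Sparse random connectivity: each projection sees ~20% of the inputs.
W = rng.normal(size=(k, n)) * (rng.random((k, n)) < 0.2)
theta = rng.normal(size=k)                    # thresholds of the projection neurons
lam = rng.normal(scale=0.1, size=k)           # placeholder for learned coefficients

def projections(x):
    """Binary random threshold projections: h_i(x) = 1 if W_i . x > theta_i."""
    return (W @ x > theta).astype(float)

def log_unnorm_prob(x):
    """Unnormalized log-probability of a binary population pattern x."""
    return lam @ projections(x)

x = rng.integers(0, 2, size=n).astype(float)
logp = log_unnorm_prob(x)
```

The point of the sketch is the architecture: a fixed layer of sparse random threshold units feeding a learned readout, so that learning only ever touches `lam`, not the random wiring.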
-
(2020) eLife. 9, p. e56196 Abstract
The social interactions underlying group foraging and their benefits have been mostly studied using mechanistic models replicating qualitative features of group behavior, and focused on a single resource or a few clustered ones. Here, we tracked groups of freely foraging adult zebrafish with spatially dispersed food items and found that fish perform stereotypical maneuvers when consuming food, which attract neighboring fish. We then present a mathematical model, based on inferred functional interactions between fish, which accurately describes individual and group foraging of real fish. We show that these interactions allow fish to combine individual and social information to achieve near-optimal foraging efficiency and promote income equality within groups. We further show that the interactions that would maximize efficiency in these social foraging models depend on group size, but not on food distribution - suggesting that fish may adaptively pick the subgroup of neighbors they 'listen to' to determine their own behavior.
-
(2017) Proceedings of the National Academy of Sciences of the United States of America. 114, 22, p. 5589-5594 Abstract
Individual behavior, in biology, economics, and computer science, is often described in terms of balancing exploration and exploitation. Foraging has been a canonical setting for studying reward seeking and information gathering, from bacteria to humans, mostly focusing on individual behavior. Inspired by the gradient-climbing nature of chemotaxis, the infotaxis algorithm showed that locally maximizing the expected information gain leads to efficient and ethological individual foraging. In nature, as well as in theoretical settings, conspecifics can be a valuable source of information about the environment. Whereas the nature and role of interactions between animals have been studied extensively, the design principles of information processing in such groups are mostly unknown. We present an algorithm for group foraging, which we term "socialtaxis," that unifies infotaxis and social interactions, where each individual in the group simultaneously maximizes its own sensory information and a social information term. Surprisingly, we show that when individuals aim to increase their information diversity, efficient collective behavior emerges in groups of opportunistic agents, which is comparable to the optimal group behavior. Importantly, we show the high efficiency of biologically plausible socialtaxis settings, where agents share little or no information and rely on simple computations to infer information from the behavior of their conspecifics. Moreover, socialtaxis does not require parameter tuning and is highly robust to sensory and behavioral noise. We use socialtaxis to predict distinct optimal couplings in groups of selfish vs. altruistic agents, reflecting how it can be naturally extended to study social dynamics and collective computation in general settings.
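The core decision rule described above combines an agent's own expected information gain with a social term. A minimal hedged sketch (the function and variable names are ours, and the gain values below are toy numbers, not outputs of a real infotaxis computation):

```python
import numpy as np

def socialtaxis_step(env_gain, social_gain, beta):
    """Choose the candidate move maximizing individual + social information.

    env_gain:    expected individual (infotaxis-style) information gain per move
    social_gain: expected social-information gain per move
    beta:        weight of the social term; beta = 0 recovers plain infotaxis
    """
    utility = np.asarray(env_gain) + beta * np.asarray(social_gain)
    return int(np.argmax(utility))

# Toy example: adding the social term flips the preferred move.
best_selfish = socialtaxis_step([0.2, 0.5, 0.1], [0.4, 0.0, 0.3], beta=0.0)  # move 1
best_social = socialtaxis_step([0.2, 0.5, 0.1], [0.4, 0.0, 0.3], beta=1.0)   # move 0
```

The single weight `beta` is what lets the same rule interpolate between purely selfish infotaxis and strongly social behavior.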
-
(2015) eLife. 4, p. e06134 Abstract
Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns.
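One natural way to write the information-based similarity described above (hedged; the notation here is ours): two population responses $r_i$ and $r_j$ are synonymous to the extent that they imply the same stimuli, which can be quantified by the divergence between their posterior stimulus distributions,

```latex
d(r_i, r_j) = D_{\mathrm{JS}}\!\left( P(s \mid r_i) \,\|\, P(s \mid r_j) \right)
```

where $D_{\mathrm{JS}}$ is the Jensen-Shannon divergence. Clustering responses under such a metric is what yields the "thesaurus" of synonymous activity patterns.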
-
(2013) eLife. 2. Abstract
Social behavior in mammals is often studied in pairs under artificial conditions, yet groups may rely on more complicated social structures. Here, we use a novel system for tracking multiple animals in a rich environment to characterize the nature of group behavior and interactions, and show strongly correlated group behavior in mice. We have found that the minimal models that rely only on individual traits and pairwise correlations between animals are not enough to capture group behavior, but that models that include third-order interactions give a very accurate description of the group. These models allow us to infer social interaction maps for individual groups. Using this approach, we show that environmental complexity during adolescence affects the collective group behavior of adult mice, in particular altering the role of high-order structure. Our results provide new experimental and mathematical frameworks for studying group behavior and social interactions.
-
(2013) Proceedings of the National Academy of Sciences of the United States of America. 110, 2, p. 684-689 Abstract
Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects' priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning.
-
(2011) Proceedings of the National Academy of Sciences of the United States of America. 108, 23, p. 9679-9684 Abstract
Information is carried in the brain by the joint activity patterns of large groups of neurons. Understanding the structure and function of population neural codes is challenging because of the exponential number of possible activity patterns and dependencies among neurons. We report here that for groups of ~100 retinal neurons responding to natural stimuli, pairwise-based models, which were highly accurate for small networks, are no longer sufficient. We show that because of the sparse nature of the neural code, the higher-order interactions can be easily learned using a novel model and that a very sparse low-order interaction network underlies the code of large populations of neurons. Additionally, we show that the interaction network is organized in a hierarchical and modular manner, which hints at scalability. Our results suggest that learnability may be a key feature of the neural code.
-
(2010) Proceedings of the National Academy of Sciences of the United States of America. 107, 32, p. 14419-14424 Abstract
In retina and in cortical slice the collective response of spiking neural populations is well described by "maximum-entropy" models in which only pairs of neurons interact. We asked, how should such interactions be organized to maximize the amount of information represented in population responses? To this end, we extended the linear-nonlinear-Poisson model of single neural response to include pairwise interactions, yielding a stimulus-dependent, pairwise maximum-entropy model. We found that as we varied the noise level in single neurons and the distribution of network inputs, the optimal pairwise interactions smoothly interpolated to achieve network functions that are usually regarded as discrete-stimulus decorrelation, error correction, and independent encoding. These functions reflected a trade-off between efficient consumption of finite neural bandwidth and the use of redundancy to mitigate noise. Spontaneous activity in the optimal network reflected stimulus-induced activity patterns, and single-neuron response variability overestimated network noise. Our analysis suggests that rather than having a single coding principle hardwired in their architecture, networks in the brain should adapt their function to changing noise and stimulus correlations.
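In hedged notation (symbols ours), the stimulus-dependent pairwise maximum-entropy model described above attaches an LNP-style stimulus drive to each neuron while keeping pairwise couplings:

```latex
P(\sigma \mid s) = \frac{1}{Z(s)} \exp\!\Big( \sum_i h_i(s)\,\sigma_i + \sum_{i<j} J_{ij}\,\sigma_i \sigma_j \Big)
```

where $\sigma_i \in \{0,1\}$ is the spiking state of neuron $i$, $h_i(s)$ is its stimulus-dependent input, $J_{ij}$ are the pairwise interactions being optimized, and $Z(s)$ normalizes the distribution for each stimulus.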
-
(2006) Nature. 440, 7087, p. 1007-1012 Abstract
Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher-order interactions among large groups of elements have an important role. Here we show, in the vertebrate retina, that weak correlations between pairs of neurons coexist with strongly collective behaviour in the responses of ten or more neurons. We find that this collective behaviour is described quantitatively by models that capture the observed pairwise correlations but assume no higher-order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behaviour. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons.
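The pairwise maximum entropy model referred to here has the standard Ising form; with $\sigma_i = \pm 1$ denoting silence or spiking of neuron $i$,

```latex
P(\sigma) = \frac{1}{Z} \exp\!\Big( \sum_i h_i \sigma_i + \sum_{i<j} J_{ij} \sigma_i \sigma_j \Big)
```

where the fields $h_i$ and couplings $J_{ij}$ are fit so that the model reproduces the measured firing rates $\langle\sigma_i\rangle$ and pairwise correlations $\langle\sigma_i\sigma_j\rangle$, and nothing more: any higher-order structure the model predicts is forced by the pairwise constraints alone.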
-
(2003) Physical Review Letters. 91, 23, p. 238701 Abstract
Entropy and information provide natural measures of correlation among elements in a network. We construct here the information theoretic analog of connected correlation functions: irreducible N-point correlation is measured by a decrease in entropy for the joint distribution of N variables relative to the maximum entropy allowed by all the observed N-1 variable distributions. We calculate the "connected information" terms for several examples and show that it also enables the decomposition of the information that is carried by a population of elements about an outside source.
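Concretely, the construction described above compares the true joint distribution with the maximum-entropy distribution consistent with all lower-order marginals (hedged; symbols ours):

```latex
I_C^{(N)} = S\!\left[\tilde{P}^{(N-1)}\right] - S\!\left[P^{(N)}\right]
```

where $P^{(N)}$ is the observed joint distribution of the $N$ variables, $\tilde{P}^{(N-1)}$ is the maximum-entropy distribution consistent with all of its $(N-1)$-variable marginals, and $S[\cdot]$ is the entropy. $I_C^{(N)} \ge 0$, vanishing exactly when there is no irreducible $N$-point correlation.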
-
(2003) Journal of Neuroscience. 23, 37, p. 11539-11553 Abstract
A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3) information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we distinguish questions about how information is encoded by a population of neurons from how that information can be decoded. Although information theory is natural and powerful for questions of encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of the deviations of estimated stimuli from actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as for encoding.
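The three notions of independence and their associated information measures can be written compactly for two cells with responses $r_1, r_2$ and stimulus $s$ (notation ours):

```latex
\begin{aligned}
\text{activity independence:} \quad & I(r_1; r_2) = 0 \\
\text{conditional independence:} \quad & I(r_1; r_2 \mid s) = 0 \\
\text{information independence:} \quad & \mathrm{Syn} = I(r_1, r_2; s) - I(r_1; s) - I(r_2; s) = 0
\end{aligned}
```

Positive synergy means the pair conveys more information jointly than the sum of its parts; negative synergy indicates redundancy.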
-
(1998) Neural Computation. 10, 7, p. 1679-1703 Abstract
The firing reliability and precision of an isopotential membrane patch consisting of a realistically large number of ion channels is investigated using a stochastic Hodgkin-Huxley (HH) model. In sharp contrast to the deterministic HH model, the biophysically inspired stochastic model reproduces qualitatively the different reliability and precision characteristics of spike firing in response to DC and fluctuating current input in neocortical neurons, as reported by Mainen & Sejnowski (1995). For DC inputs, spike timing is highly unreliable; the reliability and precision are significantly increased for fluctuating current input. This behavior is critically determined by the relatively small number of excitable channels that are opened near threshold for spike firing rather than by the total number of channels that exist in the membrane patch. Channel fluctuations, together with the inherent bistability in the HH equations, give rise to three additional experimentally observed phenomena: subthreshold oscillations in the membrane voltage for DC input, "spontaneous" spikes for subthreshold inputs, and "missing" spikes for suprathreshold inputs. We suggest that the noise inherent in the operation of ion channels enables neurons to act as "smart" encoders. Slowly varying, uncorrelated inputs are coded with low reliability and accuracy and, hence, the information about such inputs is encoded almost exclusively by the spike rate. On the other hand, correlated presynaptic activity produces sharp fluctuations in the input to the postsynaptic cell, which are then encoded with high reliability and accuracy. In this case, information about the input exists in the exact timing of the spikes. We conclude that channel stochasticity should be considered in realistic models of neurons.
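A toy illustration of the key point above, that what matters is the small number of channels gating near threshold: a two-state Markov channel population whose open-fraction fluctuations shrink as the patch grows. This is a hedged sketch, not the paper's stochastic HH model; the rates and patch sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_gate(n_channels, r_open=5.0, r_close=5.0, steps=2000, dt=0.01):
    """Two-state Markov gating for a patch of n_channels ion channels.

    Each closed channel opens with probability r_open*dt per step and each
    open channel closes with probability r_close*dt. Returns the fraction
    of open channels over time.
    """
    n_open = 0
    frac = np.empty(steps)
    for t in range(steps):
        n_open += rng.binomial(n_channels - n_open, r_open * dt)
        n_open -= rng.binomial(n_open, r_close * dt)
        frac[t] = n_open / n_channels
    return frac

# Channel noise is relatively larger in small patches (discard the transient):
sigma_small = simulate_gate(100)[200:].std()
sigma_large = simulate_gate(10000)[200:].std()
```

With equal opening and closing rates the steady-state open fraction is 1/2, and the fluctuations around it scale roughly as 1/sqrt(N), so the patch with 100x more channels fluctuates about 10x less; it is these finite-N fluctuations that the deterministic HH equations discard.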