All events, All years

Topographic mapping of a hierarchy of temporal receptive windows using natural stimuli

Lecture
Date:
Thursday, January 13, 2011
Hour: 12:00
Location:
Arthur and Rochelle Belfer Building for Biomedical Research
Prof. Uri Hasson
|
Dept of Psychology, Princeton University

Space and time are two fundamental properties of our physical and psychological realms. While much is known about the integration of information across space within the visual system, little is known about the integration of information over time. Using two complementary methods, functional magnetic resonance imaging (fMRI) and intracranial electroencephalography (iEEG), I will present evidence that the brain uses similar strategies for integrating information over space and over time. It is well established that neurons along visual cortical pathways have increasingly large spatial receptive fields. This is a basic organizing principle of the visual system: neurons in higher-level visual areas receive input from low-level neurons with smaller receptive fields and thereby accumulate information over space. Drawing an analogy with the spatial receptive field (SRF), we defined the temporal receptive window (TRW) of a neuron as the length of time prior to a response during which sensory information may affect that response. As with SRFs, the topographical organization of the TRWs is distributed and hierarchical. The accumulation of information over time is distributed in the sense that each brain area has the capacity to accumulate information over time. The processing is hierarchical because the capacity of each TRW increases from early sensory areas to higher-order perceptual and cognitive areas. Early sensory cortices such as the primary auditory or visual cortex have relatively short TRWs (up to hundreds of milliseconds), while TRWs in higher-order areas can accumulate information over many minutes.
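
The TRW definition above has a simple operational reading: ask how far back in time the stimulus still influences the present response. The toy sketch below is my own illustration of that reading, not the natural-stimulus analyses described in the talk; it simulates responses that integrate a white-noise stimulus over a short or a long window (standing in for an early sensory area versus a higher-order area) and recovers the window length from stimulus-response correlations. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy responses that integrate a white-noise "stimulus" over a short or a long window.
n, dt = 20_000, 0.05                      # number of samples and sample period (s)
stimulus = rng.normal(size=n)

def integrate(stim, window_s, noise=0.3):
    """Response = running average of the past `window_s` seconds of stimulus, plus noise."""
    k = max(1, int(window_s / dt))
    kernel = np.ones(k) / k
    return np.convolve(stim, kernel, mode="full")[:n] + noise * rng.normal(size=n)

def estimated_trw(resp, stim, max_lag_s=5.0, thresh=0.05):
    """Longest lag at which the past stimulus still correlates with the response."""
    lags = np.arange(int(max_lag_s / dt))
    r = np.array([np.corrcoef(resp[lag:], stim[:n - lag])[0, 1] for lag in lags])
    above = np.nonzero(np.abs(r) > thresh)[0]
    return (above.max() + 1) * dt if len(above) else 0.0

for true_window in (0.2, 2.0):            # "hundreds of milliseconds" vs a longer window
    resp = integrate(stimulus, true_window)
    print(f"true integration window {true_window:.1f} s -> "
          f"estimated TRW {estimated_trw(resp, stimulus):.1f} s")
```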

Multimodal interactions in primary auditory cortex: Laminar dependence & modulation by general anesthetics

Lecture
Date:
Tuesday, January 11, 2011
Hour: 12:30
Location:
Jacob Ziskind Building
Prof. Matthew I. Banks
|
University of Wisconsin, USA

Current theories of the neural basis of sensory awareness suggest that neocortex is constantly comparing expected with observed sensory information. This comparison arises through the integration of ascending inputs from the sensory periphery and descending cortical inputs from the same or other sensory modalities. The importance of this integrative process for awareness is suggested by its selective loss upon anesthetic-induced hypnosis and during slow-wave sleep, but how this integration and its disruption by anesthetics occur within a cortical column is unclear. Using electrophysiological and imaging techniques in rodents in vivo and in brain slices, we show that extrastriate visual cortex provides descending input to primary auditory cortex that modulates responses to auditory stimuli, and that the integration of these information streams is disrupted by general anesthetics.

A Neural Mechanism for Reasoning and Believing

Lecture
Date:
Wednesday, January 5, 2011
Hour: 15:30
Location:
Arthur and Rochelle Belfer Building for Biomedical Research
Prof. Michael Shadlen
|
Dept of Physiology and Biophysics, University of Washington

Spatial Memory, Healthy Cognition and Successful Aging

Lecture
Date:
Wednesday, January 5, 2011
Hour: 12:00
Location:
Nella and Leon Benoziyo Building for Brain Research
Prof. Veronique Bohbot
|
Faculty of Medicine, McGill University, Quebec, Canada

Young healthy participants spontaneously use different strategies in a virtual radial maze, an adaptation of a task typically used with rodents. We have previously shown using fMRI that people who use spatial memory strategies have increased activity in the hippocampus, whereas response strategies are associated with activity in the caudate nucleus. In addition, we used Voxel-Based Morphometry (VBM) to identify brain regions co-varying with the navigational strategies individuals used. Results showed that spatial learners have significantly more grey matter in the hippocampus and less grey matter in the caudate nucleus than response learners. The relationship between spatial memory strategies and grey matter of the hippocampus was replicated with healthy older adults. Furthermore, we found a positive correlation between spatial memory strategies and the MoCA, a test sensitive to mild cognitive impairment. Since low grey matter in the hippocampus is a risk factor for cognitive deficits in normal aging and for Alzheimer’s disease, our results suggest that using spatial memory in our everyday lives may protect against degeneration of the hippocampus and associated cognitive deficits. These results have important implications for intervention programs aimed at healthy and successful aging.

Visual Inference by Composition

Lecture
Date:
Tuesday, January 4, 2011
Hour: 12:30
Location:
Jacob Ziskind Building
Prof. Michal Irani
|
Dept of Computer Science and Applied Mathematics, WIS

In this talk I will show how complex visual tasks can be performed by exploiting redundancy in visual data. Comparing and integrating data recurrences allows us to make inferences about complex scenes, without any prior examples or prior training. I will demonstrate the power of this approach on several visual inference problems (as time permits). These include: 1. Detecting complex objects and actions (often based only on a rough hand-sketch of what we are looking for). 2. Summarizing visual data (images and video). 3. Super-resolution (from a single image). 4. Prediction of missing visual information. 5. Detecting the "irregular" and "unexpected". 6. "Segmentation by Composition".
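
As one way to make the notion of redundancy concrete: single-image methods in this spirit rely on small patches recurring within and across scales of the same image. The sketch below is my own illustration with made-up parameters, not code from the talk; it measures how often 5x5 patches of an image reappear, within a tolerance, in a coarser version of that same image. Edge-rich images score high, white noise scores low.

```python
import numpy as np

def extract_patches(img, k=5):
    """All k-by-k patches of a 2-D array, flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1) for j in range(W - k + 1)])

def downscale(img, factor=2):
    """Crude block-averaging; a real implementation would prefilter properly."""
    H, W = img.shape
    H, W = H - H % factor, W - W % factor
    return img[:H, :W].reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))

def recurrence_fraction(img, k=5, tol=5.0, stride=10):
    """Fraction of patches that recur (within an RMS intensity tolerance)
    in a coarser scale of the same image."""
    fine = extract_patches(img, k)[::stride]      # subsample patches to keep it fast
    coarse = extract_patches(downscale(img), k)
    hits = sum(np.sqrt(((coarse - p) ** 2).mean(axis=1)).min() < tol for p in fine)
    return hits / len(fine)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocky = np.kron(rng.uniform(0, 255, (12, 12)), np.ones((8, 8)))  # edge-rich, redundant
    noise = rng.uniform(0, 255, (96, 96))                             # no cross-scale structure
    print("blocky image:", recurrence_fraction(blocky))
    print("white noise :", recurrence_fraction(noise))
```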

The Optimism Bias: A tour of the positively irrational brain

Lecture
Date:
Thursday, December 30, 2010
Hour: 15:00
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Tali Sharot
|
University College London

New Developments in the Genetics of Eating Disorders

Lecture
Date:
Wednesday, December 29, 2010
Hour: 15:00
Location:
Nella and Leon Benoziyo Building for Brain Research
Allan Kaplan
|
Professor of Psychiatry, University of Toronto

The eating disorders anorexia nervosa (AN) and bulimia nervosa (BN) are serious psychiatric disorders characterized by disturbed eating behavior and characteristic psychopathology, and in the case of AN, very low weight. The mortality of AN is the highest of any psychiatric disorder. The etiology of AN and BN is multidetermined; biological, psychological and sociocultural factors predispose an individual to an eating disorder. Biologically, genes contribute significantly to the risk for eating disorders. Studies have shown that the risk of anorexia nervosa in first-degree relatives, if one parent has AN, is between 8% and 10%, compared to the general population risk of 1%. The concordance rate for MZ twins in AN is close to 70%. Approximately 70% of the variance in AN is attributable to genetic effects, whereas about 30% is attributable to unique environmental effects. For BN, approximately 60% of the variance is attributable to genetic effects, whereas about 40% is attributable to unique environmental effects. Eating disorders do not map onto one chromosome. Instead, there are heritable dimensions, such as risk of obesity, anxiety, and temperament traits such as perfectionism and obsessionality, that place an individual at risk for an eating disorder. Gender is also a genetic risk factor for an eating disorder. Being female is a risk factor not just because females are more sensitive to cultural pressure than males; females are more commonly affected by eating disorders because female brains are much more sensitive to dietary manipulation than male brains, owing to the effects of estrogen and progesterone on serotonin metabolism. Tryptophan depletion does not significantly affect levels of brain serotonin in males, but dramatically reduces the levels of serotonin produced in females’ brains. Dieting, and especially restricting carbohydrates, lowers the level of blood tryptophan available to cross the blood-brain barrier and be synthesized into serotonin. Patients who are at risk for an eating disorder will reduce the levels of serotonin produced in their brains by dieting and restricting carbohydrates, leading to changes in satiety and mood and increasing the likelihood that an eating disorder will develop. There are those who believe that binge eating develops in response to a hyposerotonergic state, in an attempt to restore the tryptophan available for brain serotonin synthesis. I have been involved in several large multi-site genetic studies of eating disorders over the past 15 years. In a linkage analysis of affected relative pairs with AN, when only restricting anorexics were included in the analyses, a significant signal was found on the long arm of chromosome 1. Candidate genes that have been found in that area of chromosome 1 include the serotonin 1D receptor gene, the delta opioid receptor gene, and the dopamine D2 receptor gene. In a linkage analysis of a sample of affected relative pairs with BN, a significant signal was found on chromosome 10 when the sample included only subjects who vomited. I am currently involved in a genome-wide association study (GWAS) of 4000 AN cases and 4000 female controls, which will hopefully elucidate which specific genes contribute to the risk for AN.
Future genetic studies we are involved in will focus on why patients with AN are able to drop their weight to dangerously low levels, whereas patients with bulimia nervosa (BN) with similar psychopathology and dysfunctional eating behaviors are protected from extreme weight loss and do not develop AN. So far, research on genes that are important for appetite and weight regulation, such as the leptin receptor (LEPR), ghrelin (GHRL), melanocortin 4 receptor (MC4R), and brain-derived neurotrophic factor (BDNF), has yielded conflicting findings in AN and BN, while other potentially relevant genes in the same systems have not been sufficiently studied. Considering that AN in adults tends to follow a chronic course and currently has no evidence-based treatments, determining the role of genetic factors in the vulnerability to achieve low weight in AN patients could be an important first step toward improved treatment.

On Informational Principles of Embodied Cognition

Lecture
Date:
Wednesday, December 29, 2010
Hour: 12:00
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Daniel Polani
|
School of Computer Science, University of Hertfordshire, UK

For many decades, Artificial Intelligence adopted a platonic view that intelligent behaviour is produced in the "brain" alone and that the body is only an incidental translator between thought and action. In the last two decades, in view of the successes of the subsumption architecture and embodied robotics, this perspective has changed to acknowledge the central importance of the body and the perception-action loop as a whole in helping an organism's brain to carry out useful ("intelligent") behaviours. A central keyword for this phenomenon is, of course, "environmental/morphological computation" (Paul 2006; Pfeifer and Bongard 2007). The question arises: how and why exactly does this work? What are the principles that make environmental computation work so successfully, and how can the contribution that the body provides to cognition be characterized objectively? In recent years, Information Theory has been identified as providing a natural language to characterize cognitive processing and cognitive invariants, as well as the contribution of the embodiment to the cognitive process. The talk will review some highlights of the current state of the art in the field and provide some (sometimes quite surprising) illustrations of the power of the informational view of cognition.
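
As a hint of what an informational view can look like in practice, one can ask how many bits of an agent's actions come back through its sensors: a well-coupled body-environment channel carries more information than a noisy one. The toy below is a generic illustration of mine with invented variables, not the speaker's specific measures.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def mutual_information(pairs):
    """Empirical mutual information (bits) between the two elements of each pair."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    ps = Counter(s for _, s in pairs)
    return sum((c / n) * np.log2((c / n) / ((pa[a] / n) * (ps[s] / n)))
               for (a, s), c in joint.items())

def rollout(corruption, n_steps=50_000):
    """Random actions (-1, 0, +1); the sensor reports the resulting self-motion,
    but each reading is replaced by noise with probability `corruption`."""
    pairs = []
    for _ in range(n_steps):
        a = int(rng.integers(-1, 2))
        s = int(rng.integers(-1, 2)) if rng.random() < corruption else a
        pairs.append((a, s))
    return pairs

for corruption in (0.0, 0.3, 0.9):
    mi = mutual_information(rollout(corruption))
    print(f"sensor corruption {corruption:.1f}: I(action; sensor) = {mi:.2f} bits")
```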

Visual Inference Amid Fixational Eye Movements

Lecture
Date:
Tuesday, December 28, 2010
Hour: 12:30
Location:
Jacob Ziskind Building
Dr. Yoram Burak
|
Center for Brain Science, Harvard University

Our visual system is capable of inferring the structure of 2-d images at a resolution comparable to (or, in some tasks, greatly exceeding) the receptive field size of individual retinal ganglion cells (RGCs). Our capability to do so becomes all the more surprising once we consider that, while performing such tasks, the image projected on the retina is in constant jitter due to eye and head motion. For example, the motion between two subsequent discharges of a foveal RGC typically exceeds the receptive field size, so the two subsequent spikes report on different regions of the visual scene. This suggests that, to achieve high-acuity perception, the brain must take the image jitter into account. I will discuss two theoretical investigations of this theme. I will first ask how the visual system might infer the structure of images drawn from a large, relatively unconstrained ensemble. Due to the combinatorially large number of possible images, it is impossible for the brain to act as an ideal observer that performs optimal Bayesian inference based on the retinal spikes. However, I will propose an approximate scheme derived from such an approach, which is based on a factorial representation of the multi-dimensional probability distribution, similar to a mean-field approximation. The decoding scheme that emerges from this approximation suggests a neural implementation that involves two neural populations, one that represents an estimate of the position of the eye, and another that represents an estimate of the stabilized image. I will discuss the performance of this decoding strategy under simplified assumptions about retinal coding. I will also compare it to other schemes, and discuss possible implications for neural visual processing in the foveal region. In the second part of the talk I will focus on the Vernier task, in which human subjects achieve hyper-acuity, greatly exceeding the receptive field size of a single RGC. The optimal decoder for this task can be formalized and analyzed mathematically in detail. I will show that a linear, perceptron-type decoder cannot achieve hyper-acuity. On the other hand, a quadratic decoder, which is sensitive to coincident spiking in pairs of neurons, constitutes an effective and structurally simple solution to the problem. Furthermore, the performance achieved by such a decoder is close to the limit imposed by the ideal Bayesian decoder. Therefore, spike coincidence detectors in the early visual system may facilitate hyper-acuity vision in the presence of fixational eye motion.
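
To make the linear-versus-quadratic contrast concrete, here is a toy simulation of my own, under strong simplifying assumptions (circular Gaussian tuning, Poisson spike counts, uniformly distributed jitter); it is not the decoder analyzed in the talk. Because the jitter shifts both rows of RGCs together, every cell's mean count is the same for the two Vernier offsets, so a linear readout stays near chance, whereas a coincidence-based (quadratic) readout of cross-row spike products recovers the offset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_reps, n_trials = 32, 32, 2000       # positions, cells per position, trials
pos = np.arange(n_cells)

def rates(centers, width=1.5, peak=5.0, base=0.1):
    """Circular Gaussian tuning of a row of RGCs to edge position (toy model)."""
    d = np.abs((pos[None, :] - centers[:, None] + n_cells / 2) % n_cells - n_cells / 2)
    return base + peak * np.exp(-0.5 * (d / width) ** 2)

def trials(offset):
    """Poisson spike counts of two RGC rows viewing a Vernier stimulus with the
    given offset, jittered on every trial by a shared, unknown eye position."""
    eye = rng.uniform(0, n_cells, size=n_trials)
    a = rng.poisson(rates(eye), size=(n_reps, n_trials, n_cells))           # upper segment
    b = rng.poisson(rates(eye + offset), size=(n_reps, n_trials, n_cells))  # lower segment
    return a, b

def coincidence_score(a, b):
    """Shifted minus same-position cross-row coincidences; the shared jitter cancels."""
    return (a * np.roll(b, -1, axis=2)).sum((0, 2)) - (a * b).sum((0, 2))

A0, B0 = trials(offset=0.0)
A1, B1 = trials(offset=1.0)

# Quadratic readout: positive score -> report "offset", negative -> "aligned".
quad_acc = 0.5 * ((coincidence_score(A1, B1) > 0).mean() +
                  (coincidence_score(A0, B0) <= 0).mean())

# Linear readout: difference-of-means discriminant on per-cell spike counts,
# fit on half the trials and evaluated on the held-out half.
def linear_accuracy(X0, X1):
    h = n_trials // 2
    w = X1[:h].mean(0) - X0[:h].mean(0)
    thr = 0.5 * (X1[:h].mean(0) + X0[:h].mean(0)) @ w
    return 0.5 * ((X1[h:] @ w > thr).mean() + (X0[h:] @ w <= thr).mean())

X0 = np.hstack([A0.sum(0), B0.sum(0)])
X1 = np.hstack([A1.sum(0), B1.sum(0)])

print(f"linear readout accuracy:    {linear_accuracy(X0, X1):.2f}  (near chance under jitter)")
print(f"quadratic readout accuracy: {quad_acc:.2f}")
```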

Connectivity and activity of C. elegans locomotion

Lecture
Date:
Monday, December 27, 2010
Hour: 15:00
Location:
Nella and Leon Benoziyo Building for Brain Research
Dr. Gal Haspel
|
National Institute of Neurological Disorders and Stroke, NIH

I study the neuronal basis of locomotion in the nematode C. elegans. With only 302 neurons in its nervous system, 75 of which are locomotion motor neurons, C. elegans offers a tractable network for studying locomotion. In this talk I will describe my research, which uses a neuroethological approach to study both the behavior and the underlying connectivity and activity of neurons and muscle cells.

Optimal adaptation of retinal processing to color contrasts

Lecture
Date:
Tuesday, June 15, 2010
Hour: 12:30
Location:
Jacob Ziskind Building
Dr. Ronen Segev
|
Dept of Life Sciences, Ben-Gurion University of the Negev

The visual system continually adjusts its sensitivity to properties of the environment. This adaptation process starts in the retina, which encodes over 8 orders of magnitude of light intensity using the limited output range of retinal ganglion cells, the only cells that project axons to the brain, whose firing rates extend from zero to several hundred spikes per second. While the different spectral sensitivities of photoreceptors give the first separation of color channels in the visual system, the chromatic adaptation observed in psychophysical experiments is commonly thought to originate in higher visual areas. We show that color contrast adaptation actually starts in the retina, with ganglion cells adjusting their responses to the spectral properties of the environment. Specifically, we demonstrate that ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels. Using natural scene statistics analysis and theoretical considerations, we show that the retina balances inputs from the two color channels optimally, given the strong correlation between the long and short wavelengths in the natural environment. These results indicate that some of the sophisticated processing of spectral visual information attributed to higher visual processing areas can actually be performed by the retina.

Contrast Tuning in Face Cells

Lecture
Date:
Sunday, June 13, 2010
Hour: 12:30
Location:
Nella and Leon Benoziyo Building for Brain Research
Shay Ohayon
|
Graduate Student, Computation and Neural Systems, CALTECH

Several state-of-the-art computer vision systems for face detection, e.g., Viola-Jones [1], rely on region-based features that compute contrast by adding and subtracting average image intensity within different regions of the face. This is a powerful strategy due to the invariance of these features across changes in illumination (as proposed by Sinha [2]). The computational mechanisms underlying face detection in biological systems, however, remain unclear. We set out to investigate the role of region-based features in the macaque middle face patch, an area that consists of face-selective neurons. We found that individual neurons were tuned to subsets of contrast relationships between pairs of face regions. The sign of tuning for these relationships was strikingly consistent across the population (for example, almost all neurons preferred a lower average intensity in the eye region relative to the nose region). Furthermore, the pairs and polarity of tuning were fully consistent with Sinha’s proposed ratio-template model of face detection [2]. Non-face images from the CBCL dataset that contained correct contrast polarities in pre-defined regions (facial parts) did not elicit increased firing in face-selective neurons, suggesting that the neurons are not only computing averaged intensity according to a fixed template, but are also sensitive to the specific shape of features within a region. [1] Robust Real-time Object Detection, Paul Viola and Michael Jones. Second International Workshop on Statistical and Computational Theories of Vision – Modeling, Learning, Computing, and Sampling. Vancouver, Canada, July, 2001. [2] Qualitative Representations for Recognition, Pawan Sinha. Proceedings of the Second International Workshop on Biologically Motivated Computer Vision, Tubingen, November, 2002.
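
For readers unfamiliar with region-based contrast features, the sketch below illustrates the flavor of the ratio-template idea referred to above: average the intensity inside a few coarse face regions and test ordinal "darker-than" relations between them (e.g., eyes darker than nose or forehead), a test that survives overall illumination changes. The region boxes and relations here are hypothetical placeholders of mine, not those used in the recordings or in [2].

```python
import numpy as np

# Hypothetical region boxes (row_start, row_end, col_start, col_end) on a
# normalized 64x64 face crop; illustrative only.
REGIONS = {
    "left_eye":  (18, 28, 12, 28),
    "right_eye": (18, 28, 36, 52),
    "nose":      (30, 44, 24, 40),
    "forehead":  ( 6, 16, 16, 48),
}

# Ordinal relations in the spirit of a ratio-template: (a, b) means
# "region a is expected to be darker, on average, than region b".
RELATIONS = [("left_eye", "nose"), ("right_eye", "nose"),
             ("left_eye", "forehead"), ("right_eye", "forehead")]

def region_mean(img, box):
    r0, r1, c0, c1 = box
    return img[r0:r1, c0:c1].mean()

def contrast_features(img):
    """Signed differences of mean intensity between region pairs."""
    return np.array([region_mean(img, REGIONS[a]) - region_mean(img, REGIONS[b])
                     for a, b in RELATIONS])

def matches_template(img):
    """Face-like if every expected darker-than relation holds."""
    return bool(np.all(contrast_features(img) < 0))

if __name__ == "__main__":
    face_like = np.full((64, 64), 180.0)
    for name in ("left_eye", "right_eye"):             # darken the eye regions
        r0, r1, c0, c1 = REGIONS[name]
        face_like[r0:r1, c0:c1] = 90.0
    print(matches_template(face_like))                 # True
    print(matches_template(face_like * 0.4))           # still True: polarity survives dimming
    print(matches_template(np.full((64, 64), 128.0)))  # False: no contrast structure
```

The finding described above, that the neurons are also sensitive to the shape of features within a region and not only to regional averages, is precisely what such a bare template cannot capture.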

Sensory Coding and Decoding for Smooth Pursuit Eye Movements

Lecture
Date:
Thursday, June 10, 2010
Hour: 18:00
Location:
Nella and Leon Benoziyo Building for Brain Research
Prof. Stephen Lisberger
|
Dept of Physiology, University of California, San Francisco

Featured Review: "Visual Guidance of Smooth-Pursuit Eye Movements: Sensation, Action, and What Happens in Between", S.G. Lisberger. Smooth pursuit eye movements transform visual motion into a rapid initiation of eye movement and sustained accurate tracking. The pursuit response is encoded in distinct responses of neural circuits for visual motion in area MT, implemented in the cerebellum and the smooth eye movement region of the frontal eye fields, and controlled by volition on a rapid time scale. Lisberger reviews the features that make pursuit a model system for studying the general principles of sensory-motor processing in the brain. http://www.cell.com/neuron/abstract/S0896-6273%2810%2900198-4

Generalizing Learned Movement Skills from Infancy to Maturity

Lecture
Date:
Tuesday, June 8, 2010
Hour: 12:30
Location:
Jacob Ziskind Building
Dr. Eilat Almagor
|
Feldenkrais Trainer, The Rubin Academy of Music and Dance, Jerusalem

During the first year of life, babies learn movement skills which serve them not only at their present stage but are also building blocks for future stages. There are special qualities of the learning process in early developmental stages which allow the learned experiences to be generalized at later stages. For example, the skills learned in horizontal locomotion (crawling) are also applied in walking. This learning process is playful and rich with mistakes. It is complex in the sense that at each moment there is an overlap of a few functions, for example, keeping balance while lifting a toy. By observing video clips of a few babies playing, we will see some of the necessary qualities of the learning process. We will also see movement lessons given to disabled children, providing them with the normal ingredients of the learning process, in spite of their disabilities.

Neuronal deficits in mouse models of Alzheimer's disease: structure, function, and plasticity

Lecture
Date:
Tuesday, June 1, 2010
Hour: 12:30
Location:
Jacob Ziskind Building
Dr. Edward Stern
|
Brain Research Center, Bar-Ilan University, Associate in Neurobiology, Massachusetts General Hospital, Assistant Professor of Neurology, Harvard Medical School

In the 104 years since Alois Alzheimer first described the neuropathological features underlying dementia in the disease that now bears his name, the changes in neuronal activity underlying the symptoms of the disease are still not understood. Using transgenic mouse models, it is now possible to directly measure changes in neuronal structure and function resulting from the accumulation of AD neuropathology. We measured the changes in evoked responses to electrical and sensory stimulation of neocortical neurons in mice transgenic for human APP, in which soluble amyloid-β accumulates and insoluble plaques aggregate in an age-dependent manner. Our results reveal a specific synaptic deficit present in neocortical neurons in brains with a significant amount of plaque aggregation. We show that this deficit is related to the distortion of neuronal process geometry by plaques, and the degree of response distortion is directly related to the amount of plaque-burdened tissue traversed by the afferent neuronal processes, indicating that the precise connectivity of the neocortex is essential for normal information processing. Furthermore, we show that the physical distortion of neuronal processes by plaques is reversible by immunotherapy, revealing a larger degree of structural plasticity in neocortical neurons of aged animals. Taken together, these results indicate that it may be possible to slow or reverse the symptoms of AD.

Neuronal Response Clamp

Lecture
Date:
Sunday, May 30, 2010
Hour: 14:30
Location:
Nella and Leon Benoziyo Building for Brain Research
Avner Wallach
|
Network Biology Research Laboratories, Technion; Guest Student, Ahissar Group, Dept of Neurobiology, WIS

Since the first recordings of evoked action potentials, it has become apparent that the responses of individual neurons to ongoing, physiologically relevant input are highly variable. This variability is manifested in the non-stationary behavior of practically every observable neuronal response feature. We introduce the Neuronal Response Clamp, a closed-loop technique enabling full control over two important single-neuron activity variables: response probability and stimulus-spike latency. The technique is applicable over extended durations (up to several hours), and is effective even against the background of ongoing neuronal network activity. The Response Clamp technique is a powerful tool, extending the voltage-clamp and dynamic-clamp approaches to the neuron's functional level, namely, its spiking behavior.
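
The closed-loop idea can be caricatured in a few lines: measure whether the neuron responded, compare with the desired response probability, and nudge the stimulus accordingly, so that the loop tracks the neuron's drifting excitability. The sketch below is a minimal illustration with an invented toy neuron and arbitrary gains, not the published Response Clamp algorithm; a latency clamp would follow the same logic with the latency error driving the update.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_neuron(stim, threshold, slope=0.15):
    """Toy neuron: response probability is a sigmoid of stimulus amplitude."""
    p = 1.0 / (1.0 + np.exp(-slope * (stim - threshold)))
    return rng.random() < p

target_p = 0.5        # desired response probability (the clamp's set point)
stim = 20.0           # stimulus amplitude, arbitrary units
gain = 2.0            # feedback gain
threshold = 25.0      # hidden excitability threshold, drifting over time

responses = []
for trial in range(2000):
    threshold += rng.normal(0.0, 0.3)      # slow non-stationary drift
    spike = toy_neuron(stim, threshold)
    # Integral-style feedback: raise the stimulus after failures, lower it after
    # responses, so the long-run response probability settles at target_p.
    stim += gain * (target_p - float(spike))
    responses.append(spike)

print("clamped response probability:", np.mean(responses[500:]))
```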

The Hippocampus in Space and Time

Lecture
Date:
Thursday, May 27, 2010
Hour: 14:30
Location:
Arthur and Rochelle Belfer Building for Biomedical Research
Prof. Howard Eichenbaum
|
Center for Memory and Brain, Boston University

In humans, hippocampal function is generally recognized as supporting episodic memory, whereas in rats, many believe that the hippocampus creates maps of the environment and supports spatial navigation. Is this a species difference, or is there a fundamental function of the hippocampus that supports cognition across species? Here I will discuss evidence that hippocampal neuronal activity in spatial memory is more related to the representation of routes than to maps, suggesting a potential function of the hippocampus in memory for unique sequences of events. Further studies support this view by showing that the hippocampus is critical to memory for sequential events in non-spatial episodic memories. Correspondingly, neural ensemble activity in the hippocampus involves a gradually changing temporal context representation onto which specific events might be coded. Finally, at the level of single-neuron spiking patterns, hippocampal principal cells encode specific times within spatial and non-spatial sequences (“time cells”, as contrasted with “place cells”), and map specific events within sequence memories onto this representation of time. These findings support an emerging view that the hippocampus creates a “scaffolding” for memories, representing events in their spatial and temporal context.

Associative Cortex in the First Olfactory Brain Relay Station?

Lecture
Date:
Thursday, May 13, 2010
Hour: 13:30
Location:
Arthur and Rochelle Belfer Building for Biomedical Research
Prof. Diego Restrepo
|
Director, Neuroscience Program, Department of Cell and Developmental Biology, University of Colorado, Denver, CO

Synchronized firing of mitral cells in the olfactory bulb, the first relay station of the olfactory system, has been hypothesized to convey information to olfactory cortex. In this first survey of synchronized firing by mitral cells in awake, behaving vertebrates, we find sparse, divergent odor responses. Surprisingly, synchronized firing conveys information on odor value (is it rewarded?) rather than odor quality. Further, adrenergic block decreases the magnitude of odor divergence of synchronous firing. These data raise the question of whether mitral cells contribute to decision-making, or convey expected outcomes used in prediction-error calculation.

Sculpting the hippocampal cognitive map: experimental control over the coded parameter space

Lecture
Date:
Tuesday, May 11, 2010
Hour: 12:30
Location:
Jacob Ziskind Building
Dr. Genela Morris
|
Dept of Neurobiology and Ethology, University of Haifa

Although much work in the field of reinforcement learning has been devoted to understanding how animals and humans learn to perform the best action in each state of affairs, strikingly scant work targets the question of what constitutes such a state. In initial phases of learning, an animal or a person cannot know which facets of its rich experience should be attended to in order to identify its ‘state’. In a number of projects, we use tasks in which several different attributes can potentially be important for procuring rewards (odors, spatial location, previous actions), and specifically investigate the behavioral and neural processes underlying learning of which attribute defines the relevant state. This talk will focus on parameter coding by hippocampal principal neurons. The hippocampus serves an important role in learning and memory. In humans, it is associated with declarative episodic memory. Single-unit recordings of hippocampal neurons in freely behaving rats have shown that many of them act as place cells, confining their firing to well-defined locations in space. We recorded the activity of hippocampal principal neurons in a specially devised olfactory space, in which rats foraged for reward based solely on olfactory cues, and studied how the activity of these neurons depends on the availability of those cues. We show that place cells shifted their firing fields from room coordinates to olfactory coordinates as animals learned to rely on the olfactory cues in order to obtain reward. The use of olfactory cues provides the additional benefit of careful control over the sensory inputs provided to the animals. Classical studies on hippocampal place cells show that when the environment is visually altered, these hippocampal neurons 'remap', in a seemingly random manner. Although studies have been conducted to investigate the contribution of various visual aspects to the activity of place cells, the exact correlation of hippocampal cell firing with the visual input to the rats cannot be studied in freely behaving rats, because their field of view is unknown. By repeating the sequence of olfactory stimuli provided in the maze in a new environment, we study the relation between the neuronal responses of single neurons to given sensory stimuli in distinct spatial contexts. Preliminary results suggest that the mapping of hippocampal neurons is not random, but critically depends on the sequence in which the different items are encountered, in support of the relational representation theory of hippocampal function.

Memory encoding and retrieval: A hippocampal “place-field centric” perspective

Lecture
Date:
Monday, May 10, 2010
Hour: 12:30
Location:
Arthur and Rochelle Belfer Building for Biomedical Research
Prof. Etan Markus
|
Dept of Psychology, University of Connecticut

As a rat runs through a familiar environment, the hippocampus retrieves a previously stored spatial representation of the environment. When the environment is modified, a new representation is seen, presumably corresponding to the hippocampus encoding the new information. I will present single-unit data and discuss how the “hippocampus decides” whether to retrieve an old representation or form a new one.
