All events, 2010

Changing Human Fear: Brain Mechanisms Underlying Emotional Control and Flexibility

Lecture
Date: Tuesday, January 12, 2010
Hour: 12:30
Location: Jacob Ziskind Building
Dr. Daniela Schiller | New York University

Learned fear allows organisms to rapidly detect associations between environmental cues and to predict imminent threat before it arrives. Adaptive function in a changing environment, however, requires organisms to quickly update this learned information and to suppress fear responses when predictions are no longer correct. Research on changing fear has highlighted several techniques, most of which rely on inhibition of the learned fear response. An inherent problem with these inhibition techniques is that the fear commonly returns, for example under stress or even simply with the passage of time. I will present research that examines new ways to flexibly control fear, along with the underlying brain mechanisms. I will describe a brain system mediating various strategies for modulating fear, and present findings suggesting a novel non-invasive technique that could potentially be used to permanently block, or even erase, fear memories.

Novel optogenetic tools for understanding emergent patterns in neural circuits

Lecture
Date: Tuesday, January 5, 2010
Hour: 12:30
Location: Jacob Ziskind Building
Dr. Ofer Yizhar | Stanford University, CA

Gamma oscillations are fast (30-80 Hz) rhythmic patterns of neural activity that have been proposed to support information processing in the brain. Gamma rhythms are altered in diseases such as schizophrenia and autism and are therefore of both basic and clinical interest. I have been developing optogenetic tools for light-based control over the activity of genetically defined neuronal populations. A new set of such tools, step-function opsins (SFOs), is optimized for modulating the activity of neural circuits and is well suited to observing emergent network properties. I will present the molecular engineering approach we used to develop these opsins and show new data on the application of these tools to studying the mechanisms underlying gamma oscillations in the prefrontal cortex. Some technological aspects will be discussed, with emphasis on the array of available optogenetic tools and how they might be improved to further extend the range of feasible experiments.

Theoretical models of grid cell dynamics and coding in the rat entorhinal cortex

Lecture
Date: Monday, January 4, 2010
Hour: 11:00
Location: Nella and Leon Benoziyo Building for Brain Research
Dr. Yoram Burak | Center for Brain Science, Harvard University

Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space, and have been hypothesized to form the neural substrate for dead reckoning. I will address two theoretical questions that arise from this remarkable experimental discovery: first, how are grid cell dynamics generated in the brain? Second, what information is conveyed by grid cell activity? In discussing the first question, I will focus on continuous-attractor models of grid cell activity and ask whether such models can generate regular triangular grid responses based on inputs that encode only the rat's velocity and heading direction. In recent work, we provided a proof of concept that such models can accurately integrate velocity inputs along trajectories spanning 10-100 meters in length and lasting 1-10 minutes. The range of accurate integration depends on various properties of the continuous-attractor network. After presenting these results, I will discuss possible experiments that may differentiate the continuous-attractor model from other proposed models, in which activity arises independently in each cell. In the second part of the talk, I will examine the relationship between grid cell firing and rat location, asking what information grid-cell activity carries about the rat's position. I will argue that, although the periodic response of grid cells may appear wasteful, the grid-cell code is in fact combinatorial in capacity and allows for unambiguous position representations over ranges vastly larger than the ~0.5-10 m periods of individual lattices. Further, the grid cell representation has properties that could facilitate the arithmetic computation involved in position updating during path integration. I will conclude by mentioning some of the implications for downstream readouts, and possible experimental tests.
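
As a concrete illustration of the combinatorial-capacity argument, here is a minimal sketch (not taken from the talk; the module periods and the brute-force decoder are illustrative assumptions). Several modules that each report position only modulo their own period jointly specify position uniquely over the least common multiple of the periods, much as in a residue number system:

    import numpy as np

    # Hypothetical module periods in meters (illustrative values only;
    # real grid periods need not be integers or mutually coprime).
    PERIODS = np.array([3.0, 4.0, 5.0])

    def encode(x):
        """Each module reports only the animal's position modulo its
        period, i.e. its grid phase."""
        return x % PERIODS

    def decode(phases, x_max=60.0, step=0.01):
        """Brute-force readout: find the position whose phases best match
        the observed code. The answer is unique up to the least common
        multiple of the periods (60 m here), vastly exceeding any single
        module's 3-5 m period."""
        candidates = np.arange(0.0, x_max, step)
        err = np.abs(candidates[:, None] % PERIODS - phases)
        err = np.minimum(err, PERIODS - err)  # phases are circular
        return candidates[np.argmin(err.sum(axis=1))]

    print(decode(encode(41.37)))  # prints ~41.37, recovered from phases alone

Adding a fourth module multiplies the unambiguous range again, which is why the capacity of such a code grows combinatorially with the number of modules rather than linearly with their periods.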

Sound Texture Perception via Synthesis

Lecture
Date: Sunday, January 3, 2010
Hour: 14:30
Location: Nella and Leon Benoziyo Building for Brain Research
Dr. Josh McDermott | New York University

Many natural sounds, such as those produced by rainstorms, fires, and swarms of insects, result from large numbers of rapidly occurring acoustic events. Such “sound textures” are often temporally homogeneous, and in many cases do not depend much on the precise arrangement of the component events, suggesting that they might be represented statistically. To test this idea and explore the statistics that might characterize natural sound textures, we designed an algorithm to synthesize sound textures from statistics extracted from real sounds. The algorithm is inspired by those used to synthesize visual textures: a set of statistical measurements from a real sound is imposed on a sample of noise. This process is iterated, and converges over time to a sound that obeys the chosen constraints. If the statistics capture the perceptually important properties of the texture in question, the synthesized result ought to sound like the original. We tested whether rudimentary statistics computed from the responses of a bank of bandpass filters could produce compelling synthetic textures. Simply matching the marginal statistics (variance, kurtosis) of individual filter responses was generally insufficient to yield good results, but imposing various joint envelope statistics (correlations between bands, and autocorrelations within each band) greatly improved them, frequently producing synthetic textures that sounded natural and that subjects could reliably recognize. The results suggest that statistical representations could underlie sound texture perception, and that in many cases the auditory system may rely on fairly simple statistics to recognize real-world sound textures. Joint work with Andrew Oxenham and Eero Simoncelli.
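
To make the synthesis loop concrete, here is a heavily simplified sketch (illustrative only, not the authors' algorithm: the sample rate, filter bank, and the use of envelope variance as the sole constraint are assumptions; the actual method also imposes envelope kurtosis, cross-band correlations, and autocorrelations):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    FS = 16000  # sample rate in Hz; band edges below are illustrative

    def subbands(x, edges=((100, 400), (400, 1600), (1600, 6400))):
        """Decompose a signal with a small bank of bandpass filters."""
        sos = [butter(4, e, btype="bandpass", fs=FS, output="sos")
               for e in edges]
        return np.stack([sosfiltfilt(s, x) for s in sos])

    def envelope_var(bands):
        """Variance of each subband's Hilbert envelope."""
        return np.var(np.abs(hilbert(bands, axis=1)), axis=1)

    def synthesize(target, noise, n_iter=20):
        """Iteratively rescale each noise subband so its envelope variance
        matches the target's, then re-sum the bands. Iteration is needed
        because the filters overlap, so adjusting one band perturbs the
        statistics of its neighbors."""
        goal = envelope_var(subbands(target))
        x = noise.copy()
        for _ in range(n_iter):
            bands = subbands(x)
            gains = np.sqrt(goal / np.maximum(envelope_var(bands), 1e-12))
            x = (gains[:, None] * bands).sum(axis=0)
        return x

    rng = np.random.default_rng(0)
    # Stand-in "real" sound (noise here, to keep the demo self-contained).
    texture = synthesize(rng.standard_normal(FS), rng.standard_normal(FS))

Only the iterate-until-constraints-hold structure carries over from the abstract; with the richer joint statistics imposed, the same loop yields the compelling textures described above.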
