Dr. Yarden Cohen
The neural language of song
You can learn a lot from a songbird's brain. Canaries, for example, are virtuosos in the realm of birdsong. Although their drive to sing is innate, the songs themselves are learned, and they are as complex, in terms of vocal timing and syntax, as human language. For neuroscientists like Dr. Yarden Cohen, who seek to understand how a behavior as versatile and adaptable as speech emerges from unreliable neurons and flexible neural connections, canaries provide an excellent model to complement work in humans.
Scientists have long used songbirds as a model system for investigating how humans acquire skills such as language. Birdsong consists of highly stereotyped gestures—such as trills and chirps—performed in sequence, just like motor skills such as speaking. In fact, in many ways, the process by which juvenile birds learn their songs (from adult ‘tutors’) mirrors the process by which human babies learn to speak.
An expert in acoustics, behavior, learning, and memory, Dr. Cohen has developed an innovative model of human language acquisition: song learning in canaries. This model overcomes many of the drawbacks neuroscientists have faced when studying such a complex motor activity, including establishing appropriate time scales and determining syntactic structure. He recently joined the Department of Brain Sciences.
As a postdoctoral fellow at Boston University, Dr. Cohen co-developed the first algorithm for precision birdsong annotation that can be used for fully automated analyses of a variety of birdsong features, even those as complex as a canary’s. Called ‘TweetyNet,’ this avian AI tool will make it possible for scientists to ask a range of questions about intricate aspects of birdsong—opening a trove of new directions for working with canaries—and then apply their answers to understanding sensorimotor learning in general.
At the Weizmann Institute, he plans to characterize the role of memory in maintaining and adapting canary song syntax, and to establish models of flexible motor-sequence generation. His work will integrate TweetyNet's machine-learning algorithms with behavioral, imaging, electrophysiology, and microscopy approaches. He will also leverage insights from canaries to understand human language. In addition, he plans to work with neurosurgeons to study neural activity related to language and speech in patients undergoing brain surgery, as he did during a research fellowship at Massachusetts General Hospital, and to develop models of the circuits involved in these processes.
Born on a kibbutz near the Jordan River, Dr. Cohen earned his BSc in physics and mathematics as part of the Talpiot excellence program at the Hebrew University of Jerusalem, and earned both his MSc and PhD in neurobiology at the Weizmann Institute, under the supervision of Prof. Rony Paz and Prof. Elad Schneidman. He completed a postdoctoral fellowship at Boston University, as well as a research fellowship at the Massachusetts General Hospital/Harvard Medical School.
Dr. Cohen plays the piano, a skill he picked up as an adult while studying acoustics. He and his wife Karin, who hails from Sweden, speak Hebrew, Swedish, and English with their two children.