Pages


  • Date: Tuesday, 30 December 2025

    iSCAR Seminar

    More information
    Time
    09:00 - 10:00
    Title
    Wicked Lymphatics Shape the Epigenetic Landscape of Epithelial Stem Cell Plasticity
    Location
    Max and Lillian Candiotty Building
    Auditorium
    Lecturer
    Dr. Shiri Gur-Cohen
    Organizer
    Department of Immunology and Regenerative Biology
    Contact
    Lecture
  • Date: Tuesday, 30 December 2025

    Chemical Evolution: How Can Chemistry Invent Biology?

    More information
    Time
    11:15 - 12:15
    Location
    Gerhard M.J. Schmidt Lecture Hall
    Lecturer
    Dr. Moran Frenkel-Pinter
    Organizer
    Department of Chemical and Structural Biology
    Lecture
  • Date: Tuesday, 30 December 2025

    Vision and AI

    More information
    Time
    11:15 - 12:15
    Title
    Efficient representations for dense reasoning with long videos
    Location
    Jacob Ziskind Building
    Room 155
    Lecturer
    Greg Shakhnarovich
    TTI-C
    Organizer
    Department of Computer Science and Applied Mathematics
    Contact
    Abstract
    In some video understanding scenarios, it is important to capture details that exist at fine temporal resolution, over a significant length of context (hundreds, thousands and even tens of thousands of frames). This poses a computational challenge for many existing video encoders. I will discuss our recent efforts on developing models for video representation that address this challenge in two ways, each with a different kind of video task in mind. In our work on sign language understanding we extract information from each video frame in a highly selective way, and train the long context encoder from a large video corpus without any labels. The resulting video model, SHuBERT, is a "foundation model" for American Sign Language achieving state of the art performance on multiple sign language understanding tasks. In another, ongoing effort, we focus on the task of nonlinear movie editing, and develop an autoregressive model that relies on a highly compressed representation of video frames. This model, trained on an unlabeled corpus of movies, yields state of the art results on complex movie editing tasks and on editing-related video understanding benchmarks.
    Lecture
  • Date: Tuesday, 30 December 2025

    Mathematical regularities of irregular neural codes for space

    More information
    Time
    12:30 - 13:30
    Location
    Gerhard M.J. Schmidt Lecture Hall
    Lecturer
    Prof. Yoram Burak
    Organizer
    Department of Brain Sciences
    Contact
    Abstract
    Much of the thinking about neural population codes was motivated in the past decades by reports on neurons with highly stereotyped tuning functions. Indeed, neurons are often observed to exhibit smooth, unimodal tuning to an encoded variable, centered around preferred stimuli that vary across the neural population. Recent experiments, however, have uncovered neural response functions that are much less stereotyped and regular than observed previously. Some of the most striking examples have been observed in the hippocampus and its associated brain areas. The classical view has been that hippocampal place cells are active only in a compact region of space and exhibit a stereotyped tuning to position. In contrast to this expectation, however, place cells in large environments typically fire in multiple locations, and the multiple firing fields of individual cells, as well as those of the whole population, vary in size and shape. We recently discovered that a remarkably simple mathematical model, in which firing fields are generated by thresholding a realization of a random Gaussian process, accounts for the statistical properties of place fields in precise quantitative detail. The model captures the statistics of field sizes and positions, and generates new quantitative predictions on the statistics of field shapes and topologies. These predictions are quantitatively verified in multiple recent data sets from bats and rodents, in one, two, and three dimensions, in both small and large environments. Together, these results imply that common mechanisms underlie the diverse statistics observed in the different experiments. I will discuss a mechanistic model, which suggests that the random Gaussian process statistics arise due to random connectivity within the CA3-CA1 hippocampal circuit. 
If time permits I will present another recent work, in which we uncover simple principles underlying the spatial selectivity in a new class of neurons in the medial entorhinal cortex, with tuning curves that are significantly less regular than those of classical grid cells. 
    Lecture
  • Date: Tuesday, 30 December 2025

    The Clore Center for Biological Physics

    More information
    Time
    13:15 - 14:30
    Title
    The Physics of Learnable Data
    Location
    Nella and Leon Benoziyo Physics Library
    Lecturer
    Dr. Noam Itzhak Levi
    LUNCH AT 12:45
    Contact
    Abstract
    The power of physics lies in its ability to use simple models to predict the behavior of highly complex systems, allowing us to ignore microscopic details or, conversely, to explain macroscopic phenomena through minimal constituents. In this seminar, I will explore how these physical principles of universality and reductionism extend beyond the natural universe to the space of generative models and natural data.

    I will begin by discussing major open problems in modern machine learning where a physics perspective is particularly impactful. Focusing on the role of data in the learning process, I will first examine the "Gaussian" approximation of real-world datasets, which is widely used in theoretical calculations. I will then argue that truly understanding generative models (such as diffusion and language models) requires characterizing the non-trivial latent structure of their training data, shifting the problem from networks to data.

    I will present a simple yet predictive hierarchical generative model of data, and demonstrate how this hierarchical structure can be probed using diffusion models and observables drawn from statistical physics. Finally, I will discuss future prospects, connecting hierarchical compositionality to semantic structures in natural language and looking beyond the diffusion paradigm.
    Lecture
  • Date: Wednesday, 31 December 2025

    Machine Learning and Statistics Seminar

    More information
    Time
    11:15 - 12:15
    Title
    Demystifying Grokking: Criticality and Dynamics in Minimal Models
    Location
    Jacob Ziskind Building
    Lecture Hall - Room 1
    Lecturer
    Noam Levi
    EPFL
    Organizer
    Department of Computer Science and Applied Mathematics
    Contact
    Abstract
    The equivalence between vastly different complex systems allows physicists to make predictions without analyzing their microscopic details. Conversely, by reducing systems to their minimal constituents, we can describe phenomena that seem inscrutable. I will apply these principles to the synthetic world of neural networks, taking grokking, or delayed generalization, as a case study.

    While often attributed to complex representation learning, I will show that grokking arises in simple, analytically tractable settings: linear teacher-student models and logistic regression. By explicitly solving the gradient flow dynamics, we identify the ratio of input dimension to training samples (λ=d/N) as the control parameter of a phase transition from overfitting to generalization. I will present two distinct mechanisms for grokking driven by this criticality. In regression, it appears as a divergence in "grokking time" near the interpolation peak. In classification, it emerges at the phase transition between linearly separable and inseparable data, caused by the divergence of the separating hyperplane. Ultimately, I will illustrate how viewing training dynamics through the lens of statistical physics and critical phenomena provides a simple solution for the mystery of delayed generalization.
    Lecture
  • Date: Wednesday, 31 December 2025

    Life Sciences Luncheon

    More information
    Time
    12:30 - 14:00
    Title
    Prof. Rotem Sorek
    Location
    Nella and Leon Benoziyo Building for Biological Sciences
    Auditorium
    Lecturer
    Dr. Andrei Reznikov
    Lecture
  • Date: Wednesday, 31 December 2025

    Special Guest Seminar with Prof. Itai Yanai

    More information
    Time
    14:30 - 15:30
    Location
    Arthur and Rochelle Belfer Building for Biomedical Research
    Botnar Auditorium
    Lecturer
    Prof. Itai Yanai
    Lecture
  • Date: Thursday, 1 January 2026

    Vision and AI

    More information
    Time
    12:15 - 13:15
    Title
    Bridging Generative Models and Visual Communication
    Location
    Jacob Ziskind Building
    Lecture Hall - Room 1
    Lecturer
    Yael Vinker
    MIT
    Organizer
    Department of Computer Science and Applied Mathematics
    Contact
    Abstract
    From rough sketches that spark ideas to polished illustrations that explain complex concepts, visual communication is central to how humans think, create, and share knowledge. Yet despite major advances in generative AI, we are still far from models that can reason and communicate through visual forms.

    I will present my work on bridging generative models and visual communication, focusing on three complementary domains: (1) algorithms for generating and understanding sketches, (2) systems that support exploratory visual creation beyond one-shot generation, and (3) methods for producing editable, parametric images for design applications.

    These domains pose unique challenges: they are inherently data-scarce and rely on representations that go beyond pixel-based images commonly used in standard models. I will show how the rich priors of vision-language models can be leveraged to address these challenges through novel optimization objectives and regularization techniques that connect their learned features with the specialized representations required for visual communication.

    Looking ahead, this research lays the foundation for general-purpose visual communication technologies: intelligent systems that collaborate with humans in visual domains, enhancing how we design, learn, and exchange knowledge.

    Bio:

    Yael Vinker is a Postdoctoral Associate at MIT CSAIL, working with Prof. Antonio Torralba. She received her Ph.D. in Computer Science from Tel Aviv University, advised by Profs. Daniel Cohen-Or and Ariel Shamir. Her research spans computer graphics, computer vision, and machine learning, with a focus on generative models for visual communication. Her work has been recognized with two Best Paper Awards (SIGGRAPH 2022, SIGGRAPH Asia 2023) and a Best Paper Honorable Mention (SIGGRAPH 2023). She was selected as an MIT EECS Rising Star (2024) and received the Blavatnik Prize for Outstanding Israeli Doctoral Students in Computer Science (2024) as well as the VATAT Ph.D. Fellowship.
    Lecture
  • Date: Thursday, 1 January 2026

    Geometric Functional Analysis and Probability Seminar

    More information
    Time
    13:30 - 14:30
    Title
    The limiting law of the Discrete Gaussian level-lines
    Location
    Jacob Ziskind Building
    Room 155
    Lecturer
    Eyal Lubetzky
    NYU
    Organizer
    Department of Mathematics
    Contact
    Abstract
    We will present a recent result with Joe Chen on the low-temperature (2+1)D integer-valued Discrete Gaussian (ZGFF) model. The level lines were conjectured to have cube-root fluctuations near the sides of the box, mirroring the Solid-On-Solid picture. The new results resolve this conjecture and further recover the joint limit law of the top level lines near the sides of the box, which is a product of Ferrari-Spohn diffusions.
    Lecture
  • Date: Thursday, 1 January 2026

    Tumor Innervation as a Novel Therapeutic Target

    More information
    Time
    14:00 - 15:00
    Location
    Candiotty
    Auditorium
    Lecturer
    Prof. Ronny Drapkin
    Organizer
    Dwek Institute for Cancer Therapy Research
    Lecture
  • Date: Sunday, 4 January 2026

    Foundations of Computer Science Seminar

    More information
    Time
    11:00 - 12:15
    Title
    Efficient LLM Systems: From Algorithm Design to Deployment
    Location
    Elaine and Bram Goldsmith Building for Mathematics and Computer Sciences
    Room 108
    Lecturer
    Rana Shahout
    Harvard University
    Organizer
    Department of Computer Science and Applied Mathematics
    Contact
    Abstract
    Large Language Models (LLMs) have transformed what machines can do and how systems are designed to serve them. These models are both computationally and memory demanding, revealing the limits of traditional optimization methods that once sufficed for conventional systems. A central challenge in building LLM systems is improving system metrics while ensuring response quality.

    This talk presents approaches for reducing latency in LLM systems to support interactive applications, from scheduling algorithm design to deployment. It introduces scheduling frameworks that use lightweight predictions of request behavior to make informed decisions about prioritization and memory management across two core settings: standalone LLM inference and API-augmented LLMs that interact with external tools. Across both settings, prediction-guided scheduling delivers substantial latency reductions while remaining practical for deployment.
    Lecture
  • Date: Monday, 5 January 2026

    A gut sense for microbes

    More information
    Time
    15:30 - 16:30
    Location
    Benoziyo Brain Science Building
    Seminar Room 113
    Lecturer
    M. Maya Kaelberer, Ph.D.
    Organizer
    Department of Brain Sciences
    Contact
    Abstract
    To coexist with our resident microbiota we must possess the ability to sense them and adjust our behavior. While the intestine is known to transduce nutrient signals to the brain to guide appetite, the mechanisms by which the host responds in real time to resident gut microbes have remained undefined. We found that specific colonic neuropod cells detect ubiquitous microbial signatures and communicate directly with vagal neurons to regulate feeding behavior. This pathway operates independently of immune or metabolic responses and suggests the host possesses a dedicated sensory circuit to maintain equilibrium. We call this sense at the interface of the biota and the brain the neurobiotic sense.
    Lecture
  • Date: Wednesday, 7 January 2026

    Deciphering molecular heterogeneity in tumors with increased EGFR expression towards individualized treatments

    More information
    Time
    14:00 - 15:00
    Location
    Max and Lillian Candiotty Building
    Auditorium
    Lecturer
    Dr. Maria Jubran-Khoury, DMD, PhD
    Organizer
    Dwek Institute for Cancer Therapy Research
    Lecture
  • Date: Thursday, 8 January 2026

    New insights from spatial Metabolomics

    More information
    Time
    09:00 - 10:00
    Location
    Candiotty Auditorium
    Lecturer
    Dr. Uwe Heinig
    Organizer
    Department of Life Sciences Core Facilities
    Lecture
  • Date: Thursday, 8 January 2026

    Vision and AI

    More information
    Time
    12:15 - 13:15
    Title
    Model circuits interpretability, and the road to scale it up
    Location
    Jacob Ziskind Building
    Lecture Hall - Room 1
    Lecturer
    Yaniv Nikankin
    Technion
    Organizer
    Department of Computer Science and Applied Mathematics
    Contact
    Abstract
    In this talk, we will explore circuit analysis for interpreting neural network models. After some background on the paradigm and techniques of circuit analysis, I'll present two (and a half) research studies demonstrating the breadth of these interpretability methods.

    We will explore how this paradigm can help gain scientific insights into how neural network models operate, exemplified in the first work ("Arithmetic without Algorithms", https://technion-cs-nlp.github.io/llm-arithmetic-heuristics), where we use circuit analysis to reveal how language models solve arithmetic prompts. We will also show that circuit analysis can reveal findings on neural network models and help fix existing problems in them, specifically targeting the poor performance of VLMs on visual tasks compared to equivalent textual tasks (done in the work "Same Task, Different Circuits", https://technion-cs-nlp.github.io/vlm-circuits-analysis). Lastly, if time permits, we will discuss some current directions for future and ongoing work, mainly on scaling circuit analysis to complex tasks.

    Bio:

    Yaniv Nikankin is a PhD student at the Technion, working with Yonatan Belinkov. His work focuses on interpretability of neural networks, with a recent focus on scaling to analysis of long-form complex tasks. He is particularly excited about cross-domain applications of interpretability in scientific fields, for goals such as better understanding of scientific foundation models such as pLMs. Yaniv is a recipient of the Israeli Higher Education (VATAT) fellowship.
    Lecture
  • Date: Thursday, 8 January 2026

    Geometric Functional Analysis and Probability Seminar

    More information
    Time
    13:30 - 14:30
    Title
    TBD
    Location
    Jacob Ziskind Building
    Room 155
    Lecturer
    Adva Mond
    King's College
    Organizer
    Faculty of Mathematics and Computer Science
    Contact
    Lecture
  • Date: Thursday, 8 January 2026

    Challenges in CAR T cell therapy in hematologic malignancies and beyond

    More information
    Time
    14:00 - 15:00
    Location
    Candiotty
    Auditorium
    Lecturer
    Prof. Elad Jacoby
    Organizer
    Dwek Institute for Cancer Therapy Research
    Lecture
