
  • Date: Wednesday, 24 December 2025

    2025-2026 Spotlight on Science Seminar Series - Dr. Jacques Pienaar (Department of Physics Core Facilities)

    Time
    12:30 - 14:00
    Title
    Illuminating the Dark: The Search for Dark Matter
    Location
    Gerhard M.J. Schmidt Lecture Hall
    Lecturer
    Jacques Pienaar
    Abstract
    Cosmological observations suggest that about 85% of the universe’s mass is made up of matter that neither emits nor absorbs light. The existence of this mysterious component—dark matter—is inferred from its gravitational effects and is theorized to interact only very weakly with ordinary matter. The XENON detector, located deep underground in Italy’s Gran Sasso Laboratory, employs a large reservoir of ultrapure liquid xenon to search for the faint signals produced when a dark matter particle collides with a xenon atom. By suppressing background radiation and using highly sensitive sensors, the experiment strives to observe these extremely rare events. Although dark matter remains undetected, XENON continues to search while also shaping future searches.
    Lecture
  • Date: Thursday, 25 December 2025

    Special Physics Colloquium

    Time
    12:30 - 14:00
    Title
    Is there turbulence in the deep ocean?
    Location
    Physics Weissman Auditorium
    Abstract
    Short answer: Yes. One might imagine the deep ocean as a dark, silent world, largely untouched by the restless motion seen at the surface, where winds raise waves and storms stir the sea. However, just as surface waves exist along the sharp density interface between the ocean and the atmosphere, internal waves are supported by smooth vertical gradients in density far beneath the ocean's surface. The turbulence of these waves plays a central role in ocean mixing and circulation.

    I will introduce surface and internal waves as examples of dispersive wave systems, and explain how their long-time dynamics can be described using the theory of weak wave turbulence. I will then present our recent work, which addresses a long-standing problem in geophysical fluid dynamics: deriving the observed broadband oceanic spectrum of internal waves, known as the Garrett-Munk spectrum, directly from the governing equations.

    The central message of the talk is that the weak-rotation limit is singular, and that it is precisely this singular limit that allows the oceanic spectrum to emerge from first principles.

    No background in geophysical fluid dynamics will be assumed.
    Colloquia
  • Date: Thursday, 25 December 2025

    Geometric Functional Analysis and Probability Seminar

    Time
    13:30 - 14:30
    Title
    Statistical properties of Markov shifts
    Location
    Jacob Ziskind Building
    Room 155
    Lecturer
    Yeor Hafouta
    Florida
    Organizer
    Department of Computer Science and Applied Mathematics
    Abstract
    The central limit theorem (CLT) and related results for stationary weakly dependent sequences of random variables have been extensively studied in the past century, starting from the pioneering work of Bernstein (1927). However, many physical phenomena involve external forces, measurement errors and unknown variables (e.g. storms, the observer effect, the uncertainty principle, etc.). This means that the local laws of physics depend on time, which leads us to study non-stationary sequences.

    The asymptotic behaviour of non-stationary sequences has been studied extensively in the past decades, but the theory is still developing compared with that of stationary processes. In this talk we will focus on inhomogeneous Markov chains. For sufficiently well-contracting Markov chains the CLT was first proven by Dobrushin (1956). Since then, many results have been proven for stationary chains. In 2021, Dolgopyat and Sarig proved local central limit theorems (LCLT) for inhomogeneous Markov chains. In 2022, Dolgopyat and Hafouta proved optimal CLT rates in Dobrushin's CLT. These results closed a significant gap in the literature concerning the non-stationary case.

    An open problem raised by Dolgopyat and Sarig in their 2021 book concerns limit theorems for Markov shifts, that is, when the underlying sequence of functions that form the partial sums depends on the entire path of the chain. Two circumstances where such dependence arises are products of random matrices and random iterated functions, and there are many other instances in which the functionals depend on the entire path.

    In this talk we will present our solution to the above problem. More precisely, we prove the CLT, optimal CLT rates and the LCLT for a wide class of sufficiently well-mixing Markov chains and functionals with infinite memory. Even though the inhomogeneous case is more complicated, our results appear to be new even for stationary chains.
    Lecture
  • Date: Thursday, 25 December 2025

    Apoptotic Pathways as Molecular Switches of Tumor Initiation and Reversion

    Time
    14:00 - 15:00
    Location
    Candiotty
    Auditorium
    Lecturer
    Prof. Sarit Larisch
    Organizer
    Dwek Institute for Cancer Therapy Research
    Lecture
  • Date: Thursday, 25 December 2025

    Tracking the emergence of intentions in the human motor cortex - evidence from intracranial neuronal recordings

    Time
    14:00 - 15:00
    Location
    Gerhard M.J. Schmidt Lecture Hall
    Lecturer
    Uri Maoz, PhD
    Organizer
    Department of Brain Sciences
    Abstract
    How voluntary, self-paced intentions emerge in the brain and translate into action remains one of the most fundamental open questions in neuroscience. Leveraging rare access to intracranial neuronal recordings from human motor cortex, we built a real-time, online closed-loop system that allowed us to study the formation of voluntary actions under competitive conditions.

    We show that participants have only limited capacity to voluntarily steer their motor-cortex activity when doing so is strategically advantageous, revealing tight constraints on intentional control at the neural population level. Yet the commitment to act can be decoded reliably from motor-cortex activity roughly 250 ms before movement onset, at a time point when participants report already being consciously aware of their decision. We also find that brain–computer interfaces trained in one cognitive context transfer seamlessly to another, despite substantial differences in neural trajectories and force profiles, suggesting a shared underlying representational structure for volitional actions in motor cortex.

    Offline analyses further uncovered the specific neural patterns that signal commitment to action, shedding new light on how early voluntary actions can be reliably predicted from motor-cortex activity. We will conclude by discussing how these and related results inform emerging efforts to track and interpret intentions in advanced AI systems (ai-intentions.org).
    Lecture
  • Date: Sunday, 28 December 2025

    Scientific Council Meeting

    Time
    09:38 - 10:38
    Title
    PhD hc + Ceremony for new members of the SC + Council of Prof., with the participation of the President
    Location
    The David Lopatie Conference Centre
    KIMEL
    Academic Events
  • Date: Sunday, 28 December 2025

    The Clore Center for Biological Physics

    Time
    13:15 - 14:30
    Title
    Anticipatory and Responsive Regulation of Blood Glucose Levels
    Location
    Nella and Leon Benoziyo Physics Library
    Lecturer
    Dr. Danny Ben-Zvi
    Lunch at 12:45
    Abstract
    Glucose can enter the blood following a meal, and/or can be produced by the liver and kidneys at times of need such as fasting. An elevation in blood glucose beyond steady-state levels leads to secretion of the hormone insulin, which in turn increases glucose uptake into muscle and adipose tissues. Diabetes mellitus arises when insufficient levels of insulin are secreted into the blood, manifesting as a chronic elevation in blood glucose levels. A reduction in glucose levels can lead to secretion of a large number of hormones, such as glucagon, cortisol and adrenaline, which cause endogenous glucose production and secretion into the blood, maintaining homeostasis of glucose levels. In this talk we will use mathematical modeling and biochemical measurements to study the dynamics of hormone secretion in healthy individuals and diabetes patients, and (hopefully) provide an answer to a key question: does the "body" measure glucose levels and regulate them accordingly by secreting insulin/glucose, as expected from a standard negative feedback system, or does it estimate future glucose levels and secrete hormones/glucose in a feedforward mechanism?

    Students interested in meeting the speaker after the seminar may sign up here: LINK

    For the latest updates and content on soft matter and biological physics at the Weizmann, visit our website: https://www.bio
    Lecture
  • Date: Monday, 29 December 2025

    PhD Defense Seminar - Ofir Kuperman

    Time
    10:00 - 11:00
    Title
    Deciphering Sugar Uptake, Transport and Incorporation Mechanisms by Plant Tissues in the Context of Material Farming
    Location
    Nella and Leon Benoziyo Building for Plant and Environmental Sciences
    691
    Lecture
  • Date: Monday, 29 December 2025

    Foundations of Computer Science Seminar

    Time
    11:15 - 12:15
    Title
    From Learning Theory to Cryptography: Provable Guarantees for AI
    Location
    Jacob Ziskind Building
    Room 1
    Lecturer
    Jonathan Shafer
    MIT
    Organizer
    Department of Computer Science and Applied Mathematics
    Abstract
    Ensuring that AI systems behave as intended is a central challenge in contemporary AI. This talk offers an exposition of provable mathematical guarantees for learning and security in AI systems.

    Starting with a classic learning-theoretic perspective on generalization guarantees, we present two results quantifying the amount of training data that is provably necessary and sufficient for learning: (1) In online learning, we show that access to unlabeled data can reduce the number of prediction mistakes quadratically, but no more than quadratically [NeurIPS23, NeurIPS25 Best Paper Runner-Up]. (2) In statistical learning, we discuss how much labeled data is actually necessary for learning—resolving a long-standing gap left open by the celebrated VC theorem [COLT23].

    Provable guarantees are especially valuable in settings that require security in the face of malicious adversaries. The main part of the talk adopts a cryptographic perspective, showing how to: (1) Utilize interactive proof systems to delegate data collection and AI training tasks to an untrusted party [ITCS21, COLT23, NeurIPS25]. (2) Leverage random self-reducibility to provably remove backdoors from AI models, even when those backdoors are themselves provably undetectable [STOC25].

    The talk concludes with an exploration of future directions concerning generalization in generative models, and AI alignment against malicious and deceptive AI.
    Lecture
  • Date: Tuesday, 30 December 2025

    iSCAR Seminar

    Time
    09:00 - 10:00
    Title
    Wicked Lymphatics Shape the Epigenetic Landscape of Epithelial Stem Cell Plasticity
    Location
    Max and Lillian Candiotty Building
    Auditorium
    Lecturer
    Dr. Shiri Gur-Cohen
    Organizer
    Department of Immunology and Regenerative Biology
    Lecture
  • Date: Tuesday, 30 December 2025

    Chemical Evolution: How Can Chemistry Invent Biology?

    Time
    11:15 - 12:15
    Location
    Gerhard M.J. Schmidt Lecture Hall
    Lecturer
    Dr. Moran Frenkel-Pinter
    Organizer
    Department of Chemical and Structural Biology
    Lecture
  • Date: Tuesday, 30 December 2025

    Vision and AI

    Time
    11:15 - 12:15
    Title
    Efficient representations for dense reasoning with long videos
    Location
    Jacob Ziskind Building
    Room 155
    Lecturer
    Greg Shakhnarovich
    TTI-C
    Organizer
    Department of Computer Science and Applied Mathematics
    Abstract
    In some video understanding scenarios, it is important to capture details that exist at fine temporal resolution, over a significant length of context (hundreds, thousands, and even tens of thousands of frames). This poses a computational challenge for many existing video encoders. I will discuss our recent efforts on developing models for video representation that address this challenge in two ways, each with a different kind of video task in mind. In our work on sign language understanding, we extract information from each video frame in a highly selective way, and train the long-context encoder from a large video corpus without any labels. The resulting video model, SHuBERT, is a "foundation model" for American Sign Language achieving state-of-the-art performance on multiple sign language understanding tasks. In another, ongoing effort, we focus on the task of nonlinear movie editing, and develop an autoregressive model that relies on a highly compressed representation of video frames. This model, trained on an unlabeled corpus of movies, yields state-of-the-art results on complex movie editing tasks and on editing-related video understanding benchmarks.
    Lecture
  • Date: Tuesday, 30 December 2025

    Mathematical regularities of irregular neural codes for space

    Time
    12:30 - 13:30
    Location
    Gerhard M.J. Schmidt Lecture Hall
    Lecturer
    Prof. Yoram Burak
    Organizer
    Department of Brain Sciences
    Abstract
    Much of the thinking about neural population codes was motivated in the past decades by reports on neurons with highly stereotyped tuning functions. Indeed, neurons are often observed to exhibit smooth, unimodal tuning to an encoded variable, centered around preferred stimuli that vary across the neural population. Recent experiments, however, have uncovered neural response functions that are much less stereotyped and regular than observed previously. Some of the most striking examples have been observed in the hippocampus and its associated brain areas. The classical view has been that hippocampal place cells are active only in a compact region of space and exhibit a stereotyped tuning to position. In contrast to this expectation, however, place cells in large environments typically fire in multiple locations, and the multiple firing fields of individual cells, as well as those of the whole population, vary in size and shape. We recently discovered that a remarkably simple mathematical model, in which firing fields are generated by thresholding a realization of a random Gaussian process, accounts for the statistical properties of place fields in precise quantitative detail. The model captures the statistics of field sizes and positions, and generates new quantitative predictions on the statistics of field shapes and topologies. These predictions are quantitatively verified in multiple recent data sets from bats and rodents, in one, two, and three dimensions, in both small and large environments. Together, these results imply that common mechanisms underlie the diverse statistics observed in the different experiments. I will discuss a mechanistic model, which suggests that the random Gaussian process statistics arise due to random connectivity within the CA3-CA1 hippocampal circuit. If time permits I will present another recent work, in which we uncover simple principles underlying the spatial selectivity in a new class of neurons in the medial entorhinal cortex, with tuning curves that are significantly less regular than those of classical grid cells. 
    Lecture
  • Date: Tuesday, 30 December 2025

    The Clore Center for Biological Physics

    Time
    13:15 - 14:30
    Title
    The Physics of Learnable Data
    Location
    Nella and Leon Benoziyo Physics Library
    Lecturer
    Dr. Noam Itzhak Levi
    Lunch at 12:45
    Abstract
    The power of physics lies in its ability to use simple models to predict the behavior of highly complex systems — allowing us to ignore microscopic details or, conversely, to explain macroscopic phenomena through minimal constituents. In this seminar, I will explore how these physical principles of universality and reductionism extend beyond the natural universe to the space of generative models and natural data.

    I will begin by discussing major open problems in modern machine learning where a physics perspective is particularly impactful. Focusing on the role of data in the learning process, I will first examine the "Gaussian" approximation of real-world datasets, which is widely used in theoretical calculations. I will then argue that truly understanding generative models (such as diffusion and language models) requires characterizing the non-trivial latent structure of their training data, shifting the problem from networks to data.

    I will present a simple yet predictive hierarchical generative model of data, and demonstrate how this hierarchical structure can be probed using diffusion models and observables drawn from statistical physics. Finally, I will discuss future prospects, connecting hierarchical compositionality to semantic structures in natural language and looking beyond the diffusion paradigm.
    Lecture
  • Date: Wednesday, 31 December 2025

    Life Sciences Luncheon

    Time
    12:30 - 14:00
    Title
    Prof. Rotem Sorek
    Location
    Nella and Leon Benoziyo Building for Biological Sciences
    Auditorium
    Lecturer
    Dr. Andrei Reznikov
    Lecture
  • Date: Wednesday, 31 December 2025

    Special Guest Seminar with Prof. Itai Yanai

    Time
    14:30 - 15:30
    Location
    Arthur and Rochelle Belfer Building for Biomedical Research
    Botnar Auditorium
    Lecturer
    Prof. Itai Yanai
    Lecture
  • Date: Thursday, 1 January 2026

    Vision and AI

    Time
    12:15 - 13:15
    Title
    Bridging Generative Models and Visual Communication
    Location
    Jacob Ziskind Building
    Lecture Hall - Room 1
    Lecturer
    Yael Vinker
    MIT
    Organizer
    Department of Computer Science and Applied Mathematics
    Abstract
    From rough sketches that spark ideas to polished illustrations that explain complex concepts, visual communication is central to how humans think, create, and share knowledge. Yet despite major advances in generative AI, we are still far from models that can reason and communicate through visual forms.

    I will present my work on bridging generative models and visual communication, focusing on three complementary domains: (1) algorithms for generating and understanding sketches, (2) systems that support exploratory visual creation beyond one-shot generation, and (3) methods for producing editable, parametric images for design applications.

    These domains pose unique challenges: they are inherently data-scarce and rely on representations that go beyond pixel-based images commonly used in standard models. I will show how the rich priors of vision-language models can be leveraged to address these challenges through novel optimization objectives and regularization techniques that connect their learned features with the specialized representations required for visual communication.

    Looking ahead, this research lays the foundation for general-purpose visual communication technologies: intelligent systems that collaborate with humans in visual domains, enhancing how we design, learn, and exchange knowledge.

    Bio:

    Yael Vinker is a Postdoctoral Associate at MIT CSAIL, working with Prof. Antonio Torralba. She received her Ph.D. in Computer Science from Tel Aviv University, advised by Profs. Daniel Cohen-Or and Ariel Shamir. Her research spans computer graphics, computer vision, and machine learning, with a focus on generative models for visual communication. Her work has been recognized with two Best Paper Awards (SIGGRAPH 2022, SIGGRAPH Asia 2023) and a Best Paper Honorable Mention (SIGGRAPH 2023). She was selected as an MIT EECS Rising Star (2024) and received the Blavatnik Prize for Outstanding Israeli Doctoral Students in Computer Science (2024) as well as the VATAT Ph.D. Fellowship.
    Lecture
  • Date: Thursday, 1 January 2026

    Geometric Functional Analysis and Probability Seminar

    Time
    13:30 - 14:30
    Title
    TBD
    Location
    Jacob Ziskind Building
    Room 155
    Lecturer
    Eyal Lubetzky
    NYU
    Organizer
    Faculty of Mathematics and Computer Science
    Lecture
  • Date: Thursday, 1 January 2026

    Tumor Innervation as a Novel Therapeutic Target

    Time
    14:00 - 15:00
    Location
    Candiotty
    Auditorium
    Lecturer
    Prof. Ronny Drapkin
    Organizer
    Dwek Institute for Cancer Therapy Research
    Lecture
  • Date: Sunday, 4 January 2026

    Foundations of Computer Science Seminar

    Time
    11:00 - 12:15
    Title
    Efficient LLM Systems: From Algorithm Design to Deployment
    Location
    Elaine and Bram Goldsmith Building for Mathematics and Computer Sciences
    Room 108
    Lecturer
    Rana Shahout
    Harvard University
    Organizer
    Department of Computer Science and Applied Mathematics
    Abstract
    Large Language Models (LLMs) have transformed what machines can do and how systems are designed to serve them. These models are demanding in both computation and memory, revealing the limits of traditional optimization methods that once sufficed for conventional systems. A central challenge in building LLM systems is improving system metrics while ensuring response quality.

    This talk presents approaches for reducing latency in LLM systems to support interactive applications, from scheduling algorithm design to deployment. It introduces scheduling frameworks that use lightweight predictions of request behavior to make informed decisions about prioritization and memory management across two core settings: standalone LLM inference and API-augmented LLMs that interact with external tools. Across both settings, prediction-guided scheduling delivers substantial latency reductions while remaining practical for deployment.
    Lecture
