Upcoming Seminars

Monday, Dec 05, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Amos Korman
Title: An Algorithmic Perspective to Collective Behavior
Abstract:
In this talk, I will present a new interdisciplinary approach that I have been developing in recent years, aiming to build a bridge between the fields of algorithm theory and collective (animal) behavior. Ideally, an algorithmic perspective on biological phenomena can provide a level of fundamental understanding that is difficult to achieve using typical computational tools employed in this area of research (e.g., differential equations or computer simulations). In turn, this fundamental understanding can provide both qualitative and quantitative predictions that can guide biological research in unconventional directions. I will demonstrate this novel approach by presenting a sequence of works on collective ant navigation, whose experimental part was done in collaboration with the Feinerman ant lab at the Weizmann Institute. In the second part of the talk, I will present a recent result (published in Science Advances 2021) regarding the search efficiency of common animal movement patterns, addressing a long-standing open problem in the area of foraging. I will conclude the talk by discussing potential avenues to employ an algorithmic perspective in biological contexts.
Thursday, Dec 08, 2022, 12:15
Vision and AI, Lecture Hall
Speaker: Niv Cohen
Title: "This is my unicorn, Fluffy": Personalizing frozen vision-language representations
Abstract:
Large Vision & Language models pretrained on web-scale data provide representations invaluable for numerous V&L problems. However, it is unclear how they can be used for reasoning about user-specific visual concepts in unstructured language. We introduce a new learning setup called Personalized Vision & Language (PerVL) with two new benchmark datasets for retrieving and segmenting user-specific "personalized" concepts in the wild. We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts. We demonstrate that our approach learns personalized visual concepts from a few examples and can effectively apply them in image retrieval and semantic segmentation using rich textual queries. (Published as an oral presentation at ECCV 2022. This work was done during an internship at NVIDIA Research Tel Aviv.)

Bio: Niv is a Ph.D. student at The Hebrew University of Jerusalem, advised by Dr. Yedid Hoshen. He received his B.Sc. in mathematics with physics and his M.Sc. in physics, both from the Technion. He is interested in computer vision and representation learning, with a focus on anomaly detection and scientific data.
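The vocabulary-extension idea in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the PerVL architecture: the "text encoder" here is just an average of word vectors, and the concept name "fluffy", the embedding dimension, and the squared-error objective are all assumptions made for the sketch. The key point it shows is that only the one new embedding is optimized while the pretrained vocabulary stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Frozen word embeddings of a pretend pretrained model (hypothetical).
vocab = {"a": rng.normal(size=dim),
         "photo": rng.normal(size=dim),
         "of": rng.normal(size=dim)}

# Visual features of a few examples of the personalized concept (toy data).
examples = rng.normal(size=(3, dim))
target = examples.mean(axis=0)

# Learn ONE new word embedding; nothing else is updated.
new_emb = np.zeros(dim)
lr = 0.1
for _ in range(200):
    grad = 2.0 * (new_emb - target)   # gradient of ||new_emb - target||^2
    new_emb -= lr * grad

vocab["fluffy"] = new_emb             # extended input vocabulary

def encode(words):
    """Stand-in text encoder: average the word embeddings of a query."""
    return np.mean([vocab[w] for w in words], axis=0)

query = encode(["a", "photo", "of", "fluffy"])
```

With the new token in place, the personalized concept can appear inside ordinary rich textual queries, which is what enables retrieval and segmentation "in the wild" in the paper's setup.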
Thursday, Dec 15, 2022, 12:15
Vision and AI, Room 1
Speaker: Yossi Gandelsman
Title: Test-time Training with Self-Supervision
Abstract:
Test-Time Training is a general approach for improving the performance of predictive models when training and test data come from different distributions. It adapts to a new test distribution on the fly by optimizing the model for each test input using self-supervision before making the prediction. This method improves generalization on many real-world visual benchmarks for distribution shifts. In this talk, I will present recent progress in the test-time training paradigm. I will show how masked auto-encoding overcomes the shortcomings of previously used self-supervised tasks and improves results by a large margin. In addition, I will demonstrate how test-time training extends to videos: instead of just testing each frame in temporal order, the model is first fine-tuned on the recent past, then makes its prediction, and only then proceeds to the next frame.