Upcoming Seminars

Wednesday, Jan 27, 2021, 14:00
Machine Learning and Statistics Seminar
Speaker: Haggai Maron
Title: Deep Learning on Structured and Geometric Data
Abstract:

Deep Learning of structured and geometric data, such as sets, graphs, and surfaces, is a prominent research direction that has received considerable attention in the last few years. Given a learning task that involves structured data, the main challenge is identifying suitable neural network architectures and understanding their theoretical and practical tradeoffs.

This talk will focus on a popular learning setup where the learning task is invariant to a group of transformations of the input data. This setup is relevant to many popular learning tasks and data types. In the first part of the talk, I will present a general framework for designing neural network architectures based on layers that respect these transformations. In particular, I will show that these layers can be implemented using parameter-sharing schemes induced by the group. In the second part of the talk, I will demonstrate the framework’s applicability by presenting novel neural network architectures for two widely used data types: graphs and sets. I will also show that these architectures have desirable theoretical properties and that they perform well in practice.
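As a minimal illustration of the parameter-sharing idea for permutation symmetry (a DeepSets-style layer on sets; this sketch and its names are my own, not code from the talk), a linear layer that respects the symmetric group shares just two scalar weights across all set elements:

```python
import numpy as np

def equivariant_layer(X, lam, gam):
    """Permutation-equivariant linear layer on a set of feature vectors.

    Parameter sharing induced by the group: every element is transformed
    with the same weights (lam, gam), so permuting the rows of X permutes
    the rows of the output in exactly the same way.
    """
    return lam * X + gam * X.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))      # a set of 5 feature vectors in R^3
perm = rng.permutation(5)

# Sum-pooling after an equivariant layer gives a permutation-INVARIANT output:
# shuffling the set does not change the result.
out_orig = equivariant_layer(X, 2.0, 0.5).sum(axis=0)
out_perm = equivariant_layer(X[perm], 2.0, 0.5).sum(axis=0)
```

The names `equivariant_layer`, `lam`, and `gam` are illustrative; the framework described in the talk generalizes this sharing scheme from permutations of sets to other groups, including the symmetries relevant to graphs.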

Wednesday, Jan 27, 2021, 18:30
Superalgebra Theory and Representations Seminar
Speaker: Volodymyr Mazorchuk
Title: Bigrassmannian permutations and Verma modules
Abstract:

In this talk I will describe how bigrassmannian permutations control the socle of the cokernel of embeddings of Verma modules for sl_n. As an application, we obtain a description of the socle of the cokernel of homomorphisms between Verma modules for the periplectic Lie superalgebra. This is based on two joint works: one with Hankyung Ko and Rafael Mrden, and another with Chih-Whi Chen.

Meeting ID: 881 8925 8443

Passcode: 40320

Thursday, Jan 28, 2021, 12:15
Vision and Robotics Seminar
Speaker: Shai Avidan
Title: Learning to Sample
Abstract:

There is a growing number of tasks that work directly on point clouds. As the size of the point cloud grows, so do the computational demands of these tasks. A possible solution is to sample the point cloud first. Classic sampling approaches, such as farthest point sampling (FPS), do not consider the downstream task. A recent work showed that learning a task-specific sampling can improve results significantly. However, the proposed technique did not deal with the non-differentiability of the sampling operation and offered a workaround instead. We introduce a novel differentiable relaxation for point cloud sampling that approximates sampled points as a mixture of points in the primary input cloud. Our approximation scheme leads to consistently good results on classification and geometry reconstruction applications. We also show that the proposed sampling method can serve as a front end to a point cloud registration network. This is a challenging task, since sampling must be consistent across two different point clouds for a shared downstream task. In all cases, our approach outperforms existing non-learned and learned sampling alternatives.

Based on the work of: Itai Lang, Oren Dovrat, and Asaf Manor
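For reference, the classic task-agnostic baseline named in the abstract, farthest point sampling, greedily picks the point farthest from everything chosen so far. A minimal NumPy sketch (the function name and the toy points are illustrative, not from the talk):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Classic FPS: iteratively select the point with the largest
    distance to the set already chosen. points: (n, d) array."""
    chosen = [0]  # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # farthest from the current sample
        chosen.append(nxt)
        # Each point keeps its distance to the NEAREST chosen point.
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

# Two nearly duplicate points near the origin and two far corners:
# FPS spreads the sample out and skips the near-duplicate.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0], [10.0, 10.0]])
sampled = farthest_point_sampling(pts, 3)
```

This greedy spread is exactly what the abstract contrasts with: FPS covers the cloud geometrically but ignores the downstream task, whereas the talk's learned, differentiable relaxation selects points that serve the task being trained.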