Upcoming Seminars

Wednesday, Jan 20, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Li Huajie
Title: An infinitesimal variant of Guo-Jacquet trace formulae and its comparison

This talk is based on my thesis, supervised by P.-H. Chaudouard. The conjecture of Guo-Jacquet is a promising generalization to higher dimensions of Waldspurger's well-known theorem on the relation between toric periods and central values of automorphic L-functions for GL(2). However, we are faced with divergent integrals when applying the relative trace formula approach. In this talk, we study an infinitesimal variant of this problem. Concretely, we establish global and local trace formulae for infinitesimal symmetric spaces of Guo-Jacquet. To compare regular semi-simple terms, we present the weighted fundamental lemma and certain identities between Fourier transforms of local weighted orbital integrals.


https://weizmann.zoom.us/j/98304397425

Thursday, Jan 21, 2021, 18:00
Vision and Robotics Seminar
Speaker: Or Litany
Title: Learning on Pointclouds for 3D Scene Understanding

In this talk I'll cover several works on 3D deep learning on point clouds for scene understanding tasks.
First, I'll describe VoteNet (ICCV 2019, best paper nomination): a method for object detection from 3D point cloud input, inspired by the classical generalized Hough voting technique. I'll then explain how we integrated image information into the voting scheme to further boost 3D detection (ImVoteNet, CVPR 2020). In the second part of my talk, I'll describe recent studies focusing on reducing supervision for 3D scene understanding tasks, including PointContrast, a self-supervised representation learning framework for 3D point clouds (ECCV 2020). Our findings in PointContrast are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvements over recent best results in segmentation and detection across six different benchmarks covering indoor and outdoor, real and synthetic datasets, demonstrating that the learned representation can generalize across domains.
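To make the contrastive pre-training idea concrete, here is a minimal InfoNCE-style loss on unit-norm feature vectors. This is an illustrative sketch only, not PointContrast's exact objective (which contrasts matched point features across views); the function name `info_nce` and the temperature value are assumptions for the example.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """Illustrative InfoNCE-style contrastive loss (not the paper's exact
    PointContrast loss). anchor, positive: (d,) unit vectors; negatives:
    (m, d). Lower loss when anchor is similar to positive and dissimilar
    to the negatives."""
    pos = anchor @ positive / tau            # similarity to the positive
    neg = negatives @ anchor / tau           # similarities to negatives
    logits = np.concatenate([[pos], neg])
    # cross-entropy with the positive treated as the correct "class"
    return -pos + np.log(np.exp(logits).sum())
```

Hard negatives (ones close to the anchor) raise the loss, which is what pushes the representation to separate them.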


https://weizmann.zoom.us/j/93960308443?pwd=b1BWMkZURjZ0THMxUisrSFpMaHNXdz09

Thursday, Jan 21, 2021, 18:00
Machine Learning and Statistics Seminar
Speaker: Or Litany
Title: Learning on Pointclouds for 3D Scene Understanding

In this talk I'll cover several works on 3D deep learning on point clouds for scene understanding tasks.
First, I'll describe VoteNet (ICCV 2019, best paper nomination): a method for object detection from 3D point cloud input, inspired by the classical generalized Hough voting technique. I'll then explain how we integrated image information into the voting scheme to further boost 3D detection (ImVoteNet, CVPR 2020). In the second part of my talk, I'll describe recent studies focusing on reducing supervision for 3D scene understanding tasks, including PointContrast, a self-supervised representation learning framework for 3D point clouds (ECCV 2020). Our findings in PointContrast are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvements over recent best results in segmentation and detection across six different benchmarks covering indoor and outdoor, real and synthetic datasets, demonstrating that the learned representation can generalize across domains.


https://weizmann.zoom.us/j/93960308443?pwd=b1BWMkZURjZ0THMxUisrSFpMaHNXdz09

Wednesday, Jan 27, 2021, 14:00
Machine Learning and Statistics Seminar
Speaker: Haggai Maron
Title: Deep Learning on Structured and Geometric Data

Deep Learning of structured and geometric data, such as sets, graphs, and surfaces, is a prominent research direction that has received considerable attention in the last few years. Given a learning task that involves structured data, the main challenge is identifying suitable neural network architectures and understanding their theoretical and practical tradeoffs.

This talk will focus on a popular learning setup where the learning task is invariant to a group of transformations of the input data. This setup is relevant to many popular learning tasks and data types. In the first part of the talk, I will present a general framework for designing neural network architectures based on layers that respect these transformations. In particular, I will show that these layers can be implemented using parameter-sharing schemes induced by the group. In the second part of the talk, I will demonstrate the framework’s applicability by presenting novel neural network architectures for two widely used data types: graphs and sets. I will also show that these architectures have desirable theoretical properties and that they perform well in practice.
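The parameter-sharing idea for invariant learning can be illustrated on the simplest case, sets acted on by permutations (the Deep Sets construction). The sketch below is a generic illustration, not the specific architectures of the talk; the function names are hypothetical.

```python
import numpy as np

def equivariant_layer(x, w_self, w_pool):
    """Permutation-equivariant linear layer via parameter sharing:
    every set element is transformed by the same weights w_self, plus a
    shared term computed from an order-independent pooling of the whole
    set. x: (n, d) set of n elements; w_self, w_pool: (d, k)."""
    pooled = x.mean(axis=0, keepdims=True)   # invariant summary of the set
    return x @ w_self + pooled @ w_pool      # same parameters for each element

def invariant_readout(h):
    """Summing over the set axis turns equivariant features into a
    permutation-invariant output."""
    return h.sum(axis=0)
```

Because the only cross-element interaction goes through the pooled summary, permuting the input rows permutes the layer's output rows identically (equivariance), and the summed readout is unchanged (invariance).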


https://weizmann.zoom.us/j/98859805667?pwd=dCthazViNFM1Y2FFeCtIcitORUR3Zz09

Wednesday, Jan 27, 2021, 18:30
Superalgebra Theory and Representations Seminar
Speaker: Volodymyr Mazorchuk
Title: Bigrassmannian permutations and Verma modules

In this talk I will describe how bigrassmannian permutations control the socle of the cokernel of embeddings of Verma modules for sl_n. An application is a description of the socle of the cokernel of homomorphisms between Verma modules for the periplectic Lie superalgebra. This is based on two joint works: one with Hankyung Ko and Rafael Mrden, and another with Chih-Whi Chen.


https://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

Meeting ID: 881 8925 8443

Passcode: 40320

Thursday, Jan 28, 2021, 12:15
Vision and Robotics Seminar
Speaker: Shai Avidan
Title: Learning to Sample

There is a growing number of tasks that work directly on point clouds. As the size of the point cloud grows, so do the computational demands of these tasks. A possible solution is to sample the point cloud first. Classic sampling approaches, such as farthest point sampling (FPS), do not consider the downstream task. Recent work showed that learning a task-specific sampling can significantly improve results. However, the proposed technique did not deal with the non-differentiability of the sampling operation and offered a workaround instead. We introduce a novel differentiable relaxation for point cloud sampling that approximates sampled points as a mixture of points in the primary input cloud. Our approximation scheme leads to consistently good results on classification and geometry reconstruction applications. We also show that the proposed sampling method can be used as a front to a point cloud registration network. This is a challenging task, since sampling must be consistent across two different point clouds for a shared downstream task. In all cases, our approach outperforms existing non-learned and learned sampling alternatives.

Based on the work of: Itai Lang, Oren Dovrat, and Asaf Manor
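For reference, the classic task-agnostic FPS baseline mentioned in the abstract can be sketched in a few lines (a minimal illustration, not the talk's learned sampler; the function name is an assumption):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Classic farthest point sampling: greedily add the point that is
    farthest from the set already chosen. points: (n, 3) array; returns
    a (k, 3) subset. Task-agnostic: no downstream objective is used."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]          # arbitrary first point
    dist = np.full(n, np.inf)                # distance to the chosen set
    for _ in range(k - 1):
        d = np.linalg.norm(points - points[chosen[-1]], axis=1)
        dist = np.minimum(dist, d)           # nearest-chosen distance update
        chosen.append(int(np.argmax(dist)))  # farthest remaining point
    return points[chosen]
```

FPS spreads samples evenly over the shape, which is exactly why it cannot favor the regions a specific downstream task cares about; the learned, differentiable sampler is designed to fill that gap.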


https://weizmann.zoom.us/j/91363737177?pwd=bEdNSU9tcVRCQXhjaDRPSXIyWCswUT09