Upcoming Seminars

Tuesday, Jan 21, 2020, 11:15
Algebraic Geometry and Representation Theory Seminar, Room 155
Speaker: Valentin Buciumas
Title: A bridge between p-adic and quantum group representations via Whittaker coinvariants
Abstract:

Unramified principal series representations of p-adic GL(r) and its metaplectic covers are important in the theory of automorphic forms. I will present a method of relating the Whittaker coinvariants of such a representation to representations of quantum affine gl_n. This involves a Schur-Weyl duality result due to Chari and Pressley, and it allows us to compute the dimension of the Whittaker model of every irreducible smooth representation with Iwahori-fixed vectors.

If time permits, I will explain a conjectured version of this result for the symplectic group Sp(2r), which involves quantum symmetric pairs.

Thursday, Jan 23, 2020, 12:15
Vision and Robotics Seminar, Room 1
Speaker: Oren Dovrat
Title: Learning to Sample
Abstract:

Processing large point clouds is a challenging task, so the data is often sampled down to a size that can be processed more easily. The question is how to sample the data. A popular technique is Farthest Point Sampling (FPS). However, FPS is agnostic to the downstream application (classification, retrieval, etc.); the underlying assumption seems to be that minimizing the farthest-point distance, as FPS does, is a good proxy for other objective functions.
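
For concreteness, here is a minimal NumPy sketch of FPS; it illustrates the standard algorithm, not code from the talk:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Greedily select k points from an (n, d) array, each time taking
    the point farthest from everything selected so far."""
    n = points.shape[0]
    selected = np.zeros(k, dtype=int)   # indices of chosen points
    dist = np.full(n, np.inf)           # distance to nearest chosen point
    selected[0] = 0                     # arbitrary seed; a random start is also common
    for i in range(1, k):
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum('ij,ij->i', diff, diff))
        selected[i] = int(np.argmax(dist))
    return points[selected]
```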

We show that it is better to learn how to sample. To that end, we propose a deep network that simplifies 3D point clouds. The network, termed S-NET, takes a point cloud and produces a smaller point cloud that is optimized for a particular task. The simplified point cloud is not guaranteed to be a subset of the original point cloud, so we match it to a subset of the original points in a post-processing step. We contrast our approach with FPS on two standard data sets and show significantly better results for a variety of applications. Our code is publicly available.
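
One simple way to realize such a matching step is to snap each generated point to its nearest neighbour in the input cloud; a sketch of that idea follows (the paper's exact matching procedure may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_to_subset(generated: np.ndarray, original: np.ndarray) -> np.ndarray:
    """Replace each generated point by its nearest neighbour in the
    original cloud, so the output is a genuine subset of the input.
    Illustrative post-processing only; not the authors' code."""
    tree = cKDTree(original)
    _, idx = tree.query(generated)    # nearest original point per sample
    return original[np.unique(idx)]   # deduplicate to get a proper subset
```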

Thursday, Jan 23, 2020, 13:30
Geometric Functional Analysis and Probability Seminar, Room 155
Speaker: James Lee
Title: Metric entropies, L_1 transportation, and competitive analysis
Abstract:

The metrical task systems (MTS) problem (Borodin, Linial, and Saks 1992) is a general model for the analysis of algorithms that optimize in the presence of information arriving over time, where the state space is equipped with a metric. I will discuss a relatively new approach to this area, in which both the algorithm and the method of analysis are derived canonically from a choice of "regularizer" on the probability simplex. The regularizer can be interpreted as a Riemannian structure on the simplex, and the algorithm is then simply a gradient flow.
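
To illustrate the general recipe, here is a discrete-time sketch with the plain negative-entropy regularizer, which yields the classical multiplicative-weights update; the talk's regularizer is a "noisy" multiscale entropy, so this is only an analogy:

```python
import numpy as np

def entropic_mirror_descent_step(p, cost_grad, eta):
    """One step of mirror descent on the probability simplex with
    regularizer R(p) = sum_i p_i log p_i (negative entropy).  The
    induced gradient flow, discretized with step size eta, becomes a
    multiplicative update followed by renormalization."""
    q = p * np.exp(-eta * np.asarray(cost_grad))
    return q / q.sum()

# usage: start uniform over 4 states and respond to an arriving cost vector
p = np.ones(4) / 4
p = entropic_mirror_descent_step(p, cost_grad=[1.0, 0.0, 0.5, 0.2], eta=0.1)
```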

Defining the regularizer as an appropriate "noisy" multiscale metric entropy (similar, e.g., to the Talagrand \gamma_1 functional) yields the best-known competitive ratio for every metric space. For ultrametrics (a.k.a. "HST metrics"), this achieves the conjectured bound, but the algorithm falls short of resolving the MTS conjecture for many other spaces, including subsets of the real line.

Sunday, Jan 26, 2020, 11:15
Faculty Seminar, Room 155
Speaker: Meirav Zehavi
Title: On Parameterized Analysis and the Disjoint Paths Problem
Abstract:

Parameterized Analysis leads both to a deeper understanding of intractability results and to practical solutions for many NP-hard problems. Informally speaking, Parameterized Analysis is a mathematical paradigm for answering the following fundamental question: what makes an NP-hard problem hard? Specifically, how do different parameters (formal quantifications of structure) of an NP-hard problem relate to its inherent difficulty? Can we exploit these relations algorithmically, and to what extent? Over the past three decades, Parameterized Analysis has grown into a mature field of outstandingly broad scope, with significant theoretical and practical impact on computation.
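
A textbook illustration of the paradigm (standard material, not taken from this talk) is Vertex Cover parameterized by solution size k: the problem is NP-hard in general, yet a simple branching algorithm runs in O(2^k * m) time, confining the combinatorial explosion to the parameter:

```python
def vertex_cover_at_most_k(edges, k):
    """Classic 2^k branching for Vertex Cover: any edge (u, v) must have
    an endpoint in the cover, so try both choices with budget k - 1.
    Returns a cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()   # nothing left to cover
    if k == 0:
        return None    # budget exhausted, but edges remain
    u, v = edges[0]
    for w in (u, v):
        remaining = [e for e in edges if w not in e]
        sub = vertex_cover_at_most_k(remaining, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# usage: a path on 4 vertices has a vertex cover of size 2
print(vertex_cover_at_most_k([(1, 2), (2, 3), (3, 4)], 2))  # e.g. {1, 3}
```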

In this talk, I will first give a brief introduction to the field of Parameterized Analysis. Then, I will discuss some recent work in this field, mainly addressing (i) problems at the core of the field, rooted in Graph Minors and graph modification problems, and (ii) applications of the tools developed in (i) in particular, and of parameterized analysis in general, to Computational Geometry and Computational Social Choice. Additionally, I will zoom in on a specific result, namely the first single-exponential-time parameterized algorithm for the Disjoint Paths problem on planar graphs. An efficient algorithm for the Disjoint Paths problem in general, and on "almost planar" graphs in particular, is a critical part of the quest to establish an efficient Graph Minors theory. As the Graph Minors Theory is the origin of Parameterized Analysis and ubiquitous in the design of parameterized algorithms, making this theory efficient is a holy grail of the field.

Monday, Jan 27, 2020, 11:15
Computer Science Seminar, Room 155
Special Foundations of Computer Science Seminar
Speaker: Yuval Dagan
Title: Interaction is necessary for distributed learning with privacy or communication constraints
Abstract:

Local differential privacy (LDP) is a model in which users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model, the protocol consists of a single round in which the server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest.

Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction, this lower bound implies an exponential lower bound for stochastic convex optimization and, specifically, for learning linear models with a convex, Lipschitz, and smooth loss. These results answer questions posed by Smith, Thakurta, and Upadhyay (2017) and by Daniely and Feldman (2018). Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large-margin hyperplane. These lower bounds also hold in the model where communication from each user is limited, and they follow from a lower bound on learning using non-adaptive statistical queries.

Link: https://arxiv.org/abs/1911.04014
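
For readers unfamiliar with the model, the canonical non-interactive LDP mechanism is randomized response; this sketch only illustrates the single-round setting the lower bound targets, and is not a construction from the paper:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Each user privatizes one bit locally and sends a single message:
    report the true bit with probability e^eps / (e^eps + 1), else flip.
    This satisfies epsilon-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon: float) -> float:
    """Server-side unbiased estimate of the true mean of the users' bits,
    computed in one non-interactive round from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)
```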

Thursday, Jan 30, 2020, 12:15
Vision and Robotics Seminar, Room 1
Speaker: Oshri Halimi
Title: Handling the Unknown with Non-Rigid Geometric Invariants
Abstract:

My goal is to demonstrate how geometric priors can replace the requirement for annotated 3D data. In this context, I will present two of my works. First, I will present a deep learning framework that estimates dense correspondence between articulated 3D shapes without any ground-truth labeling, which I presented as an oral at CVPR 2019: "Unsupervised Learning of Dense Shape Correspondence". We demonstrated that our method is applicable to full and partial 3D models as well as to realistic scans. The problem of incomplete 3D data also arises in many other scenarios. One such problem of great practical importance is shape completion, to which I will dedicate the second part of my lecture.
It is common to encounter situations where there is a considerable discrepancy between the scanned 3D model and the final rendered one. The discrepancy can be attributed to occlusions, or to a directional view of the target when the scanning device is localized in space. I will demonstrate that geometric priors can guide learning algorithms in the task of 3D model completion from partial observations. To this end, I will present our recent work, "The Whole is Greater than the Sum of its Non-Rigid Parts". This famous declaration of Aristotle was adopted by the Gestalt school of psychology in the twentieth century to explain human perception. Here, we claim that by observing part of an object that was previously acquired as a whole, one can handle both partial matching and shape completion in a holistic manner.

More specifically, given the geometry of a full, articulated object in a given pose, as well as a partial scan of the same object in a different pose, we address the problem of matching the part to the whole while simultaneously reconstructing the new pose from its partial observation. Our approach is data-driven and takes the form of a Siamese autoencoder that does not require a consistent vertex labeling at inference time; as such, it can be used on unorganized point clouds as well as on triangle meshes. We demonstrate the practical effectiveness of our model in the applications of single-view deformable shape completion and dense shape correspondence, on both synthetic and real-world geometric data, where we outperform prior work on these tasks by a large margin.

Thursday, Mar 26, 2020, 12:15
Vision and Robotics Seminar, Room 1
Speaker: Yuval Atzmon
Title: TBA