Previous Seminars

Monday, Oct 25, 2021, 16:00
Geometry and Topology Seminar & Mathematical Analysis and Applications Seminar
Speaker: Philipp Habegger | Title: Uniformity for the Number of Rational Points on a Curve | Zoom Meeting: https://weizmann.zoom.us/j/9183356684?pwd=QlN4T1VtcXc4QXdXNEYxUlQ4dmR5dz09
By Faltings's Theorem, formerly known as the Mordell Conjecture, a smooth projective curve of genus at least 2 that is defined over a number field K has at most finitely many K-rational points. Vojta later gave a second proof. Many authors, including Bombieri, de Diego, Parshin, Remond, Vojta, proved upper bounds for the number of K-rational points. I will discuss joint work with Vesselin Dimitrov and Ziyang Gao where we prove that the number of points on the curve is bounded from above as a function of K, the genus, and the rank of the Mordell-Weil group of the curve's Jacobian. We follow Vojta's approach to the Mordell Conjecture and answer a question of Mazur. I will explain the new feature: an inequality for the Neron-Tate height in a family of abelian varieties. It allows us to bound from above the number of points when the modular height of the curve is sufficiently large. This suffices for Mazur's Question.
Wednesday, Oct 20, 2021, 19:15
Superalgebra Theory and Representations Seminar
Speaker: Alex Sherman (BGU) and Crystal Hoyt (Bar Ilan University) | Title: Minicourse on Duflo-Serganova functors | Zoom: https://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

Note: The mini-course will consist of 3 talks, on Wednesdays (Oct. 20th, Oct. 27th, Nov. 3rd), at 19:15 Israel time.

Wednesday, Oct 13, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Gerald Schwarz | Title: Criteria for the zero fiber of a moment map to have rational singularities and applications | Zoom: https://weizmann.zoom.us/j/98304397425

Wednesday, Oct 06, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Dmitry Gourevitch | Title: Finite multiplicities beyond spherical pairs | Zoom: https://weizmann.zoom.us/j/98304397425
Let G be a real reductive algebraic group, and let H be an algebraic subgroup of G. It is known that the action of G on the space of functions on G/H is "tame" if this space is spherical. In particular, the multiplicities of the space of Schwartz functions on G/H are finite in this case. I will talk about a recent joint work with A. Aizenbud in which we formulate and analyze a generalization of sphericity that implies finite multiplicities in the Schwartz space of G/H for small enough irreducible smooth representations of G. In more detail, for every G-space X, and every closed G-invariant subset S of the nilpotent cone of the Lie algebra of G, we define when X is S-spherical, by means of a geometric condition involving dimensions of fibers of the moment map. We then show that if X is S-spherical, then every representation with annihilator variety lying in S has (at most) finite multiplicities in the Schwartz space of X. For the case when S is the closure of the Richardson orbit of a parabolic subgroup P of G, we show that the condition is equivalent to P having finitely many orbits on X. We give applications of our results to branching problems. Our main tool in bounding the multiplicity is the theory of holonomic D-modules. After formulating our main results, I will briefly recall the necessary aspects of this theory and sketch our proofs. The talk is based on arXiv:2109.00204.
Monday, Oct 04, 2021, 11:15
Foundations of Computer Science Colloquium, Room 155
Speaker: Nofar Carmeli | Title: The Fine-Grained Complexity of Answering Database Queries | Zoom: https://weizmann.zoom.us/j/92495224692?pwd=VGxEVjd5V01EeSs0QW1PZmxkdlFQQT09
We wish to identify the queries that can be solved with close to optimal time guarantees over relational databases. Computing all query answers requires at least linear time before the first answer (to read the input and determine the answer's existence), and then we must allow enough time to print all answers (which may be many). Thus, we aspire to achieve linear preprocessing time and constant or logarithmic time per answer. A known dichotomy classifies Conjunctive Queries into those that admit such enumeration and those that do not: the main difficulty of query answering is joining tables, which can be done efficiently if and only if the join query is acyclic. However, the join query usually does not appear in a vacuum; for example, it may be part of a larger query, or it may be applied to a database with dependencies. We show how to use this context for more efficient computation and study how the complexity changes in these settings. Next, we aspire for an even more powerful solution for query answering: a structure that simulates an array containing the query answers. Such a structure can be used for example to enumerate all answers in a statistically meaningful order or to efficiently compute a boxplot of query answers. We call this simulation random access and study for which queries random access can be achieved with near-optimal guarantees. Our results are accompanied by conditional lower bounds showing that our algorithms can be applied to all tractable queries in some cases. Among our results, we show that a union of tractable Conjunctive Queries may be intractable w.r.t. random access, but a union of intractable Conjunctive Queries may be tractable w.r.t. enumeration.
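The "linear preprocessing, constant delay per answer" regime mentioned in the abstract can be illustrated on the simplest acyclic join. The sketch below is illustrative only and is not taken from the talk; the relation names R and S and their contents are made up. After a linear pass that hash-indexes one relation and semi-join-reduces the other, answers of R(x,y) JOIN S(y,z) are streamed with constant delay between consecutive answers.

```python
from collections import defaultdict

# Toy relations for the acyclic join Q(x, y, z) :- R(x, y), S(y, z).
R = [(1, "a"), (2, "a"), (3, "b"), (4, "c")]
S = [("a", 10), ("a", 20), ("b", 30)]

def enumerate_join(R, S):
    """Linear-time preprocessing, then constant delay per answer."""
    # Preprocessing (linear): hash-index S on the join attribute y ...
    index = defaultdict(list)
    for y, z in S:
        index[y].append(z)
    # ... and semi-join-reduce R, discarding dangling tuples so that every
    # remaining R-tuple is guaranteed to produce at least one answer.
    R_reduced = [(x, y) for x, y in R if y in index]
    # Enumeration: answers are produced back to back, with constant work
    # between any two consecutive answers.
    for x, y in R_reduced:
        for z in index[y]:
            yield (x, y, z)

for answer in enumerate_join(R, S):
    print(answer)
```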
Thursday, Aug 19, 2021, 11:00
Geometric Functional Analysis and Probability Seminar
Speaker: Sasha Sodin | Title: Lower bounds on the eigenfunctions of random Schroedinger operators in a strip | Zoom Meeting: https://weizmann.zoom.us/j/5117633893?pwd=TjZHajlNQ1RScldPbEZ4Ny80TEUyQT09
It is known that the eigenfunctions of a random Schroedinger operator in a strip (the direct product of the integer line and a finite set) decay exponentially. In some regimes, the same is true in higher dimensions. It is however not clear whether the eigenfunctions have an exact rate of exponential decay. In the strip, it is natural to expect that the rate should be given by the slowest Lyapunov exponent, however, only the upper bound has been previously established. We shall discuss some recent progress on this problem, and its connection to a question, perhaps interesting in its own right, from the theory of random matrix products. Based on joint work with Ilya Goldsheid.
Sunday, Aug 15, 2021, 11:00
Vision and AI
Speaker: Yoni Kasten | Title: Algebraic Characterization of Relational Camera Pose Measurements in Multiple Images | Zoom: https://weizmann.zoom.us/j/98972228231?pwd=Q1BjWjVtRWFyUXJoVDBHK2g5T2JIdz09 (Please note the unusual time)

Structure from Motion (SfM) deals with recovering camera parameters and 3D scene structure from collections of 2D images. SfM is commonly solved by minimizing the non-convex bundle adjustment objective, which generally requires sophisticated initialization. In this talk I will present two approaches to SfM. The first approach involves averaging of essential or fundamental matrices (also called bifocal tensors). Since the bifocal tensors are computed independently from image pairs, they are generally inconsistent with any set of n cameras. We provide a complete algebraic characterization of the manifold of bifocal tensors for n cameras and present an optimization framework to project measured bifocal tensors onto the manifold. Our second approach is an online approach: given n-1 images, I_1,...,I_{n-1}, whose camera matrices have already been recovered, we seek to recover the camera matrix associated with an image I_n. We present a novel solution to the six-point online algorithm to recover the exterior parameters associated with I_n. Our algorithm uses just six corresponding pairs of 2D points, extracted each from I_n and from any of the preceding n-1 images, allowing the recovery of the full six degrees of freedom of the n-th camera, and unlike common methods, does not require tracking feature points in three or more images. We present experiments that demonstrate the utility of both our approaches. If time permits, I will briefly present additional recent work for solving SfM using deep neural models.
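For background on what a single bifocal-tensor measurement is, the sketch below shows the classical normalized eight-point estimate of a fundamental matrix from one image pair using NumPy. This is only the standard textbook construction of a pairwise measurement, not the consistency characterization, the averaging framework, or the six-point online algorithm from the talk.

```python
import numpy as np

def normalize(pts):
    """Translate points to zero mean and scale average distance to sqrt(2)."""
    mean = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
    T = np.array([[scale, 0, -scale * mean[0]],
                  [0, scale, -scale * mean[1]],
                  [0, 0, 1]])
    pts_h = np.c_[pts, np.ones(len(pts))]
    return (T @ pts_h.T).T, T

def eight_point(p1, p2):
    """Estimate F with x2^T F x1 = 0 from >= 8 matched points (N, 2) arrays."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1]
                  for (u1, v1, _), (u2, v2, _) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, the defining algebraic constraint on a bifocal tensor.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization

# Usage: p1, p2 are (N, 2) arrays of matched pixel coordinates, N >= 8.
```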

Wednesday, Jul 28, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Siddhartha Sahi | Title: A Stone-von Neumann equivalence for smooth representations and its application to degenerate Whittaker models | Zoom: https://weizmann.zoom.us/j/98304397425
The Stone-von Neumann theorem relates the irreducible unitary representations of the Heisenberg group $H_n$ to non-trivial unitary characters of its center $Z$, and plays a crucial role in the construction of the oscillator representation for the metaplectic group. We give two extensions of this result to non-unitary and non-irreducible representations, thereby obtaining an equivalence of categories between certain representations of $Z$ and those of $H_n$. Our first result is an algebraic equivalence, which can be regarded as a generalization of Kashiwara's lemma from the theory of $D$-modules. Our second result is a smooth equivalence, which involves the fundamental ideas of Ducloux on differentiable representations and smooth imprimitivity systems for Nash groups. We show how to extend the oscillator representation to the smooth setting and give an application to degenerate Whittaker models for representations of reductive groups. This is joint work with Raul Gomez and Dmitry Gourevitch.
Wednesday, Jul 14, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Dihua Jiang | Title: Certain Poisson Summation Formulae on GL(1) and Langlands Automorphic L-functions | Zoom: https://weizmann.zoom.us/j/98304397425
In Tate's famous thesis, harmonic analysis on GL(1) was used to establish the global functional equation for Hecke L-functions. In this talk, I will discuss possibilities to define more general Fourier operators and their associated Poisson summation formulae on GL(1), which are expected to be responsible for the global functional equation of general Langlands automorphic L-functions. This is a progress report of my joint work with Zhilin Luo.
Wednesday, Jun 30, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Yotam Hendel | Title: A number theoretic characterization of (FRS) morphisms: uniform estimates over finite rings of the form Z/p^kZ | Zoom: https://weizmann.zoom.us/j/98304397425
Let f:X->Y be a morphism between smooth algebraic varieties defined over the integers. We show its fibers satisfy an extension of the Lang-Weil bounds with respect to finite rings of the form Z/p^kZ uniformly in p, k and in the base point y if and only if f is flat and its fibers have rational singularities, a property abbreviated as (FRS). This characterization of (FRS) morphisms serves as a joint refinement of two results of Aizenbud and Avni; namely a similar characterization in the case of a single variety, and a characterization of (FRS) morphisms which is non-uniform in the prime p. Aizenbud and Avni's argument in the case of a variety breaks in the relative case due to bad behaviour of resolution of singularities in families with respect to taking points over Z and Z/p^kZ. To bypass this, we prove a key model theoretic statement on a certain satisfactory class of positive functions (formally non-negative motivic functions), which allows us to efficiently approximate suprema of such functions. Based on arXiv:2103.00282, joint with Raf Cluckers and Itay Glazer.
Wednesday, Jun 30, 2021, 11:00
Superalgebra Theory and Representations Seminar, Room 1
Speaker: Prof. Vera Serganova | Title: Defect subgroups for algebraic supergroups and super grassmannians | Link: https://www.math.bgu.ac.il/~entova/STARS.html
Monday, Jun 28, 2021, 11:15
Foundations of Computer Science Colloquium, Room 155
Speaker: Keren Censor-Hillel | Title: Distributed Subgraph Finding: Progress and Challenges
This talk surveys the exciting recent progress made in understanding the complexity of distributed subgraph finding problems. I will discuss techniques and highlight some intriguing open problems.
Monday, Jun 21, 2021, 11:15
Foundations of Computer Science Colloquium, Room 155
Speaker: Noga Ron-Zewi | Title: LDPC Codes Achieve List Decoding Capacity
We show that random Low-Density Parity Check (LDPC) codes achieve list decoding capacity with high probability. These are the first graph-based codes shown to have this property. This result follows from two other more general results that may be of independent interest: 1. A simple characterization of the threshold for when ‘local’ combinatorial properties are likely to be satisfied by a random subspace over a finite field. 2. Any ‘local’ property that is satisfied with high probability by a random linear code is also satisfied with high probability by a random LDPC code. Based on joint work with Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters.
Thursday, Jun 17, 2021, 12:15
Vision and AI
Speaker: Hila Levi | Title: Combining bottom-up and top-down computations for image interpretation | Zoom: https://weizmann.zoom.us/j/97396405783?pwd=MUVWaElqWjVVSmtsQmU5MG9RMkZaQT09

Visual scene understanding has traditionally focused on identifying objects in images, learning to predict their presence and spatial extent. However, understanding a visual scene often goes beyond recognizing individual objects. In my thesis (guided by Prof. Ullman), I mainly focused on developing a task-dependent network that uses processing instructions to guide the functionality of a shared network via an additional task input. In addition, I studied strategies for incorporating relational information into recognition pipelines to efficiently extract structures of interest from the scene. In the scope of high-level scene understanding, which might be dominated by recognizing a rather small number of objects and relations, this task-dependent scheme naturally allows goal-directed scene interpretation, either in a single step or by sequential execution with a series of different top-down (TD) instructions. It simplifies the use of referring relations, grounds requested visual concepts back into the image plane, and improves combinatorial generalization, essential for AI systems, by using structured representations and computations. In the scope of multi-task learning, the scheme offers an alternative to the popular multi-branched architecture, which simultaneously executes all tasks using task-specific branches on top of a shared backbone; it addresses capacity limitations, increases task selectivity, and allows scalability and extension to further tasks. Results will be shown in various applications: object detection, visual grounding, property classification, human-object interactions, and general scene interpretation.
Works included:
1. H. Levi and S. Ullman. Efficient coarse-to-fine non-local module for the detection of small objects. BMVC, 2019. https://arxiv.org/abs/1811.12152
2. H. Levi and S. Ullman. Multi-task learning by a top-down control network. ICIP 2021. https://arxiv.org/abs/2002.03335
3. S. Ullman et al. Image interpretation by iterative bottom-up top-down processing. http://arxiv.org/abs/2105.05592
4. A. Arbelle et al. Detector-Free Weakly Supervised Grounding by Separation. https://arxiv.org/abs/2104.09829. Submitted to ICCV.
5. Ongoing work: Human Object Interactions
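As a toy illustration of the abstract's idea of steering a shared backbone with an additional task input, the sketch below modulates shared convolutional features with a learned, task-dependent scale and shift (FiLM-style conditioning). It is a generic sketch, not the architecture from the thesis, and all layer sizes and the number of tasks are made up.

```python
import torch
import torch.nn as nn

class TaskConditionedNet(nn.Module):
    """A shared backbone whose features are modulated by a task embedding."""

    def __init__(self, num_tasks, channels=32, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # The task input selects a per-channel scale and shift (FiLM-style).
        self.task_embed = nn.Embedding(num_tasks, 2 * channels)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, image, task_id):
        feat = self.backbone(image)                        # (B, C, H, W)
        gamma, beta = self.task_embed(task_id).chunk(2, dim=-1)
        feat = gamma[..., None, None] * feat + beta[..., None, None]
        return self.head(feat.mean(dim=(2, 3)))            # global pool + linear

# Usage: the same shared weights serve different tasks, selected at run time.
net = TaskConditionedNet(num_tasks=4)
x = torch.randn(2, 3, 64, 64)
print(net(x, torch.tensor([0, 3])).shape)   # torch.Size([2, 10])
```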

 

Wednesday, Jun 16, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar, Room 155
Speaker: Dor Mezer | Title: Multiplicity one theorems over positive characteristic | Zoom: https://weizmann.zoom.us/j/98304397425
We will discuss the recent proof of the strong Gelfand property over local fields of positive odd characteristic for the Gan-Gross-Prasad pairs. These pairs of groups include (GL(n), GL(n+1)), (O(n), O(n+1)), (U(n), U(n+1)), (SO(n), SO(n+1)), as well as Fourier-Jacobi pairs such as the symplectic group inside its semi-direct product with the corresponding Heisenberg group. The strong Gelfand property for these pairs has important consequences in the theory of automorphic representations, showing the uniqueness of Bessel models, of Rankin-Selberg models, and of Fourier-Jacobi models. These results were proven over characteristic 0 local fields (both archimedean and p-adic) 10-14 years ago, and extensively used since then. As in the characteristic zero case, we will use the method of Gelfand and Kazhdan, which reduces the theorems to statements on invariant distributions.
Monday, Jun 14, 2021, 11:15
Foundations of Computer Science Colloquium, Room 155
Speaker: Shay Moran | Title: What Is The Sample Complexity of Differentially Private Learning?
The increase in machine learning applications which involve private and personal data highlights the need for algorithms that handle the data *responsibly*. While this need has been successfully addressed by the field of differentially private machine learning, the cost of privacy remains poorly understood: How much data is needed for differentially private learning? How much more data does private learning require compared to learning without privacy constraints? We will survey some of the recent progress towards answering these questions in the distribution-free PAC model, including the Littlestone-dimension-based *qualitative* characterization and the relationship with online learning. If time allows, we will also discuss this question in more general (distribution- and data-dependent) learning models. Based on joint works with Noga Alon, Mark Bun, Roi Livni, and Maryanthe Malliaris
Thursday, Jun 10, 2021, 12:15
Vision and AI
Speaker: Niv Granot | Title: Drop the GAN: In Defense of Patches Nearest Neighbors as Single Image Generative Models | Zoom: https://weizmann.zoom.us/j/93324053107?pwd=VEtXUjR3Nk04VzgzZ202SHYrRTRidz09
Single image generative models perform synthesis and manipulation tasks by capturing the distribution of patches within a single image. The classical (pre deep learning) prevailing approaches for these tasks are based on an optimization process that maximizes patch similarity between the input and generated output. Recently, however, Single Image GANs were introduced, both as a superior solution for such manipulation tasks and for remarkable novel generative tasks. Despite their impressiveness, single image GANs require long training time (usually hours) for each image and each task. They often suffer from artifacts and are prone to optimization issues such as mode collapse. We show that all of these tasks can be performed without any training, within several seconds, in a unified, surprisingly simple framework. The "good-old" patch-based methods are revisited and cast into a novel optimization-free framework. This allows generating random novel images better and much faster than GANs. We further demonstrate a wide range of applications, such as image editing and reshuffling, retargeting to different sizes, structural analogies, image collage, and a newly introduced task of conditional inpainting. Not only is our method faster (by a factor of 10^3-10^4 compared to a GAN), it produces superior results (confirmed by quantitative and qualitative evaluation), fewer artifacts, and more realistic global structure than any of the previous approaches (whether GAN-based or classical patch-based).
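The sketch below illustrates the basic patch-nearest-neighbor step underlying such optimization-free synthesis: every patch of an initial guess (here simply a naive downscale of the source) is replaced by its nearest patch from the source image, and overlapping replacements are averaged back into an image. This is a bare-bones illustration of the principle, not the pipeline from the talk, which works coarse-to-fine and adds further key components.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def extract_patches(img, p):
    """All overlapping p x p patches of a grayscale image, flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def patch_nn_synthesis(source, init, p=7):
    """Replace each patch of `init` by its nearest patch in `source`, then
    average the overlapping replacements back into an output image."""
    src_patches = extract_patches(source, p)
    nn = NearestNeighbors(n_neighbors=1).fit(src_patches)
    out = np.zeros_like(init, dtype=float)
    weight = np.zeros_like(init, dtype=float)
    H, W = init.shape
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            query = init[i:i + p, j:j + p].ravel()[None, :]
            idx = nn.kneighbors(query, return_distance=False)[0, 0]
            out[i:i + p, j:j + p] += src_patches[idx].reshape(p, p)
            weight[i:i + p, j:j + p] += 1.0
    return out / weight

# Toy usage: "retarget" a random source image to half its size.
source = np.random.rand(64, 64)
init = source[::2, ::2]          # naive downscale as an initial guess
result = patch_nn_synthesis(source, init)
```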
Wednesday, Jun 09, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Marie-France Vignéras | Title: Representations of a reductive $p$-adic group $G$ over a field $C$ | Zoom: https://weizmann.zoom.us/j/98304397425
When $C$ is algebraically closed of characteristic different from $p$, for many groups $G$, a list of pairs $(J,\lambda)$, where $\lambda$ is a smooth $C$-representation of a compact modulo centre subgroup $J$ of $G$, has been produced such that any irreducible cuspidal $C$-representation of $G$ has the form $ind_J^G\lambda$, for a pair $(J, \lambda)$ unique up to conjugation. With Guy Henniart, we produced similar lists when $C$ is no longer assumed algebraically closed. Our other main result concerns supercuspidality. The notion of supercuspidality makes sense for the irreducible cuspidal $C$-representations of $G$, and also for the representations $\lambda$. In most cases we proved that $ind_J^G\lambda$ is supercuspidal if and only if $\lambda$ is supercuspidal.
Thursday, Jun 03, 2021, 12:15
Vision and AI
Speaker: Zongze Wu | Title: StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation | Zoom: https://weizmann.zoom.us/j/99824226974?pwd=WlZXL2ZrSWdaR05qaTVQQ1drS3JQdz09
We explore and analyze the latent style space of StyleGAN2, a state-of-the-art architecture for image generation, using models pretrained on several different datasets. We first show that StyleSpace, the space of channel-wise style parameters, is significantly more disentangled than the other intermediate latent spaces explored by previous works. Next, we describe a method for discovering a large collection of style channels, each of which is shown to control a distinct visual attribute in a highly localized and disentangled manner. Third, we propose a simple method for identifying style channels that control a specific attribute, using a pretrained classifier or a small number of example images. Manipulation of visual attributes via these StyleSpace controls is shown to be better disentangled than via those proposed in previous works. To show this, we make use of a newly proposed Attribute Dependency metric. Finally, we demonstrate the applicability of StyleSpace controls to the manipulation of real images. Our findings pave the way to semantically meaningful and well-disentangled image manipulations via simple and intuitive interfaces. This is a joint work with Dani Lischinski and Eli Shechtman. Paper (CVPR 2021 oral): https://arxiv.org/abs/2011.12799 | Code: https://github.com/betterze/StyleSpace | Video: https://www.youtube.com/watch?v=U7qRotRGr1w
Wednesday, Jun 02, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Dani Szpruch | Title: On the local coefficients matrix for covering groups and the restriction problem | Zoom: https://weizmann.zoom.us/j/98304397425 (Venue change: the seminar today will be by Zoom only!)
The local coefficients matrix is an analog of Shahidi's local coefficients, defined for covering groups in a setting where uniqueness of Whittaker model fails. After introducing this object we shall discuss in some detail our results for coverings of SL(2) and GL(2) and point out a difference between the results for these two groups. Then we shall show how this difference is explained by the nature of the restriction of genuine representations as well as by the subtle relations among certain arithmetic factors. Finally, as time permits, we shall discuss some implications of the restriction mentioned above. This is a joint project with Fan Gao and Freydoon Shahidi.
Monday, May 31, 2021, 11:15
Foundations of Computer Science Colloquium, Room 155
Speaker: Iftach Haitner | Title: A Tight Lower Bound on Adaptively Secure Full-Information Coin Flip
In a distributed coin-flipping protocol [Blum, ACM Transactions on Computer Systems '83], the parties try to output a common (close to) uniform bit, even when some adversarially chosen parties try to bias the common output. In an adaptively secure full-information coin flip [Ben-Or and Linial, FOCS '85], the parties communicate over a broadcast channel and a computationally unbounded adversary can choose which parties to corrupt during the protocol execution. Ben-Or and Linial proved that the n-party majority protocol is resilient to O(\sqrt{n}) corruptions (ignoring poly-logarithmic factors), and conjectured this is a tight upper bound for any n-party protocol (of any round complexity). Their conjecture was proved to be correct for a limited variant of single-turn (each party sends a single message), single-bit (a message is one bit) protocols [Lichtenstein, Linial, and Saks, Combinatorica '89], for symmetric protocols [Goldwasser, Kalai, and Park, ICALP '15], and recently for (arbitrary message length) single-turn protocols [Kalai, Komargodski, and Raz, DISC '18]. Yet, the question for many-turn (even single-bit) protocols was left completely open. In this work we close the above gap, proving that no n-party protocol (of any round complexity) is resilient to \omega(\sqrt{n}) (adaptive) corruptions. That is, up to polylog factors, the single-bit, single-message majority protocol is the optimal protocol against adaptive corruptions.
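For intuition on the majority protocol discussed above, the sketch below simulates the one-round n-party protocol in which every party broadcasts a random bit and the common output is the majority, together with a crude adaptive adversary that corrupts up to t parties by flipping honest 0-bits to 1 as they appear. It only illustrates the setting and how a budget of roughly \sqrt{n} corruptions already biases the outcome; it is not a reconstruction of the paper's lower-bound argument.

```python
import random

def majority_protocol(n, t, rng):
    """One-round full-information coin flip: each party broadcasts a bit and
    the output is the majority. A simple adaptive adversary watches the
    transcript and corrupts up to t parties, forcing their bits to 1."""
    bits, budget = [], t
    for _ in range(n):
        b = rng.randint(0, 1)
        if budget > 0 and b == 0:      # adaptive corruption toward output 1
            b, budget = 1, budget - 1
        bits.append(b)
    return int(sum(bits) > n / 2)

def bias(n, t, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(majority_protocol(n, t, rng) for _ in range(trials)) / trials

n = 401
for t in (0, int(n ** 0.5), 4 * int(n ** 0.5)):
    print(f"t = {t:3d}: Pr[output = 1] ~ {bias(n, t):.3f}")
```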
Monday, May 24, 2021, 11:15
Foundations of Computer Science Colloquium, Room 155
Speaker: Oded Goldreich | Title: Non-adaptive vs Adaptive Queries in the Dense Graph Testing Model (by Oded Goldreich and Avi Wigderson)
We study the relation between the query complexity of adaptive and non-adaptive testers in the dense graph model. It has been known for a couple of decades that the query complexity of non-adaptive testers is at most quadratic in the query complexity of adaptive testers. We show that this general result is essentially tight; that is, there exist graph properties for which any non-adaptive tester must have query complexity that is almost quadratic in the query complexity of the best general (i.e., adaptive) tester. More generally, for every $q:\mathbb{N}\to\mathbb{N}$ such that $q(n)\leq{\sqrt n}$ and constant $c\in[1,2]$, we show a graph property that is testable in $\Theta(q(n))$ queries, but its non-adaptive query complexity is $\Theta(q(n)^c)$, omitting $\mathrm{poly}(\log n)$ factors and ignoring the effect of the proximity parameter $\epsilon$. Furthermore, the upper bounds hold for one-sided error testers, and are at most quadratic in $1/\epsilon$. These results are obtained through the use of general reductions that transport properties of ordered structures (like bit strings) to those of unordered structures (like unlabeled graphs). The main features of these reductions are query-efficiency and preservation of distance to the properties. This method was initiated in our prior work (ECCC, TR20-149), and we significantly extend it here.
Wednesday, May 19, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Nadav Gropper | Title: Anabelian construction of (phi, Gamma)-modules | Zoom: https://weizmann.zoom.us/j/98304397425
Anabelian geometry asks how much one can say about a variety from its fundamental group. In 1997 Shinichi Mochizuki, using p-adic Hodge theory, proved a fundamental anabelian result for the case of p-adic fields. In my talk I will discuss representation-theoretic data which can be reconstructed from an absolute Galois group, and also types of representations that cannot be constructed solely from it. I will also sketch how these types of ideas can potentially give many new results about p-adic Galois representations.
Thursday, May 13, 2021, 12:15
Vision and AI
Speaker: Assaf Shocher | Title: Deep Internal Learning | Zoom: https://weizmann.zoom.us/my/assaf.shocher?pwd=K25wMVFLMzhYM2cxZXJsNTdyUGozUT09
Deep Learning has always been divided into two phases: Training and Inference. The common practice for Deep Learning is training big networks on huge datasets. While very successful, such networks are only applicable to the type of data they were trained for and require huge amounts of annotated data, which in many cases are not available. In my thesis (guided by Prof. Irani), I invented "Deep Internal Learning". Instead of learning to generally solve a task for all inputs, we perform "ad hoc" learning for a specific input. We train an image-specific network at test time, on the test input only, in an unsupervised manner (no label or ground truth). In this regime, training is actually a part of the inference; no additional data or prior training is taking place. I will demonstrate how we applied this framework to various challenges: Super-Resolution, Segmentation, Dehazing, Transparency-Separation, Watermark removal. I will also show how this approach can be incorporated into Generative Adversarial Networks by training a GAN on a single image. If time permits I will also cover some partially related works. Links to papers: http://www.wisdom.weizmann.ac.il/~vision/zssr http://www.wisdom.weizmann.ac.il/~vision/DoubleDIP http://www.wisdom.weizmann.ac.il/~vision/ingan http://www.wisdom.weizmann.ac.il/~vision/kernelgan https://semantic-pyramid.github.io/ https://arxiv.org/abs/2006.11120 https://arxiv.org/abs/2103.15545
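The PyTorch sketch below illustrates the core internal-learning loop for the super-resolution case: a small CNN is trained at test time on pairs generated from the single test image itself (the image and a blurred version of it obtained by downscaling and re-upscaling), and is then applied to the upscaled image. This is a minimal sketch of the idea, assuming simple bicubic downscaling; it omits the augmentations and kernel estimation of the actual works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn():
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )

def zero_shot_sr(image, scale=2, steps=300, lr=1e-3):
    """Train on pairs built from the test image only, then use the trained
    network to sharpen a bicubic upscaling of that same image."""
    net = small_cnn()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        # Internal training pair: the image is the target for its own
        # downscaled-then-reupscaled (i.e. detail-stripped) version.
        lr_img = F.interpolate(image, scale_factor=1 / scale, mode='bicubic',
                               align_corners=False)
        inp = F.interpolate(lr_img, size=image.shape[-2:], mode='bicubic',
                            align_corners=False)
        loss = F.l1_loss(net(inp) + inp, image)   # residual prediction
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Inference: treat the test image itself as the low-resolution input.
    up = F.interpolate(image, scale_factor=scale, mode='bicubic',
                       align_corners=False)
    with torch.no_grad():
        return net(up) + up

# Usage: image is a (1, 3, H, W) tensor with values in [0, 1].
```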
Wednesday, May 12, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Andrew Hendrickson | Title: The Burger-Sarnak Method and Operations on the Unitary Duals of Classical Groups | Zoom: https://weizmann.zoom.us/j/98304397425
The Burger-Sarnak method shows that the restriction of an automorphic representation of a reductive group to a reductive subgroup has automorphic support. Clozel has conjectured a qualitative refinement of this result, which was first verified and quantified in the GLn case by Venkatesh. In this talk I will describe my thesis which extended this result to classical groups.
Thursday, May 06, 2021, 12:15
Vision and AI (a joint Computer Vision & Machine Learning seminar)
Speaker: Nathan Srebro | Title: What, How and When can we Learn Adversarially Robustly? | Zoom: https://weizmann.zoom.us/j/97720109091?pwd=LzdheCt3VWpXRFFOSGJaa3lkc0JyZz09
In this talk we will discuss the problem of learning an adversarially robust predictor from clean training data. That is, learning a predictor that performs well not only on future test instances, but also when these instances are corrupted adversarially. There has been much empirical interest in this question, and in this talk we will take a theoretical perspective and see how it leads to practically relevant insights, including: the need to depart from an empirical (robust) risk minimization approach, and thinking of what kind of accesses and reductions can allow learning. Joint work with Omar Montasser and Steve Hanneke.
Wednesday, May 05, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Anne-Marie Aubert | Title: Tempered dual and crossed products for real and p-adic reductive groups | Zoom: https://weizmann.zoom.us/j/98304397425
Let G be a real or a p-adic connected reductive group. We will consider the connected components of the tempered dual of G. They are labelled by the G-conjugacy classes of pairs formed by a Levi subgroup M of G and the orbit of a discrete series representation of M under the group of unitary unramified characters of M. For real groups, Wassermann proved in 1987, by noncommutative-geometric methods, that each connected component has a simple geometric structure which encodes the reducibility of the corresponding parabolically induced representations. We will explain how one can recover his result. For p-adic groups, each connected component comes with a compact torus equipped with a finite group action, and an analogous result, that we will describe, holds true under a certain geometric assumption on the structure of stabilizers for that action. In the case when G is a quasi-split symplectic, special orthogonal or unitary group, it is possible to explicitly determine the connected components for which the geometric assumption is satisfied. It is a joint work with Alexandre Afgoustidis.
Thursday, Apr 29, 2021, 12:15
Vision and AI
Speaker: Raja Giryes | Title: Robustifying neural networks | Zoom: https://weizmann.zoom.us/j/98856110491?pwd=Sy9vRldXaFpqNFFnRzI1eXJLY1Fadz09
In this talk I will survey several techniques to make neural networks more robust. While neural networks achieve groundbreaking results in many applications, they depend strongly on the availability of good training data and the assumption that the data in the test time will resemble the one at train time. In this talk, we will survey different techniques that we developed for improving the network robustness and/or adapting it to the data at hand.
Wednesday, Apr 28, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Dan Ciubotaru | Title: Isolated unitary unramified representations and the Dirac inequality | Zoom: https://weizmann.zoom.us/j/98304397425
I will present several applications of the Dirac inequality to the determination of unitary representations of p-adic groups and associated "spectral gaps". The method works particularly well in order to attach irreducible unitary representations to the large nilpotent orbits (e.g., regular, subregular) in the Langlands dual complex Lie algebra. These results can be viewed as a p-adic analogue of Salamanca-Riba's classification of irreducible unitary (g,K)-modules with strongly regular infinitesimal character.
Thursday, Apr 22, 2021, 12:15
Vision and AI
Speaker: Adi Shamir | Title: A New Theory of Adversarial Examples in Machine Learning | Venue: Schmidt Hall and via Zoom: https://weizmann.zoom.us/j/97445179862?pwd=VGVTdzNVU2VnTU1USmdtemNiUEtNUT09
Note: attendance is limited to 50 people under the 'green and purple badges'. Covid-19 instructions for Schmidt Hall: (1) only vaccinated/recovered people can enter; (2) at least one empty chair is required between every two participants; (3) sitting is not allowed in the first two rows; (4) all participants should wear a mask (also during the lecture).
The extreme fragility of deep neural networks when presented with tiny perturbations in their inputs was independently discovered by several research groups in 2013. Due to their mysterious properties and major security implications, these adversarial examples had been studied extensively over the last eight years, but in spite of enormous effort they remained a baffling phenomenon with no clear explanation. In particular, it was not clear why a tiny distance away from almost any cat image there are images which are recognized with a very high level of confidence as cars, planes, frogs, horses, or any other desired class, why the adversarial modification which turns a cat into a car does not look like a car at all, and why a network which was adversarially trained with randomly permuted labels (so that it never saw any image which looks like a cat being called a cat) still recognizes most cat images as cats. The goal of this talk is to introduce a new theory of adversarial examples, which we call the Dimpled Manifold Model. It can easily explain in a simple and intuitive way why they exist and why they have all the bizarre properties mentioned above. In addition, it sheds new light on broader issues in machine learning such as what happens to deep neural networks during regular and during adversarial training. Experimental support for this theory, obtained jointly with Oriel Ben Shmuel and Odelia Melamed, will be presented and discussed in the last part of the talk.
Wednesday, Apr 21, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Manish Patnaik | Title: Regularized Eisenstein series on Loop Groups over Function fields | Zoom: https://weizmann.zoom.us/j/98304397425
Some time ago, A. Braverman and D. Kazhdan introduced a regularized Eisenstein series on a loop group over a function field and explained how its meromorphic continuation would yield information about automorphic L-functions on finite-dimensional groups (i.e. it would give an extension of Langlands-Shahidi to loop groups). We will give a (new?) construction of these regularized Eisenstein series and then explain what we know about their continuation. This is joint work in progress with S.D. Miller and H. Garland.
Thursday, Apr 08, 2021, 12:15
Vision and AI
Speaker: Shai Bagon | Title: Ultrasound. Lung. Deep Learning | Zoom: https://weizmann.zoom.us/j/92845638929?pwd=MGM0RWY4TTc3UVNUOHpFMWlLcms3Zz09
Lung ultrasound (LUS) is a cheap, safe and non-invasive imaging modality that can be performed at patient bed-side. However, to date LUS is not widely adopted due to the lack of trained personnel required for interpreting the acquired LUS frames. In this work we propose a framework for training deep artificial neural networks for interpreting LUS, which may promote its broader use. Our framework explicitly addresses the issue of incorporating domain-specific prior knowledge into DL models: we provide a deep neural network not only the raw LUS frames as input, but also explicitly inform it of important anatomical features and artifacts, in the form of additional channels containing pleural-line and vertical-artifact masks. By supplying this domain knowledge in this form, standard off-the-shelf neural networks can be rapidly and efficiently fine-tuned to perform various tasks well on LUS data, such as frame classification or semantic segmentation. Our framework allows for a unified treatment of LUS frames captured by either convex or linear probes. We evaluated our proposed framework on the task of COVID-19 severity assessment using the ICLUS dataset. In particular, we fine-tuned simple image classification models to predict a per-frame COVID-19 severity score. We also trained a semantic segmentation model to predict per-pixel COVID-19 severity annotations. Using the combined raw LUS frames and the detected lines for both tasks, our off-the-shelf models performed better than complicated models specifically designed for these tasks, exemplifying the efficacy of our framework.
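The sketch below shows one simple way to realize the idea of feeding domain knowledge as extra input channels: the raw grayscale frame is stacked with a pleural-line mask and a vertical-artifact mask, and the three channels are fed to a standard torchvision classifier. The random tensors, mask computation and the four severity classes are placeholders; this is an illustrative sketch, not the authors' pipeline.

```python
import torch
from torchvision.models import resnet18

# Standard off-the-shelf classifier; its 3 input channels are reinterpreted as
# channel 0: raw LUS frame, channel 1: pleural-line mask,
# channel 2: vertical-artifact mask. Output: 4 severity classes (placeholder).
model = resnet18(num_classes=4)

def build_input(frame, pleural_mask, artifact_mask):
    """Stack the raw frame with the two domain-knowledge masks, all (H, W)."""
    return torch.stack([frame, pleural_mask, artifact_mask], dim=0)

# Toy usage with random tensors standing in for a frame and its masks.
frame = torch.rand(224, 224)
pleural_mask = (torch.rand(224, 224) > 0.95).float()
artifact_mask = (torch.rand(224, 224) > 0.95).float()
x = build_input(frame, pleural_mask, artifact_mask).unsqueeze(0)  # (1, 3, 224, 224)
logits = model(x)
print(logits.shape)  # torch.Size([1, 4])
```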
Wednesday, Apr 07, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Siddhartha Sahi | Title: A reduction principle for Fourier coefficients of automorphic forms | Zoom: https://weizmann.zoom.us/j/98304397425
We consider a general class of Fourier coefficients for an automorphic form on a finite cover of a reductive adelic group G(A_K), associated to the data of a `Whittaker pair'. We describe a quasi-order on Fourier coefficients, and an algorithm that gives an explicit formula for any coefficient in terms of integrals and sums involving higher coefficients. The maximal elements for the quasi-order are `Levi-distinguished' Fourier coefficients, which correspond to taking the constant term along the unipotent radical of a parabolic subgroup, and then further taking a Fourier coefficient with respect to a K-distinguished nilpotent orbit in the Levi quotient. Thus one can express any Fourier coefficient, including the form itself, in terms of higher Levi-distinguished coefficients. In follow-up papers we use this result to determine explicit Fourier expansions of minimal and next-to-minimal automorphic forms on split simply-laced reductive groups, and to obtain Euler product decompositions of their top Fourier coefficients. This is joint work with Dmitry Gourevitch, Henrik P. A. Gustafsson, Axel Kleinschmidt, and Daniel Persson https://arxiv.org/abs/1811.05966
Wednesday, Mar 17, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Paul Nelson | Title: The orbit method, microlocal analysis and applications to L-functions | Zoom: https://weizmann.zoom.us/j/98304397425
I will describe how the orbit method can be developed in a quantitative form, along the lines of microlocal analysis, and applied to local problems in representation theory and global problems concerning automorphic forms. The local applications include asymptotic expansions of relative characters. The global applications include moment estimates and subconvex bounds for L-functions. These results are the subject of two papers, the first joint with Akshay Venkatesh: https://arxiv.org/abs/1805.07750 https://arxiv.org/abs/2012.0218
Wednesday, Mar 10, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: David Vogan | Title: Weyl group representations and Harish-Chandra cells | Zoom: https://weizmann.zoom.us/j/98304397425
Suppose g is a semisimple Lie algebra with Weyl group W. Write L(w) for the irreducible highest weight module of highest weight -w.rho - rho. Write J (for "Joseph") for the set of primitive ideals in a semisimple enveloping algebra contained in the augmentation ideal. In a 1978 paper "W-module structure in the primitive spectrum..." Joseph attached to each primitive ideal I in J a subset Lcell(I) = {w in W | Ann(L(w)) = I}. He showed also how to make Lcell(I) into a basis for a representation sigma(I) of W, in such a way that sum_{I in J} sigma(I) = regular representation of W. These representations sigma(I) are now called "left cell representations," terminology that is apparently due to Joseph (see his 1981 paper "Goldie rank in the enveloping algebra...III," page 310). Joseph proved in a 1980 paper that each left cell representation consists of exactly one copy of Joseph's "Goldie rank representation" for the primitive ideal I, and some additional representations that are not Goldie rank representations. For the past forty years, understanding of these left cell representations of W has been at the heart of a great deal of work on representations of reductive groups. Lusztig in his 1984 book gave a description of all left cells in terms of the geometry of nilpotent orbits. Part of Lusztig's description uses Springer's parametrization of W representations by irreducible representations of the equivariant fundamental group A(O) for a nilpotent orbit O. I will discuss the "opposite" part of Lusztig's description, involving conjugacy classes in A(O).
Wednesday, Mar 03, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Tian Fangyang | Title: Period Relations of Standard $L$-Functions of Symplectic Type | Zoom: https://weizmann.zoom.us/j/98304397425
A classical result of Euler says that the value of the Riemann-Zeta function at a positive even integer $2k$ is a rational multiple of $\pi^{2k}$. In the 1970s, a successive pioneering work of G. Shimura revealed the relation of different critical values of $L$-function that are attached to modular forms of $\mathrm{GL}_2$. This type of result, conjectured by D. Blasius for general linear groups, is called period relation of a certain automorphic $L$-function, which is closely related to a celebrated conjecture of P. Deligne. In this talk, I will discuss my work joint with Dihua Jiang and Binyong Sun on the period relation for the twisted standard L-function $L(s, \Pi\otimes\chi)$, where $\Pi$ is an irreducible cuspidal automorphic representation of $GL_{2n}(\mathbb{A})$ which is regular algebraic and of symplectic type. Along this talk, I will also discuss the key ingredient of this project - the existence of uniform cohomological test vector, which provides the most precise information on the archimedean local integrals.
Monday, Mar 01, 2021, 10:30
Foundations of Computer Science Colloquium
Speaker: David Wajc | Title: Rounding Dynamic Matchings Against an Adaptive Adversary | Zoom: https://weizmann.zoom.us/j/91391838878?pwd=Q2JpT29PMmthUTc5V01ScmZSUDEvZz09
Dynamic algorithms address dynamically-evolving datasets, and show how to maintain solutions to problems of interest subject to small updates to the input much faster than re-computing these solutions from scratch after each update. For example, the dynamic matching problem asks us to maintain an approximate maximum matching under edge updates (additions+deletions). For this well-studied problem we know numerous dynamic algorithms, the fastest of which are randomized. Unfortunately, until recently, all these fast randomized algorithms assumed that the updates to the input are generated in advance, rather than adaptively, based on the algorithm's random coin tosses. This assumption, referred to as the oblivious adversary assumption in the literature, rules out the black-box use of these algorithms for speeding up other (dynamic or static) algorithms. In this talk, I will present my recent framework for obtaining (essentially) the same randomized bounds previously known to be obtainable against an oblivious adversary, but this time against an adaptive adversary.
Wednesday, Feb 24, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Pierre-Henri Chaudouard | Title: Recent progress on the Gan-Gross-Prasad conjecture for unitary groups | Zoom: https://weizmann.zoom.us/j/98304397425
The talk is based on a joint work with Raphaël Beuzart-Plessis and Michal Zydor and on an ongoing joint work with Raphaël Beuzart-Plessis. The Gan-Gross-Prasad (GGP) conjecture relates the non-vanishing of some periods of cuspidal automorphic forms to that of central values of some related L-functions. In the case of unitary groups U(n)xU(n+1), a lot of progress has been made by a deep study of the Jacquet-Rallis trace formula. We will describe some recent results that can be obtained by a further analysis of the trace formula. Besides the "endoscopic cases" of the GGP conjecture, we get an extension to the periods of some Eisenstein series. We will also discuss applications we can get to the so-called Bessel periods of unitary groups.
Sunday, Feb 21, 2021, 09:00
Algebraic Geometry and Representation Theory Seminar
Speaker: Paul Nelson | Title: The orbit method, microlocal analysis and applications to L-functions | Zoom: https://weizmann.zoom.us/j/98304397425
I will describe how the orbit method can be developed in a quantitative form, along the lines of microlocal analysis, and applied to local problems in representation theory and global problems concerning automorphic forms. The local applications include asymptotic expansions of relative characters. The global applications include moment estimates and subconvex bounds for L-functions. These results are the subject of two papers, the first joint with Akshay Venkatesh.
Wednesday, Feb 17, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Eyal Kaplan | Title: The generalized doubling method, multiplicity one and applications | Zoom: https://weizmann.zoom.us/j/98304397425
The doubling method of Piatetski-Shapiro and Rallis pioneered the study of integral representations of automorphic L-functions, for cuspidal representations (generic or otherwise) of classical groups. Originating in the calculation of the Petersson inner product of the theta lift, this method has been successfully applied within the theory of the theta correspondence, and had numerous additional applications to the theory of L-functions and to arithmetic problems. Recently, the doubling method has been generalized in several aspects with interesting applications to global functoriality, automorphic descent and the study of representations of covering groups. In this talk I will survey the different components of the generalized doubling method, focusing on one of the fundamental results: Local multiplicity one, obtained recently in a joint work with Dima and Rami. I will also describe a new GL(c) x GL(k) doubling type integral which interpolates between the integrals of Godement and Jacquet and the integrals of Jacquet, Piatetski-Shapiro and Shalika. This integral was used in order to obtain certain poles within the doubling construction. Parts of the talk are also based on a collaboration with Cai, Friedberg and Ginzburg.
Wednesday, Feb 10, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Dmitry Timashev | Title: Real orbits in spherical homogeneous spaces | Zoom: https://weizmann.zoom.us/j/98304397425
The talk is based on a joint work in progress with Stéphanie Cupit-Foutou. Given a homogeneous variety X for a complex algebraic group G defined over real numbers, the real Lie group G(R) usually acts non-transitively (but with finitely many orbits) on the real locus X(R). A natural problem, to which many classification problems in algebra and geometry reduce, is to describe the orbits of G(R) on X(R). We address this problem for spherical homogeneous spaces, G being a connected reductive group. In this talk I'll concentrate on two cases: (1) X is a symmetric space; (2) G is split over R. The answer is similar in both cases: the G(R)-orbits are classified by the orbits of a finite group, which coincides or is closely related to the so-called "little Weyl group" W_X, acting in a fancy way on the set of orbits of T(R) in Z(R), where T is a maximal torus in G and Z is a "Brion-Luna-Vust slice" in X. The latter orbit set can be described combinatorially. We use different tools: Galois cohomology in (1) and action of minimal parabolic subgroups on Borel orbits together with Knop's theory of polarized cotangent bundle in (2). We expect that the second approach can be extended to the non-split case.
Wednesday, Feb 03, 2021, 16:30
Algebraic Geometry and Representation Theory Seminar
Speaker: Jeffrey Adams | Title: What Atlas can do
The Atlas of Lie groups and representations is a project to use computational methods in representation theory, and in particular to compute the unitary dual. Our algorithms are implemented in the atlas software (www.liegroups.org). Besides computing unitary representations, the software is useful in performing a wide variety of calculations in Lie theory. I will give some examples of what the software can do. Examples include: geometry of the action of symmetric subgroups on the flag variety; the poset of equal rank subgroups of a reductive group; fine structure of centralizers of nilpotent elements; and computation of all unipotent representations of real exceptional groups.
ThursdayJan 28, 202112:15
Vision and AI
Speaker:Shai Avidan Title:Learning to SampleAbstract:opens in new windowin html    pdfopens in new window https://weizmann.zoom.us/j/91363737177?pwd=bEdNSU9tcVRCQXhjaDRPSXIyWCswUT09

There is a growing number of tasks that work directly on point clouds. As the size of the point cloud grows, so do the computational demands of these tasks. A possible solution is to sample the point cloud first. Classic sampling approaches, such as farthest point sampling (FPS), do not consider the downstream task. A recent work showed that learning a task-specific sampling can improve results significantly. However, the proposed technique did not deal with the non-differentiability of the sampling operation and offered a workaround instead. We introduce a novel differentiable relaxation for point cloud sampling that approximates sampled points as a mixture of points in the primary input cloud. Our approximation scheme leads to consistently good results on classification and geometry reconstruction applications. We also show that the proposed sampling method can be used as a front end to a point cloud registration network. This is a challenging task since sampling must be consistent across two different point clouds for a shared downstream task. In all cases, our approach outperforms existing non-learned and learned sampling alternatives.
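As a rough illustration of the "mixture of points" relaxation, here is a minimal NumPy sketch (not the speaker's exact method; the temperature parameter and the use of squared Euclidean distances are my assumptions):

```python
import numpy as np

def soft_sample(query_points, cloud, temperature=0.1):
    """Differentiable surrogate for sampling: each query point is replaced by a
    convex combination of the original cloud's points, with weights given by a
    softmax over negative squared distances.

    query_points: (m, 3) candidate sample locations
    cloud:        (n, 3) original point cloud
    returns:      (m, 3) "soft" sampled points lying in the convex hull of cloud
    """
    # Pairwise squared distances between queries and cloud points: shape (m, n)
    d2 = ((query_points[:, None, :] - cloud[None, :, :]) ** 2).sum(-1)
    # Soft assignment weights; lower temperature -> closer to hard nearest-neighbor picking
    w = np.exp(-d2 / temperature)
    w /= w.sum(axis=1, keepdims=True)
    # Each output point is a mixture (weighted average) of input points
    return w @ cloud

# Toy usage: relax 32 candidate samples against a cloud of 1024 points
cloud = np.random.rand(1024, 3)
queries = cloud[np.random.choice(1024, 32, replace=False)]
print(soft_sample(queries, cloud).shape)  # (32, 3)
```

Because the output is a smooth function of the inputs, gradients from a downstream task can flow back through the sampling step.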

Based on the work of: Itai Lang, Oren Dovrat, and Asaf Manor

 

 https://weizmann.zoom.us/j/91363737177?pwd=bEdNSU9tcVRCQXhjaDRPSXIyWCswUT09

WednesdayJan 27, 202118:30
Superalgebra Theory and Representations Seminar
Speaker:Volodymyr MazorchukTitle:Bigrassmannian permutations and Verma modulesAbstract:opens in new windowin html    pdfopens in new windowhttps://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

In this talk I will describe how bigrassmannian permutations control the socle of the cokernel of
embeddings of Verma modules for sl_n. An application of this is a description of the socle of the cokernel of homomorphisms between Verma modules for the periplectic Lie superalgebra. This is based on two joint works: one with Hankyung Ko and Rafael Mrden and another one with Chih-Whi Chen.

 

https://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

Meeting ID: 881 8925 8443

Passcode: 40320

WednesdayJan 27, 202114:00
Machine Learning and Statistics Seminar
Speaker:Haggai Maron Title:Deep Learning on Structured and Geometric DataAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/98859805667?pwd=dCthazViNFM1Y2FFeCtIcitORUR3Zz09

Deep Learning of structured and geometric data, such as sets, graphs, and surfaces, is a prominent research direction that has received considerable attention in the last few years. Given a learning task that involves structured data, the main challenge is identifying suitable neural network architectures and understanding their theoretical and practical tradeoffs.

This talk will focus on a popular learning setup where the learning task is invariant to a group of transformations of the input data. This setup is relevant to many popular learning tasks and data types. In the first part of the talk, I will present a general framework for designing neural network architectures based on layers that respect these transformations. In particular, I will show that these layers can be implemented using parameter-sharing schemes induced by the group. In the second part of the talk, I will demonstrate the framework’s applicability by presenting novel neural network architectures for two widely used data types: graphs and sets. I will also show that these architectures have desirable theoretical properties and that they perform well in practice.
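To make the parameter-sharing idea concrete, here is a minimal NumPy sketch of a permutation-equivariant layer on sets in the DeepSets style (only two weight matrices, shared across set elements; this illustrates the general principle rather than the specific architectures of the talk):

```python
import numpy as np

def equivariant_layer(X, W1, W2, b):
    """Permutation-equivariant linear layer on a set X of shape (n, d_in).
    Output row i depends on x_i and on the mean over the set, so permuting the
    rows of X permutes the rows of the output in the same way.
    """
    mean = X.mean(axis=0, keepdims=True)          # (1, d_in), permutation-invariant summary
    return np.maximum(X @ W1 + mean @ W2 + b, 0)  # ReLU nonlinearity

def invariant_readout(H):
    """Sum-pooling makes the final representation invariant to the ordering."""
    return H.sum(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                       # a set of 5 elements with 8 features each
W1, W2, b = rng.normal(size=(8, 16)), rng.normal(size=(8, 16)), np.zeros(16)
out1 = invariant_readout(equivariant_layer(X, W1, W2, b))
out2 = invariant_readout(equivariant_layer(X[::-1], W1, W2, b))  # permuted input
print(np.allclose(out1, out2))  # True: the readout is invariant to the permutation
```

The parameter sharing (the same W1, W2 applied to every element) is exactly what the group action induces in this simple setting.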

 

https://weizmann.zoom.us/j/98859805667?pwd=dCthazViNFM1Y2FFeCtIcitORUR3Zz09
 

TuesdayJan 26, 202116:00
Faculty Seminar
Speaker:Yuval MoskovitchTitle:Towards Reliable Data-Driven ComputationsAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/97604026721?pwd=SVZnd3NsQXBqNHFhOU04bTFNQVRhQT09

Data-driven methods are increasingly being used in domains such as fraud and risk detection, where data-driven algorithmic decision making may affect human life.
The growing impact of data and data-driven systems on society makes it important that people be able to trust analytical results obtained from data-driven computations.
This can be done in two complementary ways: by providing result explanations so that the user understands the computation and the basis for the observed results; and by profiling and monitoring the data used in the computation, to make the results more reliable in the first place.

In the first part of the talk, I will present the use of provenance -- information regarding the data origin and computational process -- for providing explanations of computational results. In the second part of the talk, I will present a method for data profiling using labels, as an example of a data-focused technique to facilitate an analyst building a reliable decision-making pipeline.

 

https://weizmann.zoom.us/j/97604026721?pwd=SVZnd3NsQXBqNHFhOU04bTFNQVRhQT09

 

ThursdayJan 21, 202118:00
Vision and AI
Speaker:Or LitanyTitle:Learning on Pointclouds for 3D Scene UnderstandingAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/93960308443?pwd=b1BWMkZURjZ0THMxUisrSFpMaHNXdz09

In this talk I'll cover several works on 3D deep learning on point clouds for scene understanding tasks.
First, I'll describe VoteNet (ICCV 2019, best paper nomination): a method for object detection from 3D point cloud input, inspired by the classical generalized Hough voting technique. I'll then explain how we integrated image information into the voting scheme to further boost 3D detection (ImVoteNet, CVPR 2020). In the second part of my talk I'll describe recent studies focusing on reducing supervision for 3D scene understanding tasks, including PointContrast -- a self-supervised representation learning framework for 3D point clouds (ECCV 2020). Our findings in PointContrast are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvement over recent best results in segmentation and detection across 6 different benchmarks for indoor and outdoor, real and synthetic datasets -- demonstrating that the learned representation can generalize across domains.
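A minimal sketch of the kind of point-level contrastive objective such pre-training builds on (an InfoNCE-style loss over matched points from two views; the temperature and plain dot-product similarity are assumptions, not the paper's exact choices):

```python
import numpy as np

def info_nce(feats_a, feats_b, temperature=0.07):
    """Point-level contrastive loss: row i of feats_a and row i of feats_b are
    features of the *same* physical point seen in two views (positives);
    all other rows act as negatives.
    feats_a, feats_b: (n, d) L2-normalized feature arrays.
    """
    logits = feats_a @ feats_b.T / temperature          # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                  # cross-entropy with the diagonal as targets

rng = np.random.default_rng(0)
f = rng.normal(size=(128, 32))
f /= np.linalg.norm(f, axis=1, keepdims=True)
noisy = f + 0.05 * rng.normal(size=f.shape)              # stand-in for the second view
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(info_nce(f, noisy))                                # small loss: matched points are most similar
```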

 

https://weizmann.zoom.us/j/93960308443?pwd=b1BWMkZURjZ0THMxUisrSFpMaHNXdz09

ThursdayJan 21, 202118:00
Machine Learning and Statistics Seminar
Speaker:Or Litany Title:Learning on Pointclouds for 3D Scene UnderstandingAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/93960308443?pwd=b1BWMkZURjZ0THMxUisrSFpMaHNXdz09

In this talk I'll cover several works on 3D deep learning on point clouds for scene understanding tasks.
First, I'll describe VoteNet (ICCV 2019, best paper nomination): a method for object detection from 3D point cloud input, inspired by the classical generalized Hough voting technique. I'll then explain how we integrated image information into the voting scheme to further boost 3D detection (ImVoteNet, CVPR 2020). In the second part of my talk I'll describe recent studies focusing on reducing supervision for 3D scene understanding tasks, including PointContrast -- a self-supervised representation learning framework for 3D point clouds (ECCV 2020). Our findings in PointContrast are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvement over recent best results in segmentation and detection across 6 different benchmarks for indoor and outdoor, real and synthetic datasets -- demonstrating that the learned representation can generalize across domains.

 

 

https://weizmann.zoom.us/j/93960308443?pwd=b1BWMkZURjZ0THMxUisrSFpMaHNXdz09

WednesdayJan 20, 202116:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Li Huajie Title:An infinitesimal variant of Guo-Jacquet trace formulae and its comparisonAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

This talk is based on my thesis supervised by P.-H. Chaudouard. The conjecture of Guo-Jacquet is a promising generalization to higher dimensions of Waldspurger’s well-known theorem on the relation between toric periods and central values of automorphic L-functions for GL(2). However, we are faced with divergent integrals when applying the relative trace formula approach. In this talk, we study an infinitesimal variant of this problem. Concretely, we establish global and local trace formulae for infinitesimal symmetric spaces of Guo-Jacquet. To compare regular semi-simple terms, we present the weighted fundamental lemma and certain identities between Fourier transforms of local weighted orbital integrals.

 

HTTPS://WEIZMANN.ZOOM.US/J/98304397425

MondayJan 18, 202116:00
Foundations of Computer Science Colloquium
Speaker:Kira GoldnerTitle:Mechanism Design for a Matching Platform: On Multi-Dimensional Gains from Trade MaximizationAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91630184141?pwd=djRXWjBVUFZRMCsvM3h6T1hFbTZrdz09

Consider the algorithm designer for a two-sided market with buyers and sellers, such as Airbnb or Uber. The platform's goal is to design a mechanism that optimizes the "gains from trade": the aggregate gain in the market due to the trades (e.g. in items, labor) that have occurred.
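In symbols, with buyer values v and seller costs c (notation mine, for concreteness; the talk's model has one multi-dimensional buyer and single-dimensional sellers):

$$ \mathrm{GFT}(T) \;=\; \sum_{(i,j)\in T}\bigl(v_i(j) - c_j\bigr), $$

the total surplus created by a set of trades T in which buyer i receives item j, valued at v_i(j), from its seller with cost c_j; the mechanism aims to approximate \max_T \mathrm{GFT}(T) subject to incentive constraints.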

In this talk, I will present the first guarantees on gains from trade for a market that contains a multi-dimensional buyer, who is interested in multiple different kinds of items, and single-dimensional sellers, who each are only interested in selling the single item that they own. We present a logarithmic approximation to the optimal gains from trade in a parameter of the market. We also provide a logarithmic approximation in the number of sellers to the constrained-optimal gains from trade: optimal subject to standard truthfulness constraints. Our techniques come from a variety of domains, including online algorithms and revenue maximization.

Joint work with Yang Cai, Steven Ma, and Mingfei Zhao

 

 https://weizmann.zoom.us/j/91630184141?pwd=djRXWjBVUFZRMCsvM3h6T1hFbTZrdz09

ThursdayJan 14, 202112:15
Vision and AI
Speaker:Yoav Shechtman Title:Next generation localization microscopy - or - how and why to ruin a perfectly good microscopeAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/95427957584?pwd=QWhYMURYU3UxaE43TzhEOFJyV2J2UT09

In localization microscopy, the positions of individual nanoscale point emitters (e.g. fluorescent molecules) are determined at high precision from their point-spread functions (PSFs). This enables highly precise single/multiple-particle-tracking, as well as super-resolution microscopy, namely single molecule localization microscopy (SMLM). To obtain 3D localization, we employ PSF engineering – namely, we physically modify the standard PSF of the microscope, to encode the depth position of the emitter.  In this talk I will describe how this method enables unprecedented capabilities in localization microscopy; specific applications include dense emitter fitting for super-resolution microscopy, multicolor imaging from grayscale data, volumetric multi-particle tracking/imaging, dynamic surface profiling, and high-throughput in-flow colocalization in live cells. We often combine the optical encoding method with neural nets (deep-learning) for decoding, i.e. image reconstruction; however, our use of neural nets is not limited to image processing - we use nets to design the optimal optical acquisition system in a task-specific manner.
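Schematically, each acquired frame in such a system can be modeled as (a standard single-molecule imaging forward model, written here for orientation; the additive background and noise terms are my simplification):

$$ I(\mathbf{x}) \;=\; \sum_{i} b_i\,\mathrm{PSF}\bigl(\mathbf{x}-\mathbf{x}_i;\,z_i\bigr) \;+\; \beta(\mathbf{x}) \;+\; \text{noise}, $$

where the engineered PSF changes shape with the emitter depth z_i, so localizing emitter i amounts to estimating (x_i, z_i, b_i) from the recorded image.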

 

https://weizmann.zoom.us/j/95427957584?pwd=QWhYMURYU3UxaE43TzhEOFJyV2J2UT09

ThursdayJan 14, 202111:00
Faculty Seminar
Speaker:Hila Peleg Title:Next Generation Programming with Program SynthesisAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/97604026721?pwd=SVZnd3NsQXBqNHFhOU04bTFNQVRhQT09

Program synthesis is the problem of generating a program to satisfy a specification of user intent. Since these specifications are usually partial, this means searching a space of candidate programs for one that exhibits the desired behavior. The lion's share of the work on program synthesis focuses on new ways to perform the search, but hardly any of this research effort has found its way into the hands of users.

We wish to use synthesis to augment the programming process, leveraging both optimized search algorithms and concepts that are part of the programmer's life such as code review and read-eval-print loops (REPL). This talk describes three synthesis-based techniques that bring program synthesis into the development workflow.

A major concern in designing for the user is that it can put the interface of the synthesizer at odds with state of the art synthesis techniques. Synthesis is, at best, a computationally hard problem, and any changes made to make the tool more usable can interfere with the synthesizer and its internals. We therefore demonstrate the process of bringing synthesis theory into practice when tool design also requires an algorithm re-design.

https://weizmann.zoom.us/j/97604026721?pwd=SVZnd3NsQXBqNHFhOU04bTFNQVRhQT09

WednesdayJan 13, 202116:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Binyong Sun Title:Low degree cohomologies of congruence groupsAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

We determine certain low degree cohomologies of locally symmetric spaces, using representation theory. Basic theory of continuous cohomologies will be reviewed. This is a joint work with Jian-Shu Li

 

HTTPS://WEIZMANN.ZOOM.US/J/98304397425

 

WednesdayJan 13, 202115:15
Machine Learning and Statistics Seminar
Speaker:Barak Sober Title:Estimation of Manifolds from Point Clouds: Building Models from Data Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/95680998523?pwd=N2tRL1dydlV3VE9uOUVTWmU3UGN1QT09

A common observation in data-driven applications is that high dimensional data has a low intrinsic dimension, at least locally. Thus, when one wishes to work with data that is not governed by a clear set of equations, but still wishes to perform statistical or other scientific analysis, one possible model is to assume an underlying manifold from which the data was sampled. This manifold, however, is not given explicitly; we can only obtain samples of it (i.e., the individual data points). In this talk, we will consider the mathematical problem of estimating manifolds from a finite set of samples, possibly recorded with noise. Using a Manifold Moving Least-Squares approach we provide an approximant manifold with high approximation order in both the Hausdorff sense as well as in the Riemannian metric (i.e., a near-isometry). In the case of bounded noise, we present an algorithm that is guaranteed to converge in probability when the number of samples tends to infinity. The motivation for this work is based on the analysis of the evolution of shapes through time (e.g., handwriting or primates' teeth) and we will show how this framework can be utilized to answer scientific questions in paleontology and archaeology.
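As a toy illustration of projecting noisy samples back toward an underlying manifold, here is a plain local-PCA tangent-plane projection (a generic surrogate, not the speaker's Manifold Moving Least-Squares procedure; the neighborhood size and intrinsic dimension are assumed known):

```python
import numpy as np

def project_to_local_tangent(x, samples, k=20, intrinsic_dim=2):
    """Denoise a query point x by projecting it onto the affine tangent plane
    estimated from its k nearest noisy samples via local PCA.
    samples: (n, D) noisy points assumed to lie near a d-dimensional manifold.
    """
    d2 = ((samples - x) ** 2).sum(axis=1)
    nbrs = samples[np.argsort(d2)[:k]]            # k nearest neighbors of x
    center = nbrs.mean(axis=0)
    # Principal directions of the neighborhood approximate the tangent space
    _, _, Vt = np.linalg.svd(nbrs - center, full_matrices=False)
    T = Vt[:intrinsic_dim]                        # (d, D) orthonormal tangent basis
    return center + (x - center) @ T.T @ T        # orthogonal projection onto the plane

# Noisy samples from the unit sphere (a 2D manifold) embedded in R^3
rng = np.random.default_rng(1)
u = rng.normal(size=(2000, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
noisy = u + 0.02 * rng.normal(size=u.shape)
x = noisy[0]
print(np.linalg.norm(project_to_local_tangent(x, noisy)))  # close to 1 (the sphere radius)
```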

 

https://weizmann.zoom.us/j/95680998523?pwd=N2tRL1dydlV3VE9uOUVTWmU3UGN1QT09

ThursdayJan 07, 202112:15
Vision and AI
Speaker:Guy Gaziv Title:Decoding visual experience from brain activity Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/94895425759?pwd=RTh3VkMyamJOay96N3hDcWg0eFpqUT09
Deep Learning introduced powerful research tools for studying visual representation in the human brain. Here, we harnessed those tools for two branches of research:

1. The primary branch focuses on brain decoding: reconstructing and semantically classifying observed natural images from novel (unknown) fMRI brain recordings. This is a very difficult task due to the scarce supervised "paired" training examples (images with their corresponding fMRI recordings) that are available, even in the largest image-fMRI datasets. We present a self-supervised deep learning approach that overcomes this barrier. This is obtained by enriching the scarce paired training data with additional easily accessible "unpaired" data from both domains (i.e., images without fMRI, and fMRI without images). Our approach achieves state-of-the-art results in image reconstruction from fMRI responses, as well as unprecedented large-scale (1000-way) semantic classification of never-before-seen classes.

2. The secondary branch of research focuses on face representation in the human brain. We studied whether the unique structure of the face-space geometry, which is defined by pairwise similarities in activation patterns to different face images, constitutes a critical aspect in face perception. To test this, we compared the pairwise similarity between responses to face images of human-brain and of artificial Deep Convolutional Neural Networks (DCNN) that achieve human-level face recognition performance. Our results revealed a stark match between neural and intermediate DCNN layers' face-spaces. Our findings support the importance of face-space geometry in enabling face perception as well as a pictorial function of high-order face-selective regions of the human visual cortex.

https://weizmann.zoom.us/j/94895425759?pwd=RTh3VkMyamJOay96N3hDcWg0eFpqUT09
WednesdayJan 06, 202118:30
Superalgebra Theory and Representations Seminar
Speaker:Shifra ReifTitle:Grothendieck rings for queer Lie superalgebrasAbstract:opens in new windowin html    pdfopens in new windowhttps://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09
The abstract can be found at the following seminar website:
 
https://www.math.bgu.ac.il/~entova/STARS.html
ThursdayDec 31, 202012:15
Vision and AI
Speaker:Yedid HoshenTitle:Demystifying Unsupervised Image Translation through the Lens of Attribute Disentanglement Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/93320818883?pwd=VE1STDVrS3BPMThHenlqZXhxbDg3dz09

Recent approaches for unsupervised image translation are strongly reliant on generative adversarial training and ad-hoc architectural locality constraints. Despite their appealing results, it can be easily observed that the learned class and content representations are entangled, which often hurts the translation performance. We analyse this task under the framework of image disentanglement into meaningful attributes. We first analyse the simpler setting, where the domain of the image and its other attributes are independent. Using information arguments, we present a non-adversarial approach (LORD) with a carefully designed information bottleneck for class-content disentanglement. Our approach brings attention to several interesting and poorly explored phenomena, particularly the beneficial inductive biases of latent optimization and conditional generators, and it outperforms the top adversarial and non-adversarial class-content disentanglement methods (e.g., DrNet and MLVAE). By imposing further information constraints, we extend our approach to the standard unsupervised image translation task where the unknown image properties are dependent on the domain. Our full approach surpasses the top unsupervised image translation methods (e.g. FUNIT and StarGAN-v2).


https://weizmann.zoom.us/j/93320818883?pwd=VE1STDVrS3BPMThHenlqZXhxbDg3dz09

 

WednesdayDec 30, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Krishanu Roy Title:Pi-systems and closed systems in symmetrizable Kac-Moody algebrasAbstract:opens in new windowin html    pdfopens in new windowhttps://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

Dynkin introduced the notion of pi-systems in the process of classifying semi-simple subalgebras of complex semi-simple Lie algebras. In a complex simple Lie algebra, pi-systems turn out to be the simple systems of regular subalgebras. One can naturally generalize the notion of pi-systems to Kac-Moody algebras. In this talk, I will talk about the classification of maximal pi-systems of affine Kac-Moody algebras. If time permits, I will also talk about the Weyl orbits of certain interesting pi-systems.

 

https://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

 

WednesdayDec 23, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Alexander Sherman Title:Around the classification of semisimple algebraic supergroupsAbstract:opens in new windowin html    pdfopens in new windowhttps://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

We will discuss the classification of algebraic supergroups G for which their representation category Rep(G) is semisimple (working over an algebraically closed field of characteristic zero). The statement roughly says that OSp(1|2n) is the only 'truly super' algebraic supergroup with this property. We will discuss different proofs and related ideas, with the goal of understanding in some ways how non-semisimplicity expresses itself in a supergroup.

 

https://us02web.zoom.us/j/88189258443?pwd=S3JLcElXTUpadktqZ0VLWHNmVXdiQT09

WednesdayDec 23, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Shamgar Gurevich Title:Harmonic Analysis on GLn over Finite FieldsAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

There are many formulas that express interesting properties of a finite group G in terms of sums over its characters. For estimating these sums, one of the most salient quantities to understand is the character ratio
Trace(p(g)) / dim(p),
for an irreducible representation p of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of the mentioned type for analyzing certain random walks on G.
Recently, we discovered that for classical groups G over finite fields there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank.
Rank suggests a new organization of representations based on the very few "small" ones. This stands in contrast to Harish-Chandra's philosophy of cusp forms, which is (since the 60s) the main organization principle, and is based on the (huge collection of) "Large" representations.
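The link between character ratios and random walks runs through the Diaconis-Shahshahani upper bound lemma; for the walk generated by a single conjugacy class C it reads (standard statement, recalled here for context):

$$ \bigl\|P_C^{*k}-U\bigr\|_{\mathrm{TV}}^{2} \;\le\; \frac{1}{4}\sum_{\rho\neq \mathbf 1}(\dim\rho)^{2}\,\Bigl|\frac{\chi_\rho(C)}{\dim\rho}\Bigr|^{2k}, $$

so the mixing time is governed by how quickly the ratios \chi_\rho(C)/\dim\rho decay over the nontrivial irreducible representations \rho.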

This talk will discuss the notion of rank for the group GLn over finite fields, demonstrate how it controls the character ratio, and explain how one can apply the results to verify mixing time and rate for random walks.
This is joint work with Roger Howe (Yale and Texas A&M). The numerics for this work were carried out with Steve Goldstein (Madison) and John Cannon (Sydney).


HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayDec 23, 202011:15
Machine Learning and Statistics Seminar
Speaker:Yossi ArjevaniTitle:Computational Barriers in Continuous Optimization: Two Complexity Theories and One Tale of SymmetryAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/93958126972?pwd=STg0b0JMUFI5eS9rVEd0QXpRY2V2UT09

Since the modern formulation of mathematical optimization, researchers in the field have repeatedly expanded and re-defined the realm of tractable optimization problems. This endeavor has culminated in the well-known class of convex optimization problems with applications in a wide range of scientific fields. In this talk, I will present the traditional oracle-based complexity model, which has dominated (unstructured) continuous optimization for the past 40 years, and highlight some of its successes and failures in predicting the hardness of convex optimization problems. I will then introduce a novel structural-based model aimed at addressing major oracle-based complexity gaps. The new approach is intimately tied with approximation theory, and is proven to be particularly advantageous for characterizing the complexity of optimization methods in machine learning.
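As one concrete output of the oracle model (a classical example, recalled here as background rather than as part of the talk's new results): for L-smooth convex minimization on \mathbb{R}^n, any method that only sees gradient-oracle answers must, on some worst-case instance and for k on the order of the dimension, satisfy

$$ f(x_k) - \min f \;\ge\; \Omega\!\Bigl(\frac{L\,\|x_0 - x^\star\|^2}{k^2}\Bigr), $$

and Nesterov's accelerated gradient method matches this rate up to constants, pinning down the oracle complexity of this class.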

Studies in recent years have indicated that the realm of tractable optimization problems may be expanded once more -- this time into that of the nonconvex world. In the second part of the talk, I will present a novel symmetry-based approach to study nonconvex optimization landscapes, and use it to address optimization-related aspects of neural networks. The approach employs techniques, new to the field, from symmetry breaking and representation theory, and carries important implications for one’s ability to argue about statistical generalization through local curvature.

https://weizmann.zoom.us/j/93958126972?pwd=STg0b0JMUFI5eS9rVEd0QXpRY2V2UT09

 

 

TuesdayDec 22, 202015:00
Machine Learning and Statistics Seminar
Speaker:Nadav DymTitle:On the injectivity and (in)stability of invariant encodingAbstract:opens in new windowin html    pdfopens in new window https://weizmann.zoom.us/j/93648867660?pwd=dVpDeHAyRk5IbGFOTWVvS3pTZnF3UT09

Quotient spaces are a natural mathematical tool to describe a variety of algorithmic problems in computer vision and related fields, where different objects are to be compared while their natural symmetries are to be ignored. The computational complications arising from these symmetries can be alleviated by mapping the given object to features invariant to the object’s symmetries. But can this be done without losing information? In this talk we will discuss this question for two different problems:

(a) Neural networks for point clouds: The natural symmetries of 3D point clouds are permutations and rigid motions. We will describe our recent work which provides a general framework for constructing such invariant networks to be lossless, in the sense that they can approximate any continuous invariant function. As a result, invariant networks such as Tensor Field Networks are shown to have universal approximation power.

(b) Phase retrieval: Phase retrieval is the problem of retrieving a signal, up to global phase, from linear measurements whose phase is lost. The phase ambiguity here can be considered as the symmetries of the problem, and the phaseless linear measurements as the invariants. We will discuss results on the injectivity and stability of phase retrieval, and particularly our results connecting phase retrieval stability to measures of graph connectivity.
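Concretely, the measurements in part (b) take the form (standard setup, stated here for orientation):

$$ y_i \;=\; \bigl|\langle a_i, x\rangle\bigr|, \qquad i=1,\dots,m, $$

and one asks when the signal x in \mathbb{C}^n (or \mathbb{R}^n) is determined by y_1,\dots,y_m up to a global phase x \mapsto e^{i\theta}x, and how stably small perturbations of the measurements propagate to the recovered signal.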

To conclude, we will point out some insights which can be obtained from viewing these two different problems in the same framework.

 

 https://weizmann.zoom.us/j/93648867660?pwd=dVpDeHAyRk5IbGFOTWVvS3pTZnF3UT09

ThursdayDec 17, 202012:15
Vision and AI
Speaker:Yuval AtzmonTitle:A causal view of compositional zero-shot recognitionAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/94138183037?pwd=amZIRXgzSElQaXozL0UxdlFCRmJuQT09

People easily recognize new visual categories that are new combinations of known components. This compositional generalization capacity is critical for learning in real-world domains like vision and language because the long tail of new combinations dominates the distribution. Unfortunately, learning systems struggle with compositional generalization because they often build on features that are correlated with class labels even if they are not "essential" for the class. This leads to consistent misclassification of samples from a new distribution, like new combinations of known components.

In this talk I will present our work on compositional zero-shot recognition: I will describe a causal approach that asks "which intervention caused the image?"; A probabilistic approach using semantic soft “and-or” side-information; And last, an approach that is simultaneously effective for both many-shot and zero-shot classes.

 

https://weizmann.zoom.us/j/94138183037?pwd=amZIRXgzSElQaXozL0UxdlFCRmJuQT09

 

WednesdayDec 16, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Fan GaoTitle:Theta representations and their wavefront setsAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

For a linear algebraic group, a theta representation is just a character of the group; for a central covering group, it is the Langlands quotient of a certain regular genuine principal series, and is (hypothetically) the lift of a character on a certain linear endoscopic group. In this talk, we will explain a conjectural formula on the leading wavefront set (and thus the Gelfand-Kirillov dimension) of a theta representation. We also discuss some evidence for its validity, including compatibility with works in the literature. The talk is based on a joint work with Wan-Yu Tsai.

 

HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayDec 16, 202010:00
Superalgebra Theory and Representations Seminar
Speaker:Kevin Coulembier Title:A semisimple extension of the Takiff superalgebra Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

In this talk, I will give an overview of the (finite dimensional) representation theory of a particular semisimple Lie superalgebra. In particular, I will explain character formulas, extension groups, block decompositions, invariant theory and Koszulity.

 

https://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

ThursdayDec 10, 202012:15
Vision and AI
Speaker:Tomer Michaeli Title:GANs: Origins, Effective Training, and SteeringAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/96344142954?pwd=RlhsVk10cWlMMm5WdGhtenRvZGQyQT09

Since their introduction by Goodfellow et al. in 2014, generative adversarial models seem to have completely transformed Computer Vision and Graphics. In this talk I will address three questions: (1) What did we do before the GAN era (and was it really that different)? (2) Is the way we train GANs in line with the theory (and can we do it better)? (3) How is information about object transformations encoded in a pre-trained generator?

I will start by showing that Contrastive Divergence (CD) learning (Hinton ‘02), the most widely used method for learning distributions before GANs, is in fact also an adversarial procedure. This settles a long standing debate regarding the objective that this method actually optimizes, which arose due to an unjustified approximation in the original derivation. Our observation explains CD’s great empirical success.

Going back to GANs, I will challenge the common practice for stabilizing training using spectral-normalization. Although theoretically motivated by the Wasserstein GAN formulation, I will show that this heuristic works for different reasons and can be significantly improved upon. Our improved approach leads to state-of-the-art results in many common tasks, including super-resolution and image-to-image-translation.

Finally, I will address the task of revealing meaningful directions in the latent space of a pre-trained GAN. I will show that such directions can be computed in closed form directly from the generator's weights, without the need for any training or optimization, as done in existing works. I will particularly discuss nonlinear trajectories that have natural endpoints and allow controlling whether one transformation is allowed to come at the expense of another (e.g. zoom-in with or without allowing translation to keep the object centered).
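To convey the flavor of closed-form directions, here is a generic weight-based recipe (principal right-singular vectors of the generator's first affine layer; this is an illustration in the spirit of such methods, not necessarily the speaker's construction, and it only yields linear directions rather than the nonlinear trajectories discussed above):

```python
import numpy as np

def closed_form_directions(W, num_directions=5):
    """Candidate semantic directions in the latent space, computed directly from
    the weight matrix W (d_out, d_latent) of the generator's first affine layer:
    the top right-singular vectors are the latent directions the layer amplifies
    the most, obtained with no training or optimization.
    """
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:num_directions]              # (num_directions, d_latent), orthonormal rows

# Hypothetical generator first layer: 512-dim latent -> 8192 features
rng = np.random.default_rng(0)
W = rng.normal(size=(8192, 512))
dirs = closed_form_directions(W)
z = rng.normal(size=512)
edited = z + 3.0 * dirs[0]                  # move the latent code along the top direction
print(dirs.shape, np.allclose(dirs @ dirs.T, np.eye(5)))
```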

* These are joint works with Omer Yair, Idan Kligvasser, Nurit Spingarn, and Ron Banner.

https://weizmann.zoom.us/j/96344142954?pwd=RlhsVk10cWlMMm5WdGhtenRvZGQyQT09

 

WednesdayDec 09, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Thorsten Heidersdorf Title:Indecomposable summands in tensor products Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

I will report on some progress to understand indecomposable summands in tensor products of irreducible representations of gl(m|n). I will focus on the gl(m|2) -case (m>1) which exhibits many features of the general case. The crucial tool is the Duflo-Serganova functor and some of its variants.

 

https://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

 

WednesdayDec 02, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Nikolay GrantcharovTitle:Finite-dimensional representation theory of the queer Lie superalgebra q(n) Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

We will first describe some classical theory of the queer Lie superalgebras q(n), such as Clifford modules, the (generic) character formula for the irreducible representations, and the classification of the blocks of finite-dimensional representations. Then we will focus our attention on q(3) and provide an explicit description of the Ext-quivers of the blocks. A proof of a "virtual" BGG reciprocity for q(n), which then gives the radical filtrations of indecomposable projective objects for q(3), will be provided.

 

https://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

WednesdayDec 02, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Vincent Secherre Title:Selfdual cuspidal representations of GL(r,D) and distinction by an inner involutionAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

Let n be a positive integer, F be a non-Archimedean locally compact field of odd residue characteristic p and G be an inner form of GL(2n,F). This is a group of the form GL(r,D) for a positive integer r and division F-algebra D of reduced degree d such that rd=2n. Let K be a quadratic extension of F in the algebra of matrices of size r with coefficients in D, and H be its centralizer in G. We study selfdual cuspidal representations of G and their distinction by H, that is, the existence of a nonzero H-invariant linear form on such representations, from the viewpoint of type theory. When F has characteristic 0, we characterize distinction by H for cuspidal representations of G in terms of their Langlands parameter, proving in this case a conjecture by Prasad and Takloo-Bighash.

 

 

HTTPS://WEIZMANN.ZOOM.US/J/98304397425

 

WednesdayNov 25, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Inna Entova-Aizenbud Title:Tensor categories and the superworldAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

I will give a series of introductory talks on tensor categories and how they give a natural setting for everything "super", such as Lie superalgebras and supergroups. No knowledge of category theory will be assumed - I will explain the categorical terms along the way.

 

https://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

 
 
 

 

 

 

 

WednesdayNov 25, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Nir AvniTitle:Equations in arithmetic groups and Model TheoryAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

I'll talk about a new dichotomy between arithmetic groups of rank one and arithmetic groups of rank bigger than one. Namely, whereas the category of definable sets in free groups or surface groups is simple, the category of definable sets in many higher rank arithmetic groups is as bad as it gets---it is equivalent to the category of definable sets over the natural numbers. One consequence of this phenomenon is that if G is such a higher rank arithmetic group, then there is an axiom---a first order statement---that holds for G but doesn't hold for any other finitely generated group.

 

 

HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayNov 18, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Inna Entova-Aizenbud Title:Tensor categories and the superworldAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

I will give a series of introductory talks on tensor categories and how they give a natural setting for everything "super", such as Lie superalgebras and supergroups. 
No knowledge of category theory will be assumed - I will explain the categorical terms along the way.

 

https://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09

WednesdayNov 18, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Zhilin Luo Title:A local trace formula for the local Gan-Gross-Prasad conjecture for special orthogonal groupsAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425
The local Gan-Gross-Prasad conjecture studies the restriction and branching problems for representations of classical and metaplectic groups. In this talk, I will talk about my proof of the tempered part of the local Gan-Gross-Prasad conjecture (multiplicity one in Vogan packets) for special orthogonal groups over any local field of characteristic zero, which combines the work of Waldspurger (for the tempered part of the conjecture for special orthogonal groups over $p$-adic fields) and Beuzart-Plessis (for the tempered part of the conjecture for unitary groups over the real field) in a non-trivial way. In the proof, an indispensable result, which is also of independent interest, is a formula expressing the regular nilpotent germs of quasi-split reductive Lie algebras over any local field of characteristic zero via endoscopic invariants, which was previously proved by Shelstad over $p$-adic fields. We also relate the formula to Kostant sections.

https://weizmann.zoom.us/j/98304397425
WednesdayNov 11, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Maria Gorelik Title:Introduction to Lie Superalgebras Abstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09
WednesdayNov 11, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Birgit Speh Title:The decomposition of discrete seriesAbstract:opens in new windowin html    pdfopens in new windowHTTPS://WEIZMANN.ZOOM.US/J/98304397425

B. Gross and D. Prasad first formulated their famous conjectures about the restriction of discrete series representations in their original paper (on the discrete series of an orthogonal group G = SO_n when restricted to an orthogonal subgroup G' = SO_{n-1}), published in 1992. In this lecture we consider representations \pi in the discrete spectrum of L^2(G/H), where G = SO(p,q) and H = G^\sigma for an involution \sigma, and their restriction to a subgroup G' = SO(p-1,q). I formulate similar conjectures relating the restriction to interlacing patterns, and in the last part I discuss the evidence supporting these conjectures.

 

 

HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayNov 04, 202018:30
Superalgebra Theory and Representations Seminar
Speaker:Maria Gorelik Title:Superalgebra Theory and Representations SeminarAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/91984558186?pwd=dzl2Z2hPRWtzaDF6R1pySXd4OGFIQT09
WednesdayNov 04, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:David Soudry Title:Double descent for classical groupsAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/98304397425
We consider the generalized doubling integrals of Cai, Friedberg, Ginzburg and Kaplan. These generalize the doubling method of Piatetski-Shapiro and Rallis and represent the standard L-function for pairs of irreducible, automorphic, cuspidal representations \pi - on a (split) classical group G, and \tau - on GL(n). The representation \pi need not have any particular model (such as a Whittaker model, or a Bessel model). These integrals suggest an explicit descent map (an inverse to the Langlands functorial lift) from GL(n) to G (for an appropriate G). I will show that a certain Fourier coefficient applied to a residual Eisenstein series, induced from a Speh representation corresponding to a self-dual \tau, is equal to the direct sum of irreducible cuspidal representations \sigma \otimes \sigma' on G x G, where \sigma runs over all irreducible cuspidal representations which lift to \tau (\sigma' is the complex conjugate of an outer conjugation of \sigma). This is a joint work with David Ginzburg.

https://weizmann.zoom.us/j/98304397425
WednesdayOct 28, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Lei Zhang Title:Bessel-Fourier coefficients and Twisted Automorphic DescentAbstract:opens in new windowin html    pdfopens in new window https://weizmann.zoom.us/j/98304397425

In this talk, we will discuss the theory of twisted automorphic descents, which is an extension of the automorphic descent of Ginzburg-Rallis-Soudry.

The main goal is to construct cuspidal automorphic modules in the generic global Arthur packets by using Bessel-Fourier coefficients of automorphic representations.

Moreover, we will discuss some applications and problems related to Bessel-Fourier coefficients.

This is a joint work with Dihua Jiang.

 

 https://weizmann.zoom.us/j/98304397425

Please note that Israel has switched to winter time. The seminar will be 16:30 Israeli time.

WednesdayOct 21, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Freydoon ShahidiTitle:On multiplicativity of gamma-factors and Fourier transforms via Braverman-Kazhdan programAbstract:opens in new windowin html    pdfopens in new window https://weizmann.zoom.us/j/98304397425

This is a joint work with my student William Sokurski. The Braverman-Kazhdan/Ngo program aims to generalize the work of Godement-Jacquet/Tate from GL(n) to an arbitrary reductive group G and a finite dimensional representation r of its L-group. We briefly review the general concepts of the method, including Renner's construction of reductive r-monoids, objects generalizing the space of n x n matrices in the case of GL(n), concluding with the example of symmetric power representations of GL(2,C).

We then define a space of r-Schwartz functions in terms of the restriction of the conjectural r-Fourier transform to the space of smooth functions of compact support in G, as in the work of Braverman and Kazhdan. Multiplicativity, which states the equality of gamma factors for the parabolically induced and inducing data, follows from a natural commutativity of the corresponding Fourier transforms on G and the Levi subgroup L, sharing the fixed maximal torus defining the monoid, with a generalized Harish-Chandra transform. We finally present a candidate for the Fourier transform attached to the symmetric cube of GL(2,C) as a fiber integration over the compact-inducing data for tamely ramified supercuspidals of GL(2) constructed by Howe and Bushnell-Kutzko, as predicted by Ngo. This last result, which is part of Sokurski's thesis, can be extended to all the odd symmetric powers.

 https://weizmann.zoom.us/j/98304397425

 

WednesdayOct 14, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Spencer LeslieTitle:Relative endoscopy for certain symmetric spacesAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/98304397425

Motivated by problems arising from the study of certain relative trace formulas, I discuss a notion of endoscopy in a relative setting. The main example is that of unitary Friedberg-Jacquet periods, which are related to special cycles in certain unitary Shimura varieties. After introducing the endoscopic symmetric spaces in this case, I will sketch the proof of the fundamental lemma.

https://weizmann.zoom.us/j/98304397425

MondayOct 12, 202009:00
Seminar in Geometry and Topology
Speaker:Andre Belotto da SilvaTitle:A proof of A. Gabrielov's rank TheoremAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/9183356684?pwd=QlN4T1VtcXc4QXdXNEYxUlQ4dmR5dz09

This talk concerns Gabrielov's rank Theorem, a fundamental result in local complex and real-analytic geometry, proved in the 1970's. Contrasting with the algebraic case, it is not in general true that the analytic rank of an analytic map (that is, the dimension of the analytic Zariski closure of its image) is equal to the generic rank of the map (that is, the generic dimension of its image). This phenomenon is behind several pathological examples in local real-analytic geometry. Gabrielov's rank Theorem provides a formal condition for the equality to hold.
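In symbols (my paraphrase of the statement, with assumed notation): to an analytic map germ \varphi: (\mathbb{C}^n,0) \to (\mathbb{C}^m,0) one attaches three ranks,

$$ r(\varphi) \;\le\; r^{\mathcal F}(\varphi) \;\le\; r^{\mathcal A}(\varphi), $$

the generic rank, the formal rank (the dimension of the formal Zariski closure of the image) and the analytic rank; Gabrielov's theorem asserts that if r(\varphi) = r^{\mathcal F}(\varphi), then all three ranks coincide.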

https://weizmann.zoom.us/j/9183356684?pwd=QlN4T1VtcXc4QXdXNEYxUlQ4dmR5dz09

WednesdaySep 30, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Bin XuTitle:A-packets for quasisplit GSp(2n) and GSO(2n) over a p-adic fieldAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

Arthur (1989) conjectured that the discrete spectrum of automorphic representations of a connected reductive group over a number field can be decomposed into global A-packets, in terms of which he also conjectured a multiplicity formula. Arthur (2013) proved his conjectures for symplectic and orthogonal groups, in which case the global A-packets are parametrized by self-dual automorphic representations of general linear groups. In this talk, I will give a construction of the local A-packets for general symplectic and general even orthogonal groups in the nonarchimedean case. This is based on our earlier works in the tempered case, and it follows a construction by Moeglin for symplectic and orthogonal groups.

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdaySep 23, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Job Kuit Title:On the little Weyl group of a real spherical space (joint with Eitan Sayag)Abstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

Let G be a connected reductive group defined over a field k of characteristic 0. Recently Knop and Krötz showed that one can attach a Weyl group to any algebraic homogeneous G-variety defined over k. This Weyl group is called the little Weyl group. In this talk I will discuss a geometric construction of the little Weyl group for a real spherical space G/H. Our technique is based on a fine analysis of limits of conjugates of the subalgebra Lie(H) along one-parameter subgroups in the Grassmannian of subspaces of Lie(G).

 

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdaySep 16, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Stephen Kudla Title:On the subring of special cycles on orthogonal Shimura varietiesAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

By old results with Millson, the generating series for the cohomology classes of special cycles on orthogonal Shimura varieties over a totally real field are Hilbert-Siegel modular forms. These forms arise via theta series. Using this result and the Siegel-Weil formula, we show that the products in the subring of cohomology generated by the special cycles are controlled by the Fourier coefficients of triple pullbacks of certain Siegel-Eisenstein series.
As a consequence, there are comparison isomorphisms between special subrings for different Shimura varieties. In the case in which the signature of the quadratic space V is (m,2) at an even number d_+ of archimedean places, the comparison gives a `combinatorial model' for the special cycle ring in terms of the associated totally positive definite space.

 

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdaySep 02, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Axel Kleinschmidt Title:Automorphy in string scattering and small representationsAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

I will review how automorphic representations arise in the calculation of string theory scattering amplitudes. As follows from the work of Green, Miller, Vanhove, Pioline and others, the automorphic representations are associated with split real groups of a certain exceptional family. In the cases that are well understood, these representations have very small Gelfand-Kirillov dimension. Their Fourier expansions can be calculated using different methods and confirm physical expectations about the wavefront set. In work with Gourevitch, Gustafsson, Persson and Sahi, the method of Whittaker pairs was employed to systematize this analysis. I will also comment on the cases that are less well understood in physics and that appear to go beyond the standard notion of automorphic representations, since the usual Z-finiteness condition is violated.

 

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayAug 26, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Bernhard Kroetz Title:Ellipticity and discrete seriesAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

We explain by elementary means why the existence of a discrete series representation of a real reductive group G implies the existence of a compact Cartan subgroup of G. The presented approach has the potential to generalize to real spherical spaces.

The talk will be based on https://arxiv.org/pdf/2007.15312.pdf

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayAug 12, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Baiying Liu Title:On automorphic descent from GL(7) to G2Abstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

In this talk, I will introduce the functorial descent from cuspidal automorphic representations \pi of GL7(A) with L^S(s, \pi, \wedge^3) having a pole at s=1 to the split exceptional group G2(A), using Fourier coefficients associated to two nilpotent orbits of E7. We show that one descent module is generic, and under mild assumptions on the unramified components of \pi, it is cuspidal and has \pi as a weak functorial lift of each of its irreducible summands. However, we show that the other descent module supports not only the non-degenerate Whittaker integral on G2(A), but also every degenerate Whittaker integral. Thus it is generic, but not cuspidal. This is a new phenomenon, compared to the theory of functorial descent for classical and GSpin groups. This work is joint with Joseph Hundley.

 

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

 

MondayAug 10, 202016:00
Seminar in Geometry and Topology
Speaker:Jorge Vitorio PereiraTitle:Codimension one foliations on projective manifoldsAbstract:opens in new windowin html    pdfopens in new windowZOOM: https://weizmann.zoom.us/j/97197369277?pwd=c0kyTVVDaWhLQ1NaWDJLVE9MTzBVUT09

Review of old and new results on the global structure of singular holomorphic codimension one foliations on projective manifolds and their moduli spaces.

ZOOM: https://weizmann.zoom.us/j/97197369277?pwd=c0kyTVVDaWhLQ1NaWDJLVE9MTzBVUT09 

 

MondayAug 03, 202016:00
Seminar in Geometry and Topology
Speaker:Askold KhovanskiiTitle:GOOD COMPACTIFICATION THEOREM FOR (C*)^nAbstract:opens in new windowin html    pdfopens in new windowZoom: https://weizmann.zoom.us/j/92678677338?pwd=T00rNnIxRFg1UlRYQkJIZDRuZ09aQT09

ThursdayJul 30, 202012:15
Vision and AI
Speaker:Assaf ShocherTitle:Semantic Pyramid for Image Generation Abstract:opens in new windowin html    pdfopens in new windowZoom: https://weizmann.zoom.us/j/96905267652?pwd=MGxhem1WVG1WbzRQSTVnYXZGLy81dz09

We present a novel GAN-based model that utilizes the space of deep features learned by a pre-trained classification model. Inspired by classical image pyramid representations, we construct our model as a Semantic Generation Pyramid - a hierarchical framework which leverages the continuum of semantic information encapsulated in such deep features; this ranges from low level information contained in fine features to high level, semantic information contained in deeper features. More specifically, given a set of features extracted from a reference image, our model generates diverse image samples, each with matching features at each semantic level of the classification model. We demonstrate that our model results in a versatile and flexible framework that can be used in various classic and novel image generation tasks. These include: generating images with a controllable extent of semantic similarity to a reference image, and different manipulation tasks such as semantically-controlled inpainting and compositing; all achieved with the same model, with no further training.

https://arxiv.org/abs/2003.06221
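A minimal sketch of the kind of multi-level feature-matching constraint described above (using a fixed pretrained classifier's intermediate activations as the "pyramid"; the specific levels, weights and L2 penalty here are my assumptions, not the paper's losses):

```python
import numpy as np

def feature_matching_loss(feats_generated, feats_reference, level_weights=None):
    """Penalize mismatch between the generated image's and the reference image's
    features at every semantic level of a fixed, pretrained classifier.
    feats_*: list of arrays, one per level (shallow -> deep).
    """
    if level_weights is None:
        level_weights = [1.0] * len(feats_reference)
    loss = 0.0
    for w, fg, fr in zip(level_weights, feats_generated, feats_reference):
        loss += w * np.mean((fg - fr) ** 2)   # L2 match at this level of the pyramid
    return loss

# Hypothetical 3-level pyramid of activations for a reference and a generated image
rng = np.random.default_rng(0)
ref = [rng.normal(size=s) for s in [(64, 56, 56), (256, 14, 14), (1000,)]]
gen = [r + 0.1 * rng.normal(size=r.shape) for r in ref]
print(feature_matching_loss(gen, ref))
```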

Zoom: https://weizmann.zoom.us/j/96905267652?pwd=MGxhem1WVG1WbzRQSTVnYXZGLy81dz09

 

WednesdayJul 29, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Wen-Wei Li Title:On the distinction of Harish-Chandra modules and its Ext-analogues.Abstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

 

One core problem in relative harmonic analysis is to study the space of H-invariant linear functionals on an admissible representation, where H is a spherical subgroup of a reductive group G over a local field. In this talk, I will focus on the Archimedean case in the setting of Harish-Chandra modules. I will review the interpretation of these Hom spaces in terms of certain regular holonomic D-modules on G/H (arXiv:1905.08135), under mild conditions on H. Then I will try to sketch a possible extension of this strategy to the Ext-analogues and the Euler-Poincaré numbers. This is a work in progress.

 

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

MondayJul 27, 202016:00
Seminar in Geometry and Topology
Speaker:Javier RibonTitle:The role of Puiseux characteristics in the local Poincaré problemAbstract:opens in new windowin html    pdfopens in new windowhttps://weizmann.zoom.us/j/9183356684?pwd=QlN4T1VtcXc4QXdXNEYxUlQ4dmR5dz09

The usual Poincare problem consists in determining whether or not there exist upper bounds for the degree of an invariant algebraic curve of a foliation of the complex projective plane in terms of the degree of the foliation. We are interested in the local analogue,  in which degrees are replaced with vanishing multiplicities at the origin. Since cusps of arbitrary degree can be invariant curves of foliations of degree one in the global setting and of multiplicity one in the local one, there are no universal upper bounds. In order to deal with this issue, extra hypotheses were considered, either on the invariant curve (Cerveau - Lins Neto,...) or on the desingularization of the foliation (Carnicer...). We consider an irreducible germ of invariant curve and no extra hypotheses. We provide a lower bound for the multiplicity of a germ of complex foliation in dimension 2 in terms of Puiseux characteristics of an irreducible invariant curve. Obviously, the lower bound is trivial for a cusp but it is valuable if the curve has at least two Puiseux characteristic exponents. This is a joint work with Jose Cano and Pedro Fortuny.  

Join Zoom Meeting
https://weizmann.zoom.us/j/9183356684?pwd=QlN4T1VtcXc4QXdXNEYxUlQ4dmR5dz09

 

WednesdayJul 22, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Solomon Friedberg Title:A new theta correspondenceAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

The classical theta correspondence establishes a relationship between automorphic representations on special orthogonal groups and automorphic representations on symplectic groups or their double covers. This correspondence is achieved by using as integral kernel a theta series that is constructed from the Weil representation. In this talk I will briefly survey earlier work on (local and global, classical and other) theta correspondences and then present an extension of the classical theta correspondence to higher degree metaplectic covers. The key issue here is that for higher degree covers there is no analogue of the Weil representation (or even a minimal representation), so additional ingredients are needed. Joint work with David Ginzburg.

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

MondayJul 20, 202016:00
Seminar in Geometry and Topology
Speaker:Erik LundbergTitle:Limit cycle enumeration for random vector fieldsAbstract:opens in new windowin html    pdfopens in new windowZoom link: https://weizmann.zoom.us/j/94594972747?pwd=cWl0bjB1T2ZYSUp0dzdaK3NNa3Mzdz09

The second part of Hilbert's sixteenth problem asks for a study of the number and relative positions of the limit cycles of an ODE system associated to a planar vector field with polynomial component functions. Seeking a probabilistic perspective on Hilbert's problem, we present recent results on the distribution of limit cycles when the vector field component functions are random polynomials.  We present a lower bound for the average number of limit cycles for the Kostlan-Shub-Smale model, and we present asymptotic results for a special class of limit cycle enumeration problems concerning a randomly perturbed center focus.  We will discuss some ideas from the proofs which combine techniques from dynamical systems (such as transverse annuli, Poincare maps, and Melnikov functions) with those coming from the theory of random analytic functions (such as real zeros of random series, the Kac-Rice formula, and the barrier construction from the study of nodal sets of random eigenfunctions).  We conclude by discussing future directions and open problems.
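
The Kostlan-Shub-Smale model and the Poincare-map viewpoint mentioned above are easy to play with numerically. The toy experiment below (my own illustration, not the speaker's method; the degree, the choice of section, and the integration tolerances are arbitrary) samples a random Kostlan vector field and counts sign changes of the displacement of a numerical first-return map along the positive x-axis; each sign change signals a closed orbit (generically, a limit cycle) crossing the section between neighbouring grid points.

```python
# Toy experiment: sample a Kostlan(-Shub-Smale) random planar vector field and look for
# limit cycles crossing the positive x-axis via sign changes of the displacement
# x_return - x_0 of a numerically computed Poincare first-return map.
import numpy as np
from math import factorial
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
d = 5  # degree of the random polynomials (arbitrary choice)

def kostlan_poly(d, rng):
    """P(x, y) = sum a_ij * sqrt(d!/(i! j! (d-i-j)!)) x^i y^j with a_ij iid N(0, 1)."""
    terms = [(rng.standard_normal()
              * np.sqrt(factorial(d) / (factorial(i) * factorial(j) * factorial(d - i - j))),
              i, j)
             for i in range(d + 1) for j in range(d + 1 - i)]
    return lambda x, y: sum(c * x**i * y**j for c, i, j in terms)

P, Q = kostlan_poly(d, rng), kostlan_poly(d, rng)
field = lambda t, z: [P(z[0], z[1]), Q(z[0], z[1])]

def upcross(t, z):          # event: trajectory crosses y = 0 going upwards
    return z[1]
upcross.direction = 1.0

def displacement(x0, t_max=200.0):
    """x_return - x0 for the first return to the positive x-axis (nan if no return)."""
    sol = solve_ivp(field, (0.0, t_max), [x0, 1e-9], events=upcross,
                    rtol=1e-8, atol=1e-10, max_step=0.05)
    for t_ev, z_ev in zip(sol.t_events[0], sol.y_events[0]):
        if t_ev > 1e-6 and z_ev[0] > 0:
            return z_ev[0] - x0
    return np.nan

xs = np.linspace(0.1, 2.0, 40)
disp = np.array([displacement(x) for x in xs])
ok = ~np.isnan(disp)
sign_changes = int(np.sum(np.diff(np.sign(disp[ok])) != 0))
print(f"sign changes of the displacement (crossing cycles detected): {sign_changes}")
```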

Relevant text: preprint on arXiv

 

Zoom link: https://weizmann.zoom.us/j/94594972747?pwd=cWl0bjB1T2ZYSUp0dzdaK3NNa3Mzdz09
 

 

ThursdayJul 16, 202017:00
Seminar in Geometry and Topology
Speaker:Pavao Mardešić Title:Infinitesimal Center Problem on zero cycles and the composition conjectureAbstract:opens in new windowin html    pdfopens in new windowZoom link: https://us02web.zoom.us/j/88255165329

I will present a joint work with A. Alvarez, J.L. Bravo and C. Christopher. 
We study the analogue of the classical infinitesimal center problem in the plane, but for zero cycles. We define the displacement function in this context and prove that it is identically zero if and only if the deformation has a composition factor. That is, we prove that here the composition conjecture is true, in contrast with the tangential center problem on zero cycles. Finally, we give examples of applications of our results.

Zoom link: https://us02web.zoom.us/j/88255165329

WednesdayJul 15, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Wee Teck Gan Title:Transfer of characters under the Howe duality correspondenceAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

Let Sp(W) x O(V) be a dual reductive pair. If \pi is an irreducible representation of, say, Sp(W), then one may consider its theta lift \Theta(\pi) on O(V). In this talk, we discuss how the Harish-Chandra characters of \pi and \Theta(\pi) are related (when the representation of the smaller group is tempered).

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

WednesdayJul 08, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Dmitry Gourevitch Title:Nilpotent orbits associated to distinguished representations of reductive groups.Abstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

Let G be a reductive group over a local field F of characteristic zero, and H be a spherical subgroup. An irreducible representation of G is said to be distinguished by H if it has an H-invariant continuous linear functional. The study of distinguished representations is of much current interest, because of their relation to the Plancherel measure on G/H and to periods of automorphic forms.
While a complete classification seems to be out of reach, we established simple micro-local necessary conditions for distinction. The conditions are formulated in terms of the nilpotent orbits associated to the representation, in the spirit of the orbit method. Our results are strongest for Archimedean F. In this case, Rossmann showed that for any irreducible Casselman-Wallach representation, the Zariski closure of the wave-front set is the closure of a unique nilpotent complex orbit. We have shown that the restriction of this orbit to the complexified Lie algebra of H includes zero.
We apply this result to symmetric pairs, branching problems, and parabolic induction. We also have a twisted version for the case when π has a functional invariant with respect to an "additive" character of H. As an application of our theorem we derive necessary conditions for the existence of Rankin-Selberg, Bessel, Klyachko and Shalika models. Our results are compatible with the recent Gan-Gross-Prasad conjectures for non-generic representations. Our necessary conditions are rarely sufficient, but they are sufficient for one class of models: the Klyachko models for unitary representations of general linear groups.

This is a joint work with Eitan Sayag.

ZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

 

MondayJun 29, 202009:15
Seminar in Geometry and Topology
Speaker:Yanir Rubinstein Title:Asymptotic positivityAbstract:opens in new windowin html    pdfopens in new windowlink to seminar: meet.google.com/ipv-hqqo-oys

A general theme in geometry is the classification of algebraic/differential geometric structures which satisfy a positivity property. I will describe an "asymptotic" version of this theme that lies at the crossroads of algebraic geometry, differential geometry, and convexity. On the algebraic side, we introduce the class of asymptotically log Fano varieties and state a program towards classification in dimension 2, generalizing the classical efforts of the Italian School of Algebraic Geometry, Manin, and Hitchin, and making contact, somewhat surprisingly, with linear programming.
On the differential side, we explain the resolution of the Calabi-Tian conjecture, which implies that such a classification also classifies Kahler spaces with positive Ricci curvature away from a divisor and with conical singularities near the divisor, and discuss relations to "small angle limits" in Riemannian geometry, stability, non-compact Ricci-flat fibrations, Ricci solitons, and Kahler-Einstein edge metrics.

link to seminar: meet.google.com/ipv-hqqo-oys

ThursdayJun 25, 202012:15
Vision and AI
Speaker:Omry Sendik Title:Unsupervised multi-modal Styled Content GenerationAbstract:opens in new windowin html    pdfopens in new windowZoom Link: https://weizmann.zoom.us/j/91197729259

The emergence of deep generative models has recently enabled the automatic generation of massive amounts of graphical content, both in 2D and in 3D.
Generative Adversarial Networks (GANs) and style control mechanisms, such as Adaptive Instance Normalization (AdaIN), have proved particularly effective in this context, culminating in the state-of-the-art StyleGAN architecture.
While such models are able to learn diverse distributions, provided a sufficiently large training set, they are not well-suited for scenarios where the distribution of the training data exhibits a multi-modal behavior. In such cases, reshaping a uniform or normal distribution over the latent space into a complex multi-modal distribution in the data domain is challenging, and the generator might fail to sample the target distribution well. Furthermore, existing unsupervised generative models are not able to control the mode of the generated samples independently of the other visual attributes, despite the fact that they are typically disentangled in the training data.
In this work, we introduce UMMGAN, a novel architecture designed to better model multi-modal distributions, in an unsupervised fashion. Building upon the StyleGAN architecture, our network learns multiple modes, in a completely unsupervised manner, and combines them using a set of learned weights. We demonstrate that this approach is capable of effectively approximating a complex distribution as a superposition of multiple simple ones.
We further show that UMMGAN effectively disentangles between modes and style, thereby providing an independent degree of control over the generated content.

This is a joint work with Prof. Dani Lischinski and Prof. Daniel Cohen-Or.

Zoom Link: https://weizmann.zoom.us/j/91197729259

WednesdayJun 24, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Kei Yuen Chan Title:Restriction for general linear groups: the local non-tempered Gan-Gross-Prasad conjectureAbstract:opens in new windowin html    pdfopens in new windowZOOM: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

Recently, Gan-Gross-Prasad formulated new restriction problems for the non-tempered representations of classical groups. In this talk, I shall explain a proof of a conjecture on general linear groups over non-Archimedean fields. The main ingredients of the proof include the use of a filtration on parabolically induced representations when restricted to the mirabolic subgroups, and realizing the product with a Speh representation as a functor. The proof also uses a result of Lapid-Minguez on the irreducibility of a product of representations. If time permits, we shall also discuss generalizations to Bessel and Fourier-Jacobi models and towards Ext-branching laws.

Zoom link: https://weizmann.zoom.us/j/98304397425

WednesdayJun 17, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Yotam Hendel Title:Singularity properties of convolutions of algebraic morphisms and probabilistic Waring type problemsAbstract:opens in new windowin html    pdfopens in new windowZoom: https://weizmann.zoom.us/j/98304397425

Let G be a connected algebraic group.  
We define and study a convolution operation between algebraic morphisms into G.  We show that this operation yields morphisms with improved singularity properties, and in particular, that under reasonable assumptions one can always obtain a flat morphism with reduced fibers of rational singularities (termed an FRS morphism) after enough convolutions.

The FRS property is of high importance since (FRS) morphisms can be characterized by good asymptotic behaviour of the number of points of their fibers over finite rings of the form Z/p^kZ.
This further allows us to interpret the FRS property through probabilistic lenses.
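
For readers unfamiliar with the characterization alluded to here, the criterion (due to Aizenbud and Avni, as I recall it, and stated very roughly, suppressing the precise hypotheses and uniformity issues) says that a morphism $\varphi: X \to Y$ of smooth varieties is (FRS) if and only if the normalized fiber counts

$$\frac{\#\{x \in X(\mathbb{Z}/p^k\mathbb{Z}) : \varphi(x) = y\}}{p^{\,k(\dim X - \dim Y)}}$$

stay bounded, uniformly over primes $p$, levels $k \ge 1$, and points $y \in Y(\mathbb{Z}/p^k\mathbb{Z})$. This is the "good asymptotic behaviour" that makes the probabilistic interpretation possible.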

We discuss some of the above, motivated by the special case of word maps, which can be viewed as a relative analogue, in the setting of p-adic groups, of Waring's problem from 1770 (see arXiv:1912.12556).

Joint work with Itay Glazer.

Zoom link: https://weizmann.zoom.us/j/98304397425

ThursdayJun 11, 202012:15
Vision and AI
Speaker:Yuval Bahat Title:Explorable Super ResolutionAbstract:opens in new windowin html    pdfopens in new windowZoom meeting: https://weizmann.zoom.us/j/96418208541

Single image super resolution (SR) has seen major performance leaps in recent years. However, existing methods do not allow exploring the infinitely many plausible reconstructions that might have given rise to the observed low-resolution (LR) image. These different explanations to the LR image may dramatically vary in their textures and fine details, and may often encode completely different semantic information. In this work, we introduce the task of explorable super resolution. We propose a framework comprising a graphical user interface with a neural network backend, allowing editing the SR output so as to explore the abundance of plausible HR explanations to the LR input. At the heart of our method is a novel module that can wrap any existing SR network, analytically guaranteeing that its SR outputs would precisely match the LR input, when downsampled. Besides its importance in our setting, this module is guaranteed to decrease the reconstruction error of any SR network it wraps, and can be used to cope with blur kernels that are different from the one the network was trained for. We illustrate our approach in a variety of use cases, ranging from medical imaging and forensics, to graphics.
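
The consistency-enforcing idea described above, wrapping an arbitrary SR network so that its output reproduces the LR input exactly when downsampled, is easiest to see when the downsampling operator is plain s x s box averaging. The sketch below implements only that special case as an orthogonal projection; it is meant to illustrate the principle rather than reproduce the paper's module, which handles general blur kernels analytically.

```python
# Minimal consistency-enforcing correction for super resolution, assuming the LR image
# was produced by s x s box (average-pool) downsampling. The correction is the orthogonal
# projection of the SR output onto the affine set of HR images that downsample to the LR input.
import numpy as np

def box_downsample(hr, s):
    h, w = hr.shape
    return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def nn_upsample(lr, s):
    return np.repeat(np.repeat(lr, s, axis=0), s, axis=1)

def enforce_consistency(sr, lr, s):
    """Closest image to `sr` (in L2) whose box downsampling equals `lr`."""
    residual = lr - box_downsample(sr, s)
    return sr + nn_upsample(residual, s)   # NN upsampling is the pseudo-inverse of box averaging

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = 4
    hr_true = rng.random((64, 64))
    lr = box_downsample(hr_true, s)
    sr = rng.random((64, 64))              # stand-in for any SR network's output
    sr_fixed = enforce_consistency(sr, lr, s)
    print("consistency error before:", np.abs(box_downsample(sr, s) - lr).max())
    print("consistency error after: ", np.abs(box_downsample(sr_fixed, s) - lr).max())
    # Projecting onto a convex set that contains the true HR image cannot increase the error:
    print("distance to ground truth before:", np.linalg.norm(sr - hr_true),
          "after:", np.linalg.norm(sr_fixed - hr_true))
```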

Zoom link: https://weizmann.zoom.us/j/96418208541

WednesdayJun 10, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Stephen Miller Title:Unitarity, Eisenstein series, and Arthur's conjecturesAbstract:opens in new windowin html    pdfopens in new windowZOOM MEETING: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

Jim Arthur has conjectured the existence of some exotic "unipotent" representations of real reductive Lie groups, which are expected to form building blocks of the unitary dual. Though falling short of a full classification of the unitary dual itself, Arthur's conjectures touch on the essence of some of the most difficult questions concerning unitarity. In another direction, automorphic realizations of these representations are expected to have delicate arithmetic properties.

However, Arthur's unipotent representations are hard to identify, much less show are unitary. I'll present a status report, including the unitarity of the "Langlands element" Arthur describes directly (in joint work with Joe Hundley), and the full identification of the unipotent representations for exceptional real groups (joint work with Jeff Adams, Marc van Leeuwen, Annegret Paul, and David Vogan).

Zoom meeting: https://weizmann.zoom.us/j/98304397425

FridayJun 05, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Gal Dor Title:Algebraic structures on automorphic L-functionsAbstract:opens in new windowin html    pdfopens in new windowZoom meeting: https://weizmann.zoom.us/j/98304397425

Consider the function field $F$ of a smooth curve over $\FF_q$, with $q\neq 2$.

L-functions of automorphic representations of $\GL(2)$ over $F$ are important objects for studying the arithmetic properties of the field $F$. Unfortunately, they can be defined in two different ways: one by Godement-Jacquet, and one by Jacquet-Langlands. Classically, one shows that the resulting L-functions coincide using a complicated computation.

I will present a conceptual proof that the two families coincide, by categorifying the question. This correspondence will necessitate comparing two very different sets of data, which will have significant implications for the representation theory of $\GL(2)$. In particular, we will obtain an exotic symmetric monoidal structure on the category of representations of $\GL(2)$.

It turns out that an appropriate space of automorphic functions is a commutative algebra with respect to this symmetric monoidal structure. I will outline this construction, and show how it can be used to construct a category of automorphic representations.

Zoom meeting: https://weizmann.zoom.us/j/98304397425

ThursdayJun 04, 202012:15
Vision and AI
Speaker:Tali Treibitz Title:A Method For Removing Water From Underwater ImagesAbstract:opens in new windowin html    pdfopens in new windowZoom meeting: https://weizmann.zoom.us/j/99608045732

Robust recovery of lost colors in underwater images remains a challenging problem. We recently showed that this was partly due to the prevalent use of an atmospheric image formation model for underwater images and proposed a physically accurate model. The revised model showed: 1) the attenuation coefficient of the signal is not uniform across the scene but depends on object range and reflectance, 2) the coefficient governing the increase in backscatter with distance differs from the signal attenuation coefficient. Here, we present the first method that recovers color with our revised model, using RGBD images. The Sea-thru method estimates backscatter using the dark pixels and their known range information. Then, it uses an estimate of the spatially varying illuminant to obtain the range-dependent attenuation coefficient. Using more than 1,100 images from two optically different water bodies, which we make available, we show that our method with the revised model outperforms those using the atmospheric model. Consistent removal of water will open up large underwater datasets to powerful computer vision and machine learning algorithms, creating exciting opportunities for the future of underwater exploration and conservation. (Paper published in CVPR 19).
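
As a rough illustration of the two-stage structure described above (backscatter estimated from the darkest pixels at each range, then a range-dependent attenuation correction), here is a heavily simplified numpy sketch. The depth binning, the percentile threshold, the local-illuminant smoothing and the final white balance are my own placeholder choices; this is not the actual Sea-thru pipeline.

```python
# Simplified two-stage underwater color recovery in the spirit of the revised model
#   I_c = J_c * exp(-beta_D_c * z) + Binf_c * (1 - exp(-beta_B_c * z)).
# Inputs: rgb float image in [0, 1] of shape (H, W, 3), depth map z in meters of shape (H, W).
import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import uniform_filter

def backscatter_model(z, Binf, beta_B):
    return Binf * (1.0 - np.exp(-beta_B * z))

def estimate_backscatter(channel, depth, n_bins=10, pct=1.0):
    """Fit Binf, beta_B on the darkest pixels of each depth bin (~ pure backscatter)."""
    zs, vals = [], []
    edges = np.linspace(depth.min(), depth.max(), n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (depth >= lo) & (depth < hi)
        if m.sum() < 20:
            continue
        sel = m & (channel <= np.percentile(channel[m], pct))
        zs.append(depth[sel]); vals.append(channel[sel])
    z, v = np.concatenate(zs), np.concatenate(vals)
    (Binf, beta_B), _ = curve_fit(backscatter_model, z, v, p0=[v.max(), 0.5],
                                  bounds=(0, [1.0, 5.0]))
    return Binf, beta_B

def recover(rgb, depth, smooth=31):
    out = np.empty_like(rgb)
    for c in range(3):
        Binf, beta_B = estimate_backscatter(rgb[..., c], depth)
        direct = np.clip(rgb[..., c] - backscatter_model(depth, Binf, beta_B), 1e-4, 1.0)
        illum = np.clip(uniform_filter(direct, smooth), 1e-4, 1.0)  # crude local illuminant
        beta_D = -np.log(illum) / np.maximum(depth, 1e-3)           # range-dependent attenuation
        out[..., c] = direct * np.exp(beta_D * depth)
    return np.clip(out / out.max(), 0.0, 1.0)                       # placeholder white balance
```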

Zoom link: https://weizmann.zoom.us/j/99608045732

WednesdayJun 03, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Shachar Carmeli Title:A relative de Rham theorem for Nash Submersions Abstract:opens in new windowin html    pdfopens in new windowZOOM MEETING: HTTPS://WEIZMANN.ZOOM.US/J/98304397425

For a Nash manifold X and a Nash vector bundle E on X, one can form the topological vector space of Schwartz sections of E, i.e. the smooth sections which decay fast along with all their derivatives. It was shown by Aizenbud and Gourevitch, and independently by Luca Prelli, that for a Nash manifold X, the complex of Schwartz sections of the de Rham complex of X has cohomologies isomorphic to the compactly supported cohomologies of X.

In my talk I will present a work in progress, joint with Avraham Aizenbud, to generalize this result to the relative case, replacing the Nash manifold M with a Nash submersion f:M-->N. Using infinity categorical methods, I will define the notion of a Schwartz section of a Nash bundle E over a complex of sheaves with constructible cohomologies, generalizing the notion of Schwartz section on an open semialgebraic set. I will then relate the Schwartz sections of the relative de Rham complex of a Nash submersion f:M-->N with the Schwartz functions on N over the derived push-forward with proper support of the constant sheaf on M. Finally, I will conclude with some applications to the relation between the Schwartz sections of the relative de Rham complex and the topology of the fibers of f.

Zoom meeting: https://weizmann.zoom.us/j/98304397425

WednesdayMay 27, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Iordan Ganev Title:Beilinson-Bernstein localization via wonderful asymptotics.Abstract:opens in new windowin html    pdfopens in new windowZoom meeting: https://weizmann.zoom.us/j/98304397425

We explain how a doubled version of the Beilinson-Bernstein localization functor can be understood using the geometry of the wonderful compactification of a group. Specifically, bimodules for the Lie algebra give rise to monodromic D-modules on the horocycle space, and to filtered D-modules on the group that respect a certain matrix coefficients filtration. These two categories of D-modules are related via an associated graded construction in a way compatible with localization, Verdier specialization, the Vinberg semigroup, and additional structures. This talk is based on joint work with David Ben-Zvi.

Zoom meeting: https://weizmann.zoom.us/j/98304397425

MondayMay 25, 202017:30
Faculty Seminar
Speaker:Rotem Arnon-Friedman Title:Randomness extraction and amplification in the quantum worldAbstract:opens in new windowin html    pdfopens in new windowZoom meeting: https://weizmann.zoom.us/j/99368552173

Randomness is an essential resource in computer science. In most applications perfect, and sometimes private, randomness is needed, while it is not even clear that such a resource exists. It is well known that the tools of classical computer science do not allow us to create perfect randomness from a single weak source of randomness - a fact that initiated the study of special functions called randomness extractors.
The first part of the talk will focus on the performance of randomness extractors in the presence of a quantum adversary and present a new model for two-source extractors, termed the quantum Markov model.
In the second part of the talk I will describe a task called "randomness amplification and privatization", where a single and public Santha-Vazirani source of randomness is being transformed into uniform and private randomness - a cryptographic task that cannot be accomplished using the tools of classical computer science. I will then present a protocol that builds on the extractors discussed in the first part of the talk and explain the main ingredients of its security proof.

Zoom meeting : https://weizmann.zoom.us/j/99368552173

WednesdayMay 20, 202016:30
Algebraic Geometry and Representation Theory Seminar
Speaker:Raphael Beuzart-PlessisTitle:Isolation of the cuspidal spectrum and application to the Gan-Gross-Prasad conjecture for unitary groups.Abstract:opens in new windowin html    pdfopens in new windowZoom meeting https://weizmann.zoom.us/j/98304397425

In this talk, I will explain a new way to construct smooth convolution operators on adelic groups that isolate (certain) cuspidal representations from the rest of the automorphic spectrum. Then, I will explain an application of this construction to the global Gan-Gross-Prasad conjecture for unitary groups.
This is joint work with Yifeng Liu, Wei Zhang and Xinwen Zhu.

Zoom meeting https://weizmann.zoom.us/j/98304397425 

ThursdayMar 05, 202013:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Emanuel Milman (Technion) & Tal Orenshtein (TU Berlin)Title:DOUBLE SEMINARAbstract:opens in new windowin html    pdfopens in new window

Speaker #1: Emanuel Milman (Technion)

Title: Functional Inequalities on sub-Riemannian manifolds via QCD

Abstract:We are interested in obtaining Poincar\'e and log-Sobolev inequalities on domains in sub-Riemannian manifolds (equipped with their natural sub-Riemannian metric and volume measure).
It is well-known that strictly sub-Riemannian manifolds do not satisfy any type of Curvature-Dimension condition CD(K,N), introduced by Lott-Sturm-Villani some 15 years ago, so we must follow a different path. We show that while ideal (strictly) sub-Riemannian manifolds do not satisfy any type of CD condition, they do satisfy a quasi-convex relaxation thereof, which we name QCD(Q,K,N). As a consequence, these spaces satisfy numerous functional inequalities with exactly the same quantitative dependence (up to a factor of Q) as their CD counterparts. We achieve this by extending the localization paradigm to completely general interpolation inequalities, and a one-dimensional comparison of QCD densities with their "CD upper envelope". We thus obtain the best known quantitative estimates for (say) the L^p-Poincar\'e and log-Sobolev inequalities on domains in the ideal sub-Riemannian setting, which in particular are independent of the topological dimension. For instance, the classical Li-Yau / Zhong-Yang spectral-gap estimate holds on all Heisenberg groups of arbitrary dimension up to a factor of 4.
No prior knowledge will be assumed, and we will (hopefully) explain all of the above notions during the talk.
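
To make the closing remark concrete: schematically, the classical Li-Yau / Zhong-Yang estimate bounds the spectral gap by $\lambda_1 \ge \pi^2 / \mathrm{diam}^2$ (under nonnegative Ricci curvature), and, as I read the abstract, the QCD machinery yields on every Heisenberg group

$$\lambda_1 \;\ge\; \frac{\pi^2}{4\,\mathrm{diam}^2},$$

that is, the same estimate up to the factor $Q = 4$. (This display is my paraphrase of the stated result, with all hypotheses suppressed.)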

Speaker #2: Tal Orenshtein (TU Berlin)

Title: Rough walks in random environment
Abstract: In this talk we shall review scaling limits for random walks in random environment, lifted to the rough path space, to the enhanced Brownian motion. Except for the immediate application to SDEs, this adds some new information on the structure of the limiting path. Time permitting, we shall elaborate on the tools used to tackle these problems. Based on joint works with Olga Lopusanschi, with Jean-Dominique Deuschel and Nicolas Perkowski, and with Johannes Bäumler, Noam Berger and Martin Slowik.

TuesdayMar 03, 202011:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Maria GorelikTitle:Multiplicities for DS(L) in osp-type.Abstract:opens in new windowin html    pdfopens in new window

The Duflo-Serganova functor is a cohomology functor relating the representation theory of Lie superalgebras of different ranks.
This is a tensor functor preserving superdimension.
Serganova conjectured that the image of a finite-dimensional simple module L under the Duflo-Serganova functor is semisimple. Heidersdorf and Weissauer established this conjecture in the gl-case and described DS(L). In my previous talk I sketched a proof of semisimplicity for osp-type. In this talk I will explain how to compute the multiplicities in DS(L).
This is a joint project with Thorsten Heidersdorf.
 

TuesdayFeb 25, 202011:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Dmitry GourevitchTitle:Geometric restrictions on nilpotent orbits associated to distinguished representations of reductive groups.Abstract:opens in new windowin html    pdfopens in new window

Let G be a reductive group over a local field, and H be a spherical subgroup. An irreducible representation of G is said to be distinguished by H if it has an H-invariant continuous linear functional. The study of distinguished representations is of much current interest, because of their relation to the Plancherel measure on G/H and to periods of automorphic forms.

While a complete classification seems to be out of reach, in a joint work with E. Sayag we established simple geometric necessary conditions for distinction. The conditions are formulated in terms of the nilpotent orbit associated to the representation. In the talk I will focus on the case of real reductive G, based on the recent preprint arXiv:2001.11746. Our main tool is the theory of associated varieties of modules over the Lie algebra of G.

ThursdayFeb 13, 202013:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Anirban Basak Title:Upper tail large deviations of subgraph counts in sparse random graphsAbstract:opens in new windowin html    pdfopens in new window

For a $\Delta$-regular connected graph ${\sf H}$, the problem of determining the upper tail large deviation for the number of copies of ${\sf H}$ in an Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with edge probability $p$ has generated significant interest. In the sparse regime, i.e. for $p \ll 1$, when $np^{\Delta/2} \gg (\log n)^{1/(v_{\sf H}-2)}$, where $v_{\sf H}$ is the number of vertices in ${\sf H}$, the upper tail large deviation event is believed to occur due to the presence of localized structures. For $p$ below this threshold, on the other hand, the large deviation is expected to be given by that of a Poisson random variable. In this talk, we will discuss some progress in resolving this conjecture.

This is based on joint work with Riddhipratim Basu.

MondayFeb 03, 202011:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Ron Rothblum Title:Local Proofs Approaching the Witness Length Abstract:opens in new windowin html    pdfopens in new window

Interactive oracle proofs (IOPs) are a hybrid between interactive proofs and PCPs. In an IOP the prover is allowed to interact with a verifier (like in an interactive proof) by sending relatively long messages to the verifier, who in turn is only allowed to query a few of the bits that were sent (like in a PCP).

For any NP relation for which membership can be decided in polynomial-time and bounded polynomial space (e.g., SAT, Hamiltonicity, Clique, Vertex-Cover, etc.) and for any constant \gamma>0, we construct an IOP with communication complexity (1+\gamma) \cdot n, where n is the original witness length. The number of rounds as well as the number of queries made by the IOP verifier are constant.

Joint work with Noga Ron-Zewi

ThursdayJan 30, 202013:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Amir Yehudayoff Title:Anti-concentration of inner productsAbstract:opens in new windowin html    pdfopens in new window

The plan is to discuss anti-concentration of the inner product between two independent random vectors in Euclidean space. We shall go over some background and applications, and see some proofs.

ThursdayJan 30, 202012:15
Vision and AIRoom 1
Speaker:Oshri Halimi Title:Handling the Unknown with Non-Rigid Geometric InvariantsAbstract:opens in new windowin html    pdfopens in new window

My goal is to demonstrate how geometric priors can replace the requirement for annotated 3D data. In this context I'll present two of my works. First, I will present a deep learning framework that estimates dense correspondence between articulated 3D shapes without using any ground truth labeling which I presented as an oral presentation at CVPR 2019: "Unsupervised Learning of Dense Shape Correspondence". We demonstrated that our method is applicable to full and partial 3D models as well as to realistic scans. The problem of incomplete 3D data can be encountered also in many other different scenarios. One such interesting problem of great practical importance is shape completion, to which I'll dedicate the second part of my lecture.
It is common to encounter situations where there is a considerable distinction between the scanned 3D model and the final rendered one. The distinction can be attributed to occlusions or directional view of the target, when the scanning device is localized in space. I will demonstrate that geometric priors can guide learning algorithms in the task of 3D model completion from partial observation. To this end, I'll present our recent work: "The Whole is Greater than the Sum of its Non-Rigid Parts". This famous declaration of Aristotle was adopted to explain human perception by the Gestalt psychology school of thought in the twentieth century. Here, we claim that observing part of an object which was previously acquired as a whole, one could deal with both partial matching and shape completion in a holistic manner. More specifically, given the geometry of a full, articulated object in a given pose, as well as a partial scan of the same object in a different pose, we address the problem of matching the part to the whole while simultaneously reconstructing the new pose from its partial observation. Our approach is data-driven, and takes the form of a Siamese autoencoder without the requirement of a consistent vertex labeling at inference time; as such, it can be used on unorganized point clouds as well as on triangle meshes. We demonstrate the practical effectiveness of our model in the applications of single-view deformable shape completion and dense shape correspondence, both on synthetic and real-world geometric data, where we outperform prior work on these tasks by a large margin.

WednesdayJan 29, 202013:00
Computer Science SeminarRoom 1
Speaker:Yuval DaganTitle:Interaction is necessary for distributed learning with privacy or communication constraintsAbstract:opens in new windowin html    pdfopens in new windowSPECIAL FOUNDATIONS OF COMPUTER SCIENCE SEMINAR

Local differential privacy (LDP) is a model where users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model the protocol consists of a single round in which a server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest. Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction this lower bound implies an exponential lower bound for stochastic convex optimization and specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer the questions posed in \citep{SmithTU17,DanielyF18}. Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large margin hyperplane. These lower bounds also hold in the model where communication from each user is limited and follow from a lower bound on learning using non-adaptive \emph{statistical queries}.

Link: https://arxiv.org/abs/1911.04014

 

 

SundayJan 26, 202011:15
Faculty SeminarRoom 155
Speaker:Meirav Zehavi Title:On Parameterized Analysis and the Disjoint Paths ProblemAbstract:opens in new windowin html    pdfopens in new window

Parameterized Analysis leads both to a deeper understanding of intractability results and to practical solutions for many NP-hard problems. Informally speaking, Parameterized Analysis is a mathematical paradigm to answer the following fundamental question: What makes an NP-hard problem hard? Specifically, how do different parameters (being formal quantifications of structure) of an NP-hard problem relate to its inherent difficulty? Can we exploit these relations algorithmically, and to which extent? Over the past three decades, Parameterized Analysis has grown to be a mature field of outstandingly broad scope with significant impact from both theoretical and practical perspectives on computation.

In this talk, I will first give a brief introduction of the field of Parameterized Analysis. Then, I will discuss some recent work in this field, where I mainly address (i) problems at the core of this field, rooted at Graph Minors and Graph Modification Problems, and (ii) applications of tools developed in (i) in particular, and of parameterized analysis in general, to Computational Geometry and Computational Social Choice. Additionally, I will zoom into a specific result, namely, the first single-exponential time parameterized algorithm for the Disjoint Paths problem on planar graphs. An efficient algorithm for the Disjoint Paths problem in general, and on "almost planar" graphs in particular, is a critical part in the quest for the establishment of an Efficient Graph Minors Theory. As the Graph Minors Theory is the origin of Parameterized Analysis and ubiquitous in the design of parameterized algorithms, making this theory efficient is a holy grail in the field.

ThursdayJan 23, 202013:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:James Lee Title:Metric entropies, L_1 transportation, and competitive analysisAbstract:opens in new windowin html    pdfopens in new window

The MTS (metrical task systems) problem (Borodin, Linial, and Saks 1992) is a general model for the analysis of algorithms that optimize in the presence of information arriving over time, where the state space is equipped with a metric. I will discuss a relatively new approach to this area, where both the algorithm and the method of analysis are derived canonically from a choice of a "regularizer" on the probability simplex. The regularizer can be interpreted as a Riemannian structure on the simplex, and then the algorithm is simply a gradient flow.

Defining the regularizer as an appropriate "noisy" multiscale metric entropy (similar to, e.g., the Talagrand \gamma_1 functional) yields the best-known competitive ratio for every metric space. For ultrametrics (aka, "HST metrics"), this achieves the conjectured bound, but the algorithm falls short of resolving the MTS conjecture for many other spaces, including subsets of the real line.
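
The "regularizer on the simplex, algorithm is a gradient flow" recipe is easiest to see in its most basic instance: a plain negative-entropy regularizer on a uniform metric, which gives the classical multiplicative-weights dynamics. The toy sketch below (discrete time, arbitrary learning rate, and emphatically not the multiscale construction from the talk) tracks both the service cost and the movement cost of the resulting fractional MTS algorithm.

```python
# Entropic mirror descent on the probability simplex as a fractional MTS algorithm on a
# uniform metric: pay service cost <p_t, c_t> plus movement cost = TV distance between
# consecutive distributions. Negative-entropy regularizer => multiplicative-weights updates.
import numpy as np

def mirror_descent_mts(costs, eta=0.5):
    n_rounds, n_states = costs.shape
    p = np.full(n_states, 1.0 / n_states)
    service = movement = 0.0
    for c in costs:
        service += p @ c
        p_new = p * np.exp(-eta * c)                 # mirror (multiplicative-weights) step
        p_new /= p_new.sum()
        movement += 0.5 * np.abs(p_new - p).sum()    # total-variation distance
        p = p_new
    return service + movement

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    costs = rng.exponential(scale=1.0, size=(500, 8))
    online = mirror_descent_mts(costs)
    best_static = costs.sum(axis=0).min()            # best single state in hindsight (weak benchmark)
    print(f"online cost {online:.1f} vs best static state {best_static:.1f}")
```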

ThursdayJan 23, 202012:15
Vision and AIRoom 1
Speaker:Oren Dovrat Title:Learning to SampleAbstract:opens in new windowin html    pdfopens in new window

Processing large point clouds is a challenging task. Therefore, the data is often sampled to a size that can be processed more easily. The question is how to sample the data? A popular sampling technique is Farthest Point Sampling (FPS). However, FPS is agnostic to a downstream application (classification, retrieval, etc.). The underlying assumption seems to be that minimizing the farthest point distance, as done by FPS, is a good proxy to other objective functions.

We show that it is better to learn how to sample. To do that, we propose a deep network to simplify 3D point clouds. The network, termed S-NET, takes a point cloud and produces a smaller point cloud that is optimized for a particular task. The simplified point cloud is not guaranteed to be a subset of the original point cloud. Therefore, we match it to a subset of the original points in a post-processing step. We contrast our approach with FPS by experimenting on two standard data sets and show significantly better results for a variety of applications. Our code is publicly available.
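
For reference, the Farthest Point Sampling baseline that S-NET is contrasted with is a few lines of numpy (standard greedy algorithm, my own implementation; the seed point is an arbitrary choice).

```python
# Farthest Point Sampling (FPS): greedily add the point farthest from the points chosen so far.
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """points: (n, d) array; returns the indices of k sampled points."""
    chosen = np.empty(k, dtype=int)
    chosen[0] = seed
    dist = np.linalg.norm(points - points[seed], axis=1)   # distance to the current sample set
    for i in range(1, k):
        chosen[i] = int(np.argmax(dist))                   # farthest point so far
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i]], axis=1))
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.standard_normal((2048, 3))
    idx = farthest_point_sampling(cloud, 32)
    print(cloud[idx].shape)   # (32, 3)
```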

TuesdayJan 21, 202011:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Valentin Buciumas Title:A bridge between p-adic and quantum group representations via Whittaker coinvariants.Abstract:opens in new windowin html    pdfopens in new window

Unramified principal series representations of p-adic GL(r) and its metaplectic covers are important in the theory of automorphic forms. I will present a method of relating the Whittaker coinvariants of such a representation with representations of quantum affine gl_n. This involves using a Schur-Weyl duality result due to Chari and Pressley and it allows us to compute the dimension of the Whittaker model of every irreducible smooth representation with Iwahori fixed vectors.

If time permits I will explain a conjectured version of this result for the symplectic group Sp(2r) which involves quantum symmetric pairs.

MondayJan 20, 202011:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Katrina LigettTitle:A New Analysis of Differential Privacy’s Generalization GuaranteesAbstract:opens in new windowin html    pdfopens in new window

Many data analysis pipelines are adaptive: the choice of which analysis to run next depends on the outcome of previous analyses. Common examples include variable selection for regression problems and hyper-parameter optimization in large-scale machine learning problems: in both cases, common practice involves repeatedly evaluating a series of models on the same dataset. Unfortunately, this kind of adaptive re-use of data invalidates many traditional methods of avoiding over-fitting and false discovery, and has been blamed in part for the recent flood of non-reproducible findings in the empirical sciences. An exciting line of work beginning with Dwork et al. 2015 establishes the first formal model and first algorithmic results providing a general approach to mitigating the harms of adaptivity, via a connection to the notion of differential privacy. Unfortunately, until now, those results were primarily of information theoretic interest, only beating out the simple approach of gathering fresh data for every computation ("sample-splitting") at the scale of many millions of datapoints.

In this work, we give a new proof of the transfer theorem that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the conditional distribution on datasets induced by the transcript of the interaction is close to its expectation on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to the expectation of the query on the conditional distribution. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the conditional distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the "monitor argument" used to derive high probability bounds in prior work.

An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive "sample-splitting" baseline at dramatically smaller dataset sizes compared to the previous state of the art, bringing techniques from this literature closer to practicality.

Joint work with: Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi (UPenn), and Moshe Shenfeld (HUJI).

This work appeared at ITCS 2020.

SundayJan 19, 202013:30
Computer Science SeminarRoom 155
Speaker:Jonathan ShaferTitle:Learning vs. VerifyingAbstract:opens in new windowin html    pdfopens in new windowspecial foundations of computer science seminar

This talk will address the following question: In what cases does learning a good hypothesis require more resources than verifying a hypothesis proposed by someone else?** If verification is significantly cheaper than learning, that could have important practical implications for delegation of machine learning tasks in commercial settings, and might also have consequences for verification of scientific publications, and for AI safety. Two results will be discussed: (1) There exists a learning problem where verification requires quadratically fewer random samples than are required for learning. (2) The broad class of Fourier-sparse functions (which includes decision trees, for example) can be efficiently verified using random samples only, even though it is widely believed to be impossible to efficiently learn this class from random samples alone.
This is joint work with Shafi Goldwasser (UC Berkeley), Guy Rothblum (WIS), and Amir Yehudayoff (Technion).  

** "Learning" -- as defined in Valiant's Probably Approximately Correct (PAC) framework

SundayJan 19, 202011:15
Faculty SeminarRoom 155
Speaker:Oded PadonTitle:Verification of distributed protocols using decidable logicsAbstract:opens in new windowin html    pdfopens in new window

Formal verification of infinite-state systems, and distributed systems in particular, is a long standing research goal. I will describe a series of works that develop a methodology for verifying distributed algorithms and systems using decidable logics, employing decomposition, abstraction, and user interaction. This methodology is implemented in an open-source tool, and has resulted in the first mechanized proofs of several important distributed protocols. I will also describe a novel approach to the problem of invariant inference based on a newly formalized duality between reachability and mathematical induction. The duality leads to a primal-dual search algorithm, and a prototype implementation already handles challenging examples that other state-of-the-art techniques cannot handle. I will briefly describe several other works, including a new optimization technique for deep learning computations that achieves significant speedups relative to existing deep learning frameworks.

ThursdayJan 16, 202012:15
Vision and AIRoom 1
Speaker:Idan Schwartz Title:Factor Graph AttentionAbstract:opens in new windowin html    pdfopens in new window

Dialog is an effective way to exchange information, but subtle details and nuances are extremely important. While significant progress has paved a path to address visual dialog with algorithms, details and nuances remain a challenge. Attention mechanisms have demonstrated compelling results to extract details in visual question answering and also provide a convincing framework for visual dialog due to their interpretability and effectiveness. However, the many data utilities that accompany visual dialog challenge existing attention techniques. We address this issue and develop a general attention mechanism for visual dialog which operates on any number of data utilities. To this end, we design a factor graph based attention mechanism which combines any number of utility representations. We illustrate the applicability of the proposed approach on the challenging and recently introduced VisDial datasets, outperforming recent state-of-the-art methods by 1.1% for VisDial0.9 and by 2% for VisDial1.0 on MRR. Our ensemble model improved the MRR score on VisDial1.0 by more than 6%.

ThursdayJan 16, 202011:00
Faculty SeminarSchmidt Auditorium
Speaker:Alon Kipnis Title:Higher Criticism for discriminating frequency-tables and testing authorshipAbstract:opens in new windowin html    pdfopens in new windowNOTE UNUSUAL DAY AND PLACE

The Higher Criticism (HC) test is a useful tool for detecting the presence of a signal spread across a vast number of features, especially in the sparse setting when only a few features are useful while the rest are pure noise. We adapt the HC test to the two-sample setting of detecting changes between two frequency tables. We apply this adaptation to authorship attribution challenges, where the goal is to identify the author of a document using other documents whose authorship is known. The method is simple yet performs well without handcrafting and tuning. Furthermore, as an inherent side effect, the HC calculation identifies a subset of discriminating words, which allow additional interpretation of the results. Our examples include authorship in the Federalist Papers, machine-generated texts, and the identity of the creator of Bitcoin.

We take two approaches to analyze the success of our method. First, we show that, in practice, the discriminating words identified by the test have low variance across documents belonging to a corpus of homogeneous authorship. We conclude that in testing a new document against the corpus of an author, HC is mostly affected by words characteristic of that author and is relatively unaffected by topic structure. Second, we analyze the power of the test in discriminating two multinomial distributions under a sparse and weak perturbation model. We show that our test has maximal power in a wide range of the model parameters, even though these parameters are unknown to the user.
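
A minimal version of the two-sample procedure described above might look as follows: exact binomial p-values per word, then the standard HC statistic over the smallest p-values. This is my own sketch; the truncation parameter and the handling of rare words are arbitrary choices, not the paper's.

```python
# Two-sample Higher Criticism for word-frequency tables: per-word exact binomial p-values,
# then HC = max over the smallest p-values of sqrt(N) * (i/N - p_(i)) / sqrt(p_(i)(1 - p_(i))).
import numpy as np
from scipy.stats import binomtest

def word_pvalues(counts1, counts2):
    n1, n2 = counts1.sum(), counts2.sum()
    pvals = []
    for k1, k2 in zip(counts1, counts2):
        m = k1 + k2
        if m == 0:
            continue
        # under the null, k1 ~ Binomial(m, n1 / (n1 + n2))
        pvals.append(binomtest(int(k1), int(m), n1 / (n1 + n2)).pvalue)
    return np.array(pvals)

def higher_criticism(pvals, gamma0=0.2):
    N = len(pvals)
    p_sorted = np.sort(pvals)
    i = np.arange(1, N + 1)
    hc = np.sqrt(N) * (i / N - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted) + 1e-12)
    return hc[:max(1, int(gamma0 * N))].max()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.dirichlet(np.ones(2000))                 # a shared word-frequency profile
    doc_a = rng.multinomial(5000, base)
    doc_b = rng.multinomial(5000, base)                 # same source: HC should stay moderate
    print("HC(same source):", round(higher_criticism(word_pvalues(doc_a, doc_b)), 2))
```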

WednesdayJan 15, 202011:15
Algebraic Geometry and Representation Theory SeminarRoom A
Speaker:Pavel Etingof Title:The Frobenius functor for symmetric tensor categories in positive characteristic.Abstract:opens in new windowin html    pdfopens in new windowNOTE UNUSUAL PLACE: Feinberg Graduate School, ROOM A

An important role in modular representation theory is played by the Frobenius twist functor, twisting the k-linear structure of a representation by the Frobenius automorphism F(a)=a^p of the (algebraically closed) ground field k of characteristic p. I will define an analog of this functor for any symmetric tensor category of characteristic p. One of the main new features is that unlike the classical Frobenius twist functor, this functor need not be left or right exact. I will give examples when it is not and describe a replacement of the exactness property. I will also describe applications of this notion to formulating and proving analogs of Deligne's theorem in positive characteristic. This is joint work with V. Ostrik.

MondayJan 13, 202011:15
Computer Science SeminarRoom 155
Speaker:Lijie Chen Title:Strong Average-Case Circuit Lower Bounds from Non-trivial DerandomizationAbstract:opens in new windowin html    pdfopens in new windowSPECIAL THEORY SEMINAR

We prove that for all constants a, NQP = NTIME[n^polylog(n)] cannot be (1/2 + 2^(-log^a n))-approximated by 2^(log^a n)-size ACC^0 ∘ THR circuits (ACC^0 circuits with a bottom layer of threshold gates). Previously, it was even open whether E^NP can be (1/2+1/sqrt(n))-approximated by AC^0[2] circuits.
   
More generally, we establish a connection showing that, for a typical circuit class C, non-trivial nondeterministic CAPP algorithms imply strong (1/2 + 1/n^{omega(1)}) average-case lower bounds for nondeterministic time classes against C circuits. The existence of such (deterministic) algorithms is much weaker than the widely believed conjecture PromiseBPP = PromiseP.

Our new results build on a line of recent works, including [Murray and Williams, STOC 2018], [Chen and Williams, CCC 2019], and [Chen, FOCS 2019]. In particular, they strengthen the corresponding (1/2 + 1/polylog(n))-inapproximability average-case lower bounds in [Chen, FOCS 2019]. The two important technical ingredients are techniques from Cryptography in NC^0 [Applebaum et al., SICOMP 2006], and Probabilistically Checkable Proofs of Proximity with NC^1-computable proofs.

MondayJan 13, 202009:15
Seminar in Geometry and TopologyRoom 155
Speaker:Beatriz Molina SamperTitle: Newton non-degenerate codimension one foliations and blowing-upsAbstract:opens in new windowin html    pdfopens in new window

 

In this talk we introduce Newton non-degenerate codimension one foliations, which are given in terms of Newton polyhedra following the classical ideas of Kouchnirenko for hypersurfaces. We characterize Newton non-degenerate foliations in geometrical terms, as being foliations admitting a reduction of singularities of a combinatorial nature, the ones that we call (weak) toric type foliations. In addition, we provide a positive answer to Thom's question about the existence of an invariant hypersurface for germs of codimension one foliations, in the three-dimensional case of toric type. In order to do it, we pass through a study of a global character in dimension two. The Khovanskii-Kouchnirenko-Bernstein result, which relates the number of solutions of a Laurent polynomial system to the mixed volume of the associated polyhedra, leads us to conclude that isolated branches of Newton non-degenerate foliations defined over projective toric surfaces extend in a global way. This prolongation property is the key to spreading the invariant surfaces in dimension three through the compact dicritical components.

SundayJan 12, 202011:00
Faculty SeminarRoom 155
Speaker:Yaniv Romano Title:Reliability, Equity, and Reproducibility in Modern Machine Learning Abstract:opens in new windowin html    pdfopens in new windowPLEASE NOTE THE UNUSUAL TIME

Modern machine learning algorithms have achieved remarkable performance in a myriad of applications, and are increasingly used to make impactful decisions in the hiring process, criminal sentencing, healthcare diagnostics and even to make new scientific discoveries. The use of data-driven algorithms in high-stakes applications is exciting yet alarming: these methods are extremely complex, often brittle, and notoriously hard to analyze and interpret. Naturally, concerns have been raised about the reliability, fairness, and reproducibility of the output of such algorithms. This talk introduces statistical tools that can be wrapped around any "black-box" algorithm to provide valid inferential results while taking advantage of their impressive performance. We present novel developments in conformal prediction and quantile regression, which rigorously guarantee the reliability of complex predictive models, and show how these methodologies can be used to treat individuals equitably. Next, we focus on reproducibility and introduce an operational selective inference tool that builds upon the knockoff framework and leverages recent progress in deep generative models. This methodology allows for reliable identification of a subset of important features that is likely to explain a phenomenon under study in a challenging setting where the data distribution is unknown, e.g., mutations that are truly linked to changes in drug resistance.
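
As a concrete instance of the "wrap any black-box algorithm" theme, here is a short sketch of split conformalized quantile regression in the spirit of the conformal prediction line of work mentioned above. The base learner, the miscoverage level, the synthetic data and the splits are placeholder choices of mine.

```python
# Split conformalized quantile regression: wrap two quantile regressors with a calibration
# step so that the resulting prediction intervals have finite-sample marginal coverage.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

alpha = 0.1   # target miscoverage level

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3000, 1))
y = np.sin(X[:, 0]) + 0.3 * (1 + np.abs(X[:, 0])) * rng.standard_normal(3000)

X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# any black-box lower/upper quantile regressors would do here
q_lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# conformity scores on the calibration set, and their finite-sample-corrected quantile
scores = np.maximum(q_lo.predict(X_cal) - y_cal, y_cal - q_hi.predict(X_cal))
n = len(scores)
k = min(int(np.ceil((n + 1) * (1 - alpha))) - 1, n - 1)
q_hat = np.sort(scores)[k]

lo, hi = q_lo.predict(X_te) - q_hat, q_hi.predict(X_te) + q_hat
coverage = np.mean((y_te >= lo) & (y_te <= hi))
print(f"empirical coverage: {coverage:.3f} (target {1 - alpha})")
```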

ThursdayJan 09, 202013:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Abel FarkasTitle:The Geometric measure theory of the Brownian pathAbstract:opens in new windowin html    pdfopens in new window

Let B denote the range of the Brownian motion in R^d. For a deterministic Borel measure nu we wish to find a random measure mu such that the support of mu is contained in B and the expectation of mu is nu. We discuss exactly when such a random measure can exist and construct it in those cases. We establish a formula for the expectation of the double integral with respect to mu, which is a strong tool for the geometric measure theory of the Brownian path.

ThursdayJan 09, 202012:15
Vision and AIRoom 1
Speaker:Daphna WeinshallTitle:Beyond Accuracy: Neural Networks Show Similar Learning Dynamics Across ArchitecturesAbstract:opens in new windowin html    pdfopens in new window

One of the unresolved questions in deep learning is the nature of the solutions that are being discovered. We investigated the collection of solutions reached by the same neural network (NN) architecture, with different random initialization of weights and random mini-batches. These solutions are shown to be rather similar -- more often than not, each train and test example is either classified correctly by all NN instances, or by none at all. Furthermore, all NNs seem to share the same learning dynamics, whereby initially the same train and test examples are correctly recognized by the learned model, followed by other examples that are learned in roughly the same order. When extending the investigation to heterogeneous collections of NN architectures, once again examples are seen to be learned in the same order irrespective of architecture, although the more powerful architecture may continue to learn and thus achieve higher accuracy. Finally, I will discuss cases where this pattern of similarity breaks down, which show that the reported similarity is not an artifact of optimization by gradient descent.
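
The bookkeeping behind these observations is easy to reproduce at toy scale: train several identical networks from different seeds, record which test examples each one classifies correctly after every epoch, and measure how often the instances agree. The sketch below does exactly that with small scikit-learn MLPs on synthetic data; it is a stand-in for the deep networks in the talk, intended only to illustrate the measurement.

```python
# Toy version of the "same examples learned in the same order" measurement:
# several identically-shaped MLPs, different random seeds, per-example test correctness per epoch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

n_nets, n_epochs = 5, 15
nets = [MLPClassifier(hidden_layer_sizes=(64,), solver="adam", random_state=s)
        for s in range(n_nets)]
classes = np.unique(y_tr)

for epoch in range(n_epochs):
    correct = np.zeros((n_nets, len(y_te)), dtype=bool)
    for i, net in enumerate(nets):
        net.partial_fit(X_tr, y_tr, classes=classes)    # one incremental pass over the data
        correct[i] = net.predict(X_te) == y_te
    all_right = correct.all(axis=0).mean()
    all_wrong = (~correct).all(axis=0).mean()
    # agreement: fraction of test examples classified correctly by all instances or by none
    print(f"epoch {epoch + 1:2d}: agreement = {all_right + all_wrong:.2f}, "
          f"mean accuracy = {correct.mean():.2f}")
```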

WednesdayJan 08, 202011:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Yair CarmonTitle:Fundamental limits of modern machine learning and how to get around themAbstract:opens in new windowin html    pdfopens in new window

This talk presents new computational and statistical barriers in machine learning, along with the algorithmic developments that they inspire.

The computational barriers arise in nonconvex optimization: we prove lower bounds on the (oracle) complexity of finding stationary points using (stochastic) gradient methods, showing that gradient descent is unimprovable for a natural class of problems. We bypass this barrier by designing an algorithm that outperforms gradient descent for a large subclass of problems with high-order smoothness. Our algorithm leverages classical momentum techniques from convex optimization using a "convex until proven guilty" principle that we develop.

The statistical barrier is the large amount of data required for adversarially robust learning. In a Gaussian model, we prove that unlabeled data allows us to circumvent an information theoretic gap between robust and standard classification. Our analysis directly leads to a general robust self-training procedure; we use it to significantly improve state-of-the-art performance on the challenging and extensively studied CIFAR-10 adversarial robustness benchmark.

TuesdayJan 07, 202011:00
Faculty SeminarRoom 1
Speaker:Vardan PapyanTitle:Traces of Class/Cross-Class Structure Pervade Deep Learning SpectraAbstract:opens in new windowin html    pdfopens in new windowNOTE THE UNUSUAL DAY AND ROOM

Numerous researchers recently applied empirical spectral analysis to the study of modern deep learning classifiers, observing spectral outliers and small but distinct bumps often seen beyond the edge of a "main bulk". This talk presents an important formal class/cross-class structure and shows how it lies at the origin of these visually striking patterns. The structure is shown to permeate the spectra of deepnet features, backpropagated errors, gradients, weights, Fisher information matrix and Hessian, whether these are considered in the context of an individual layer or the concatenation of them all. The significance of the structure is illustrated by (i) demonstrating empirically that the feature class means separate gradually from the bulk as a function of depth and become increasingly more orthogonal, (ii) proposing a correction to KFAC, a well known second-order optimization algorithm for training deepnets, and (iii) proving in the context of multinomial logistic regression that the ratio of outliers to bulk in the spectrum of the Fisher information matrix is predictive of misclassification.
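
A stylized version of the class/cross-class effect on spectra can be produced directly: draw features as "class mean plus isotropic noise" and look at the eigenvalues of their second-moment matrix; the class means contribute a handful of outliers beyond a Marchenko-Pastur-type bulk. This is only a toy illustration of the pattern the talk refers to, not an analysis of actual deepnet features.

```python
# Stylized illustration: features = class mean + isotropic noise give a spectrum with
# roughly C outlying eigenvalues (from the class structure) on top of a noise bulk.
import numpy as np

rng = np.random.default_rng(0)
C, d, n_per_class = 10, 512, 400                          # classes, feature dim, samples/class

means = 3.0 * rng.standard_normal((C, d)) / np.sqrt(d)    # well-separated class means
features = np.vstack([mu + rng.standard_normal((n_per_class, d)) / np.sqrt(d)
                      for mu in means])

# eigenvalues of the (uncentered) second-moment matrix of the features
second_moment = features.T @ features / features.shape[0]
eigs = np.sort(np.linalg.eigvalsh(second_moment))[::-1]

print("top 12 eigenvalues:", np.round(eigs[:12], 3))
print("largest bulk eigenvalue (13th):", round(eigs[12], 4))
# Expect ~C large outliers driven by the class means, well separated from the bulk.
```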

MondayJan 06, 202011:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Dr. Amir AbboudTitle:Some Connections Between the Conjectures in Fine-Grained ComplexityAbstract:opens in new windowin html    pdfopens in new window

Fine-grained complexity utilizes a small set of conjectures to derive conditional lower bounds for a large collection of problems. These conjectures concern the time complexity of a few core problems such as k-SAT, Orthogonal Vectors, 3SUM, k-Clique, and Set Cover.  The relationships between these conjectures are poorly understood.

This talk will discuss some connections between the conjectures, including a tight reduction from Weighted k-Clique to Orthogonal Vectors and new (quantum-inspired) findings about the Set Cover Conjecture.
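For reference, one of the conjectures mentioned above in its usual form (my paraphrase, not the speaker's): the Orthogonal Vectors (OV) conjecture asserts that for every $\epsilon > 0$ there is no algorithm that, given sets $A, B \subseteq \{0,1\}^d$ with $|A| = |B| = n$ and $d = \omega(\log n)$, decides whether there exist $a \in A$, $b \in B$ with $\langle a, b \rangle = 0$ in time $O(n^{2-\epsilon}\,\mathrm{poly}(d))$; this conjecture is known to follow from the Strong Exponential Time Hypothesis for k-SAT.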

SundayJan 05, 202009:45
Machine Learning and Statistics SeminarRoom 1
Speaker:Alon GonenTitle:Beyond Worst Case In Machine Learning: The Oracle ModelAbstract:opens in new windowin html    pdfopens in new windowNOTE UNUSUAL TIME AND DAY

In recent years there has been an increasing gap between the success of machine learning algorithms and our ability to explain their success theoretically.

Namely, many of the problems that are solved to a satisfactory degree of precision are computationally hard in the worst case. Fortunately, there are often reasonable assumptions which help us to get around these worst-case impediments and allow us to rigorously analyze heuristics that are used in practice.

In this talk I will advocate a complementary approach, where instead of explicitly characterizing some desired "niceness" properties of the data, we assume access to an optimization oracle that solves a relatively simpler task. This allows us to identify the sources of hardness and extend our theoretical understanding to new domains. Furthermore, we will show that seemingly innocent (and arguably justifiable) modifications to the oracle can lead to tractable reductions and even bypass hardness results.

We demonstrate these ideas using the following results: (i) an efficient algorithm for non-convex online learning using an optimization oracle; (ii) a faster boosting algorithm using a "simple" weak learner; (iii) an efficient reduction from online to private learning.

Joint works with Naman Agarwal, Noga Alon, Elad Hazan, and Shay Moran.

ThursdayJan 02, 202013:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Dima Dolgopyat Title:Edgeworth expansion in Central Limit TheoremAbstract:opens in new windowin html    pdfopens in new window

It is well known that the Central Limit Theorem is very effective, giving a reasonable approximation even for sums of a fairly small number of terms. Edgeworth expansions provide a convenient way to control the error in the Central Limit Theorem. In this talk I will review some recent results on this subject related to the interface between probability and dynamics.
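For orientation, a standard special case (not part of the abstract): for i.i.d. centered variables $X_1, \dots, X_n$ with unit variance, third moment $\mu_3$, and a suitably smooth (e.g. non-lattice) distribution, the first-order Edgeworth expansion refines the CLT approximation of $F_n(x) = P\big(\tfrac{1}{\sqrt{n}}\sum_{i\le n} X_i \le x\big)$ to $F_n(x) = \Phi(x) - \varphi(x)\,\frac{\mu_3}{6\sqrt{n}}\,(x^2 - 1) + o(n^{-1/2})$, where $\Phi$ and $\varphi$ are the standard normal distribution function and density. The talk concerns when such expansions remain valid for weakly dependent, dynamically generated sequences.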

ThursdayJan 02, 202012:15
Vision and AIRoom 1
Speaker:Gabriel Stanovsky Title:Meaning Representation in Natural Language Tasks Abstract:opens in new windowin html    pdfopens in new windowJOINT Machine Learning and Computer Vision seminar

Recent developments in Natural Language Processing (NLP) allow models to leverage large, unprecedented amounts of raw text, culminating in impressive performance gains in many of the field's long-standing challenges, such as machine translation, question answering, or information retrieval.

In this talk, I will show that despite these advances, state-of-the-art NLP models often fail to capture crucial aspects of text understanding. Instead, they excel by finding spurious patterns in the data, which lead to biased and brittle performance. For example, machine translation models are prone to translate doctors as men and nurses as women, regardless of context. I will then discuss an approach that could help overcome these challenges by explicitly representing the underlying meaning of texts in formal data structures. Finally, I will present robust models that use such explicit representations to effectively identify meaningful patterns in real-world texts, even when training data is scarce.

MondayDec 30, 201911:15
Faculty SeminarSchmidt Auditorium
Speaker:Reviel NetzTitle:What have we Learned from the Archimedes Palimpsest?Abstract:opens in new windowin html    pdfopens in new windowNOTE THE UNUSUAL DAY AND PLACE

The rediscovery of the Archimedes Palimpsest in 1998 was a unique event: one of the most important authors of antiquity was read afresh, with many transformations made in the established text. Twenty years later, what have we learned from the Archimedes Palimpsest? What changed in our picture of the ancient history of mathematics?

Reviel Netz is Pat Suppes Professor of Greek Mathematics and Astronomy at Stanford University. Besides editing the Archimedes Palimpsest, he has published many books and articles on the history of Greek mathematics and on other subjects. His most recent book, "Scale, Space and Canon in Ancient Literary Culture", is now forthcoming from Cambridge University Press.

Refreshments will be served at 11:00am.
 

MondayDec 30, 201909:15
Seminar in Geometry and TopologyRoom 155
Speaker:Serge TabachnikovTitle:Cross-ratio dynamics on ideal polygonsAbstract:opens in new windowin html    pdfopens in new window

Define a relation between labeled ideal polygons in the hyperbolic space by requiring that the complex distances (a combination of the distance and the angle) between their respective sides equal c; the complex number c is a parameter of the relation. This defines a 1-parameter family of maps on the moduli space of ideal polygons in the hyperbolic space (or, in its real version, in the hyperbolic plane). I shall discuss complete integrability of this family of maps and mention related topics, including Coxeter's frieze patterns and elements of the theory of cluster algebras.

ThursdayDec 26, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Max Fathi Title:A new proof of the Caffarelli contraction theoremAbstract:opens in new windowin html    pdfopens in new window

The Caffarelli contraction theorem states that the Brenier optimal transport map sending the Gaussian measure onto a uniformly log-concave probability measure is Lipschitz. In this talk, I will present a new proof, using entropic regularization and a variational characterization of Lipschitz transport maps due to Gozlan and Juillet. Based on joint work with Nathael Gozlan and Maxime Prod'homme.
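For reference, the statement being reproved, in its standard formulation (my wording, not the speaker's): if $d\mu = e^{-V}\,dx$ with $\nabla^2 V \succeq \kappa\,\mathrm{Id}$ for some $\kappa > 0$, and $\gamma$ is the standard Gaussian measure on $\mathbb{R}^n$, then the Brenier optimal transport map $T$ pushing $\gamma$ forward to $\mu$ is Lipschitz with constant at most $1/\sqrt{\kappa}$.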

ThursdayDec 26, 201912:15
Vision and AIRoom 1
Speaker:Greg Shakhnarovich Title:Pixel Consensus Voting for Panoptic SegmentationAbstract:opens in new windowin html    pdfopens in new window

I will present a new approach for image parsing, Pixel Consensus Voting (PCV). The core of PCV is a framework for instance segmentation based on the Generalized Hough transform. Pixels cast discretized, probabilistic votes for the likely regions that contain instance centroids. At the detected peaks that emerge in the voting heatmap, backprojection is applied to collect pixels and produce instance masks. Unlike a sliding window detector that densely enumerates object proposals, our method detects instances as a result of the consensus among pixel-wise votes. We implement vote aggregation and backprojection using native operators of a convolutional neural network. The discretization of centroid voting reduces the training of instance segmentation to pixel labeling, analogous and complementary to fully convolutional network-style semantic segmentation, leading to an efficient and unified architecture that jointly models things and stuff. We demonstrate the effectiveness of our pipeline on COCO and Cityscapes Panoptic Segmentation and obtain competitive results. This is joint work with Haochen Wang (TTIC/CMU), Ruotian Luo (TTIC), Michael Maire (TTIC/Uchicago), and received an Innovation Award at the COCO/Mapillary workshop at ICCV 2019.

ThursdayDec 26, 201911:00
Faculty Seminar
Speaker:Ran Ben Basat Title:Leveraging Programmable Switches for In-network ComputingAbstract:opens in new windowin html    pdfopens in new window De Picciotto Building, Meeting Room 25

The network line rate is constantly on the rise to support the exploding amounts of data. This means that we have less time to process individual packets, despite a growing demand for better network telemetry. Moreover, CPU speeds are not rising at the same rate as we near the end of Moore's law, making it harder to rely on software computations. Programmable hardware switches are an emerging technology that enables flexible packet processing while optimizing for throughput and latency.

In this talk, I will introduce algorithms that leverage programmable switches for accelerating database operations and for improving network telemetry. Switches are orders of magnitude more efficient than traditional hardware accelerators, exist in the datapath, and are ideal for computation offloading. For telemetry, we will discuss how switches can probabilistically encode information across multiple packets to provide fine-grained network visibility with minimal overheads.

MondayDec 23, 201911:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Yael Tauman Kalai Title:Non-Interactive Publicly Verifiable Delegation with Applications to PPAD HardnessAbstract:opens in new windowin html    pdfopens in new window

We construct a delegation scheme for delegating polynomial time computations. Our scheme is publicly verifiable and non-interactive in the common reference string (CRS) model. The soundness of our scheme is based on an efficiently falsifiable decisional assumption on groups with bilinear maps. Prior to this work, publicly verifiable non-interactive delegation schemes were only known under knowledge assumptions (or in the Random Oracle model) or under non-standard assumptions related to obfuscation or multilinear maps.

In addition, our scheme has two desirable features: The proofs are unambiguous, in the sense that it is hard to find two distinct proofs for the same statement, and are updatable in the sense that given a proof for the statement that a Turing machine M transitions from configuration C_0 to C_T in T steps, one can efficiently generate a proof for the statement that M transitions from configuration C_0 to C_{T+1} in T+1 steps.

We show that such a delegation scheme implies PPAD hardness, by following a similar approach to that of Choudhuri et al. (STOC2019), who obtained PPAD hardness based on an assumption related to the soundness of the Fiat-Shamir paradigm.

 

This is based on two joint works, both with Omer Paneth and Lisa Yang.

SundayDec 22, 201913:30
Computer Science SeminarRoom 155
Speaker:Hilal AsiTitle:Nearly Instance-Optimal Mechanisms in Differential PrivacyAbstract:opens in new windowin html    pdfopens in new windowSPECIAL FOUNDATIONS OF COMPUTER SCIENCE SEMINAR

We develop differentially private mechanisms that achieve nearly instance-optimal losses, lower than those of all appropriately unbiased mechanisms on any possible instance. We show that our mechanisms, with a modest increase in sample size (logarithmic or constant), are instance-optimal for a large family of functions. In contrast to existing mechanisms, which use the global or local sensitivity of the function being estimated and are therefore necessarily instance-suboptimal, the key to our construction is to use the inverse of the sensitivity. This yields a simple instance-optimal algorithm, and we develop several representative private mechanisms, including for the median and for regression problems.
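A minimal sketch of the inverse-sensitivity idea for the median (a toy illustration under simplifying assumptions, namely a fixed finite public output grid and a plain exponential mechanism; this is not the mechanism developed in the work): the score of a candidate output is roughly the number of data points that would have to change for it to become the median, and candidates are sampled with probability proportional to exp(-eps * score / 2).

    import numpy as np

    def inverse_sensitivity_median(data, candidates, eps, rng):
        """Toy inverse-sensitivity mechanism: privately sample an approximate median."""
        data = np.sort(np.asarray(data, dtype=float))
        n = len(data)
        # Number of data points lying strictly below each candidate output.
        below = np.searchsorted(data, candidates, side="left")
        # Roughly how many points must move across the candidate for it to become the median.
        dist = np.abs(below - n // 2)
        weights = np.exp(-eps * dist / 2.0)
        probs = weights / weights.sum()
        return rng.choice(candidates, p=probs)

    rng = np.random.default_rng(1)
    data = rng.normal(loc=5.0, scale=2.0, size=1000)   # hypothetical private dataset
    candidates = np.linspace(0.0, 10.0, 101)           # assumed public output grid
    print(inverse_sensitivity_median(data, candidates, eps=1.0, rng=rng))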

ThursdayDec 19, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Esty Kelman & Asaf KatzTitle:DOUBLE SEMINARAbstract:opens in new windowin html    pdfopens in new windowDOUBLE SEMINAR

Esty Kelman [TAU]   Title: Boolean Functions and Their Effective Degree

Abstract: The KKL Theorem (there is always a significantly influential variable) is sometimes tight, as in the Tribes function, but many times is not tight, as in the Majority function. We propose a generalized version of KKL, with a new parameter - the effective degree - which replaces the role of the average degree in the KKL statement. This allows us to prove a tight result in many cases, showing how large the influence of the most influential variable is. We also generalize KKL in another manner, finding a significantly influential variable within a subset of the variables. This generalization, in turn, implies a generalized version of the Friedgut Junta Theorem. If time allows, we will see how easily our Theorem implies the Friedgut Junta Theorem and stronger Junta results.
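For reference, the KKL theorem alluded to above, in its standard form (not the speaker's formulation): for every Boolean function $f:\{0,1\}^n \to \{0,1\}$ there is a coordinate $i$ with influence $Inf_i(f) \ge c \cdot Var(f) \cdot \frac{\log n}{n}$ for a universal constant $c > 0$. The Tribes function shows this bound is tight, while for Majority the maximal influence is of order $1/\sqrt{n}$, far above it.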

 

Asaf Katz [U Chicago]  Title: Measure rigidity for Anosov flows via the factorization method

Abstract: We show how the factorization method, pioneered by Eskin and Mirzakhani in their groundbreaking work about measure classification for the moduli space of translation surfaces, can be adapted to smooth ergodic theory.
Using this adaption, we show that for a quantitatively non-integrable Anosov flow, every generalized u-Gibbs measure is absolutely continuous with respect to the whole unstable manifold.

 

ThursdayDec 19, 201912:15
Vision and AIRoom 1
Speaker:Tammy Riklin RavivTitle:Hue-Net: Intensity-based Image-to-Image Translation with Differentiable Histogram Loss FunctionsAbstract:opens in new windowin html    pdfopens in new window

In the talk I will present the Hue-Net - a novel Deep Learning framework for Intensity-based Image-to-Image Translation.

The key idea is a new technique we term network augmentation which allows a differentiable construction of intensity histograms from images.

We further introduce differentiable representations of (1D) cyclic and joint (2D) histograms and use them for defining loss functions based on cyclic Earth Mover's Distance (EMD) and Mutual Information (MI). While the Hue-Net can be applied to several image-to-image translation tasks, we choose to demonstrate its strength on color transfer problems, where the aim is to paint a source image with the colors of a different target image. Note that the desired output image does not exist and therefore cannot be used for supervised pixel-to-pixel learning.

This is accomplished by using the HSV color-space and defining an intensity-based loss that is built on the EMD between the cyclic hue histograms of the output and the target images. To enforce color-free similarity between the source and the output images, we define a semantic-based loss by a differentiable approximation of the MI of these images.  

The incorporation of histogram loss functions in addition to an adversarial loss enables the construction of semantically meaningful and realistic images.

Promising results are presented for different datasets.
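A minimal sketch of a differentiable ("soft") intensity histogram of the kind the framework relies on, together with a simple 1D EMD between histograms (written with numpy for readability; actual training would use an autodiff framework, the kernel width and bin layout are assumptions, and the cyclic variant used for hue is not shown):

    import numpy as np

    def soft_histogram(values, n_bins=32, sigma=0.02):
        """Differentiable histogram: each value spreads a Gaussian weight over all bins
        instead of a hard 0/1 assignment, so gradients can flow back to the pixel values."""
        values = np.asarray(values, dtype=float).ravel()      # e.g. an intensity channel in [0, 1]
        centers = (np.arange(n_bins) + 0.5) / n_bins          # bin centers in [0, 1]
        weights = np.exp(-((values[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
        weights /= weights.sum(axis=1, keepdims=True)         # each pixel distributes total mass 1
        hist = weights.sum(axis=0)
        return hist / hist.sum()

    def emd_1d(p, q):
        """Earth Mover's Distance between two 1D histograms on the same (non-cyclic) bins."""
        return np.abs(np.cumsum(p - q)).sum() / len(p)

    img_a = np.random.rand(64, 64)   # stand-ins for intensity channels of two images
    img_b = np.random.rand(64, 64)
    print(emd_1d(soft_histogram(img_a), soft_histogram(img_b)))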

ThursdayDec 19, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 208
Speaker:Max Gurevich Title:Robinson-Schensted-Knuth correspondence at the service of p-adic GL_nAbstract:opens in new windowin html    pdfopens in new windowNOTE THE UNUSUAL DAY AND ROOM

In a joint work with Erez Lapid we constructed a new class of representations based on applying the RSK algorithm to Zelevinsky's multisegments. These constructions have the potential to be an alternative to the commonly used basis of standard representations. Intriguingly, this class also turned out to categorify a 45-year-old development in invariant theory: the Rota basis of standard bitableaux.
I will talk about this classical theme and its relation to representations of p-adic GL_n, as well as the expected properties of our new class.

TuesdayDec 17, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Yuri Zarhin Title:Jordan properties of automorphism groups of varietiesAbstract:opens in new windowin html    pdfopens in new window

A classical theorem of Jordan asserts that each finite subgroup of the complex general linear group GL(n) is "almost commutative": it contains a commutative normal subgroup with index bounded by a universal constant that depends only on n.

We discuss an analogue and variants of this property for the groups of birational (and biregular) automorphisms of complex algebraic varieties, the diffeomorphism groups of real manifolds, and the groups of bimeromorphic (and biholomorphic) automorphisms of compact complex manifolds.

MondayDec 16, 201911:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Gil CohenTitle:Pseudo-random pseudo-distributionsAbstract:opens in new windowin html    pdfopens in new window

In this talk, we will discuss a new type of a pseudo-random object called a "pseudo-random pseudo-distribution". This object was introduced in the context of the BPL vs. L problem, and I will sketch a space-efficient construction of the latter for read-once branching programs that has near-optimal dependence on the error parameter. The talk is a distillation of a joint work with Mark Braverman and Sumegha Garg (the paper is available online: https://eccc.weizmann.ac.il/report/2017/161/).

SundayDec 15, 201911:15
Faculty SeminarRoom 155
Speaker:Mor Nitzan Title:Representation, inference and design of multicellular systemsAbstract:opens in new windowin html    pdfopens in new window

The past decade has witnessed the emergence of single-cell technologies that measure the expression level of genes at a single-cell resolution. These developments have revolutionized our understanding of the rich heterogeneity, structure, and dynamics of cellular populations, by probing the states of millions of cells, and their change under different conditions or over time. However, in standard experiments, information about the spatial context of cells, along with additional layers of information they encode about their location along dynamic processes (e.g. cell cycle or differentiation trajectories), is either lost or not explicitly accessible. This poses a fundamental problem for elucidating collective tissue function and mechanisms of cell-to-cell communication.
In this talk I will present computational approaches for addressing these challenges, by learning interpretable representations of structure, context and design principles for multicellular systems from single-cell information. I will first describe how the locations of cells in their tissue of origin and the resulting spatial gene expression can be probabilistically inferred from single-cell information by a generalized optimal-transport optimization framework, that can flexibly incorporate prior biological assumptions or knowledge derived from experiments. Inference in this case is based on an organization principle for spatial gene expression, namely a structural correspondence between distances of cells in expression and physical space, which we hypothesized and supported for different tissues. We used this framework to spatially reconstruct diverse tissues and organisms, including the fly embryo, mammalian intestinal epithelium and cerebellum, and further inferred spatially informative genes. Since cells encode multiple layers of information, in addition to their spatial context, I will also discuss several approaches for the disentanglement of single-cell gene expression into distinct biological processes, based on ideas rooted in random matrix theory and manifold learning. I will finally discuss how these results can be generalized to reveal principles underlying self-organization of cells into multicellular structures, setting the foundation for the computationally-directed design of cell-to-cell interactions optimized for specific tissue structure or function.

ThursdayDec 12, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Naomi Feldheim Title:Existence of persistence exponent for Gaussian stationary functionsAbstract:opens in new windowin html    pdfopens in new window

Let Z(t) be a Gaussian stationary function on the real line, and fix a level L>0.

We are interested in the asymptotic behavior of the persistence probability: P(T) = P( Z(t) > L, for all t in [0,T] ).
One would guess that for "nice processes" the behavior of P(T) should be exponential. For non-negative correlations this may be established via sub-additivity arguments. However, so far, not a single example with sign-changing correlations was known to exhibit existence of the limit of $\log P(T) / T$ as T approaches infinity (that is, to have a true "persistence exponent").
In the talk I will present a proof of existence of the persistence exponent, for processes whose spectral measure is monotone on [0,∞) and is continuous and non-vanishing at 0. This includes, for example, the sinc-kernel process (whose covariance function is sin(t)/t ).
Joint work with Ohad Feldheim and Sumit Mukherjee.
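In the notation of the abstract, the quantity whose existence is proved is the persistence exponent $\theta_L = -\lim_{T \to \infty} \frac{1}{T} \log P(T)$, i.e. the limit whose existence had previously been open for processes with sign-changing correlations (statement reformulated here for convenience).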

MondayDec 09, 201911:15
Computer Science SeminarRoom 155
Speaker:Karthik C.STitle:Clustering: How hard is it to classify data? Abstract:opens in new windowin html    pdfopens in new windowSPECIAL FOUNDATIONS OF COMPUTER SCIENCE SEMINAR

Two popular objectives optimized in clustering algorithms are k-means and k-median. The k-means (resp. k-median) problem in the L_p-metric is specified by n input points, and the goal is to classify the input point-set into k clusters such that the k-means (resp. k-median) objective is minimized. The best-known inapproximability factors in the literature for these NP-hard problems in L_p-metrics were all well below 1.01. In this talk, we take a significant step towards improving the hardness of approximating these problems in various L_p-metrics.
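For concreteness, the two objectives in question, in the form usually studied (standard definitions): given points $x_1, \dots, x_n$ and the $L_p$-metric, one seeks $k$ centers $c_1, \dots, c_k$ minimizing $\sum_{i=1}^n \min_{1 \le j \le k} \|x_i - c_j\|_p^2$ for k-means, or $\sum_{i=1}^n \min_{1 \le j \le k} \|x_i - c_j\|_p$ for k-median.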

WednesdayDec 04, 201911:15
Vision and AIRoom 1
Speaker:Liad Pollak Title:Across Scales & Across Dimensions: Temporal Super-Resolution using Deep Internal LearningAbstract:opens in new windowin html    pdfopens in new windowPLEASE NOTE THE UNUSUAL DAY AND TIME

When a very fast dynamic event is recorded with a low-framerate camera, the resulting video suffers from severe motion blur (due to exposure time) and motion aliasing (due to low sampling rate in time). True Temporal Super-Resolution (TSR) is more than just Temporal-Interpolation (increasing framerate). It also recovers new high temporal frequencies beyond the temporal Nyquist limit of the input video, thus resolving both motion-blur and motion-aliasing. In this work we propose a "Deep Internal Learning" approach for true TSR. We train a video-specific CNN on examples extracted directly from the low-framerate input video. Our method exploits the strong recurrence of small space-time patches inside a single video sequence, both within and across different spatio-temporal scales of the video. We further observe (for the first time) that small space-time patches recur also across dimensions of the video sequence - i.e., by swapping the spatial and temporal dimensions. In particular, the higher spatial resolution of video frames provides strong examples as to how to increase the temporal resolution of that video. Such internal video-specific examples give rise to strong self-supervision, requiring no data but the input video itself. This results in Zero-Shot Temporal-SR of complex videos, which removes both motion blur and motion aliasing, outperforming previous supervised methods trained on external video datasets.

* Joint work with Shai Bagon, Eyal Naor, George Pisha, Michal Irani

WednesdayDec 04, 201909:30
Algebraic Geometry and Representation Theory SeminarRoom 1
Speaker:Howard NuerTitle:Cubic Fourfolds: Rationality and Derived CategoriesAbstract:opens in new windowin html    pdfopens in new windowNOTE THE UNUSUAL DAY AND TIME

The question of determining if a given algebraic variety is rational is a notoriously difficult problem in algebraic geometry, and attempts to solve rationality problems have often produced powerful new techniques. A well-known open rationality problem is the determination of a criterion for when a cubic hypersurface of five-dimensional projective space is rational. After discussing the history of this problem, I will introduce the two conjectural rationality criteria that have been put forth and then discuss a package of tools I have developed with my collaborators to bring these two conjectures together. Our theory of Relative Bridgeland Stability has a number of other beautiful consequences, such as a new proof of the integral Hodge Conjecture for Cubic Fourfolds and the construction of full-dimensional families of projective HyperKahler manifolds. Time permitting, I'll discuss applications of the theory of relative stability conditions to problems other than cubic fourfolds.

TuesdayDec 03, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Eyal Kaplan Title:The generalized doubling method: multiplicity one and its applications Abstract:opens in new windowin html    pdfopens in new window

The doubling method, first introduced by Piatetski-Shapiro and Rallis in the 80s, has had numerous applications, e.g. to the theta correspondence and to arithmetic problems. In a series of recent works this method was generalized in several aspects, with an application to functoriality from classical groups to GL(N).

One crucial ingredient for the development of the theory is a multiplicity one result, obtained recently in a joint work with Dima and Rami.

I will briefly survey the method, discuss the multiplicity one result, and talk about applications to covering groups.

Parts of the talk are also based on a collaboration with Cai, Friedberg and Ginzburg.

MondayDec 02, 201911:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Seffi NaorTitle:Online Algorithms via ProjectionsAbstract:opens in new windowin html    pdfopens in new window

We present a new/old approach to the design of online algorithms via Bregman projections. This approach is applicable to a wide range of online problems and we discuss connections to previous work on online primal-dual algorithms. In particular, the k-server problem on trees and HSTs is considered. The projection-based algorithm for this problem turns out to have a competitive ratio that matches some of the recent results given by Bubeck et al. (STOC 2018), whose algorithm uses mirror-descent-based continuous dynamics prescribed via a differential inclusion.
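For reference, the projections in question are Bregman projections (standard definitions, not specific to this work): for a strictly convex, differentiable potential $\phi$, the Bregman divergence is $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y\rangle$, and the Bregman projection of a point $y$ onto a convex set $K$ is $\arg\min_{x \in K} D_\phi(x, y)$. Roughly speaking, the online algorithm maintains a fractional solution and, as constraints arrive, replaces it by its Bregman projection onto the new feasible region.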

Joint work with Niv Buchbinder, Anupam Gupta, and Marco Molinaro.

SundayDec 01, 201911:15
Faculty SeminarRoom 155
Speaker:Ilya GekhtmanTitle:Gibbs measures vs. random walks in negative curvatureAbstract:opens in new windowin html    pdfopens in new window

The ideal boundary of a negatively curved manifold naturally carries two types of measures.
On the one hand, we have conditionals for equilibrium (Gibbs) states associated to Hoelder potentials; these include the Patterson-Sullivan measure and the Liouville measure. On the other hand, we have stationary measures coming from random walks on the fundamental group.
We compare and contrast these two classes. First, we show that both of these measures can be associated to geodesic flow invariant measures on the unit tangent bundle, with respect to which closed geodesics satisfy different equidistribution properties. Second, we show that the absolute continuity between a harmonic measure and a Gibbs measure is equivalent to a relation between entropy, (generalized) drift and critical exponent, generalizing previous formulas of Guivarc'h, Ledrappier, and Blachere-Haissinsky-Mathieu. This shows that if the manifold (or more generally, a CAT(-1) quotient) is geometrically finite but not convex cocompact, stationary measures are always singular with respect to Gibbs measures.

A major technical tool is a generalization of a deviation inequality due to Ancona, saying that the so-called Green distance associated with the random walk is nearly additive along geodesics in the universal cover.
Part of this is based on joint work with Gerasimov-Potyagailo-Yang and part on joint work with Tiozzo.

ThursdayNov 28, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Amir Dembo & Alexander Fish Title:DOUBLE SEMINARAbstract:opens in new windowin html    pdfopens in new window

Amir Dembo (Stanford) "Dynamics for spherical spin glasses: Disorder dependent initial conditions"

Abstract: In this talk, based on a joint work with Eliran Subag, I will explain how to rigorously derive the integro-differential equations that arise in the thermodynamic limit of the empirical correlation and response functions for Langevin dynamics in mixed spherical p-spin disordered mean-field models.
I will then compare the large time asymptotic of these equations in case of a uniform (infinite-temperature) starting point, to what one obtains when starting within one of the spherical bands on which the Gibbs measure  concentrates at low temperature, commenting on the existence of an aging phenomenon, and on the relations with the recently discovered geometric structure of the Gibbs measures at low temperature.

Alexander Fish (Sydney)  "Finite configurations in trees of positive growth rate"

Abstract: We will talk about the relation between the abundance of finite configurations that we observe in trees and their growth rate.
We will survey the Furstenberg-Weiss correspondence principle, which relates a tree of positive growth rate with a Markov process and subsequently provides a quantitative relationship between the density of a finite subtree that appears in the tree and a quantity defined on the Markov space, which can be estimated by ergodic methods. We will sketch a proof of one direct and one inverse theorem relating the abundance of certain finite configurations and the growth rate of a tree. Some open problems related to this work will be discussed. Based on a joint work with Leo Jiang (Toronto).

ThursdayNov 28, 201912:15
Vision and AIRoom 155
Speaker:Sefi Bell KliglerTitle:Blind Super-Resolution Kernel Estimation using an Internal-GANAbstract:opens in new windowin html    pdfopens in new windowPLEASE NOTE THE UNUSUAL PLACE

Super resolution (SR) methods typically assume that the low-resolution (LR) image was downscaled from the unknown high-resolution (HR) image by a fixed "ideal" downscaling kernel (e.g. Bicubic downscaling). However, this is rarely the case in real LR images, in contrast to synthetically generated SR datasets. When the assumed downscaling kernel deviates from the true one, the performance of SR methods significantly deteriorates. This gave rise to Blind-SR - namely, SR when the downscaling kernel ("SR-kernel") is unknown. It was further shown that the true SR-kernel is the one that maximizes the recurrence of patches across scales of the LR image. In this paper we show how this powerful cross-scale recurrence property can be realized using Deep Internal Learning. We introduce "KernelGAN", an image-specific Internal-GAN, which trains solely on the LR test image at test time, and learns its internal distribution of patches. Its Generator is trained to produce a downscaled version of the LR test image, such that its Discriminator cannot distinguish between the patch distribution of the downscaled image, and the patch distribution of the original LR image. The Generator, once trained, constitutes the downscaling operation with the correct image-specific SR-kernel. KernelGAN is fully unsupervised, requires no training data other than the input image itself, and leads to state-of-the-art results in Blind-SR when plugged into existing SR algorithms.
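A heavily simplified sketch of the KernelGAN training setup (PyTorch-style code written for illustration only; the network sizes, crop sizes and 2x scale factor are assumptions, and the real method adds kernel regularization terms not shown here):

    import torch
    import torch.nn as nn

    # Generator: a small linear conv net (no activations, no bias) whose stride-2 output is a
    # candidate 2x downscaling of its input; being linear, it collapses to a single downscaling kernel.
    G = nn.Sequential(
        nn.Conv2d(1, 64, 7, padding=3, bias=False),
        nn.Conv2d(64, 64, 5, padding=2, bias=False),
        nn.Conv2d(64, 1, 5, stride=2, padding=2, bias=False),
    )
    # Discriminator: a patch discriminator scoring local patch statistics.
    D = nn.Sequential(
        nn.Conv2d(1, 64, 7, padding=3), nn.ReLU(),
        nn.Conv2d(64, 64, 1), nn.ReLU(),
        nn.Conv2d(64, 1, 1), nn.Sigmoid(),
    )
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    lr_image = torch.rand(1, 1, 128, 128)     # stand-in for the single low-resolution test image

    def random_crop(img, size):
        _, _, h, w = img.shape
        y = torch.randint(0, h - size + 1, (1,)).item()
        x = torch.randint(0, w - size + 1, (1,)).item()
        return img[:, :, y:y + size, x:x + size]

    for step in range(1000):
        big = random_crop(lr_image, 64)       # input patch taken from the LR image
        real = random_crop(lr_image, 32)      # "real" patch: the LR image's own patch distribution
        fake = G(big)                         # generated 32x32 "downscaled" patch

        opt_d.zero_grad()
        pred_real, pred_fake = D(real), D(fake.detach())
        d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
        d_loss.backward()
        opt_d.step()

        opt_g.zero_grad()
        pred = D(fake)                        # generator tries to match the cross-scale patch distribution
        g_loss = bce(pred, torch.ones_like(pred))
        g_loss.backward()
        opt_g.step()
    # After training, the linear generator G encodes the image-specific SR-kernel.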

MondayNov 25, 201909:00
Seminar in Geometry and TopologyRoom 155
Speaker:Mikhail ZaidenbergTitle:Zariski cancellation problem for surfacesAbstract:opens in new windowin html    pdfopens in new window

The Zariski Cancellation Problem asks when a stable isomorphism of affine varieties over an algebraically closed field implies an isomorphism. This is true for affine curves (Abhyankar, Eakin, and Heinzer 72), for the affine plane in zero characteristic (Miyanishi-Sugie and Fujita 79 −80), but false for general affine surfaces in zero characteristic (Danielewski 88) and for the affine space A^3 in positive characteristic (N. Gupta 13). The talk is devoted to a recent progress in the surface case over a field of zero characteristic (Bandman-Makar-Limanov, Dubouloz, Flenner and Kaliman, e.a.). It occurs to be possible to describe the moduli space of pairs of surfaces with isomorphic cylinders.

SundayNov 24, 201911:15
Faculty SeminarRoom 155
Speaker: Arie Levit Title:On Benjamini-Schramm convergence Abstract:opens in new windowin html    pdfopens in new window

Benjamini-Schramm convergence is a probabilistic notion useful in studying the asymptotic behavior of sequences of metric spaces. The goal of this talk is to discuss this notion and some of its applications from various perspectives, e.g. for groups, graphs, hyperbolic manifolds and locally symmetric spaces, emphasizing the distinction between the hyperbolic rank-one case and the rigid high-rank case. Understanding the "sofic" part of the Benjamini-Schramm space, i.e. all limit points of "finitary" objects, will play an important role. From the group-theoretic perspective, I will talk about sofic groups, i.e. groups which admit a probabilistic finitary approximation, as well as a companion notion of permutation stability. Several results and open problems will also be discussed.

ThursdayNov 21, 201912:15
Vision and AIRoom 1
Speaker:Ohad Fried Title:Tools for visual expression and communication.Abstract:opens in new windowin html    pdfopens in new window

Photos and videos are now a main mode of communication, used to tell stories, share experiences and convey ideas. However, common media editing tools are often either too complex to master, or oversimplified and limited. 
In this talk I will present my strategy towards the creation of media editing techniques that are easy to learn, yet expressive enough to reflect unique creative objectives. We will mostly discuss one specific domain --- human heads --- which are both extremely common (i.e. people care about people) and technologically challenging. I will present several works on editing video by editing text, perspective manipulation, and in-camera lighting feedback. I will also discuss exciting future opportunities related to neural rendering and digital representations of humans.

BIO
Ohad Fried is a postdoctoral research scholar at Stanford University. His work lies in the intersection of computer graphics, computer vision, and human-computer interaction. He holds a PhD in computer science from Princeton University, and an M.Sc. in computer science and a B.Sc. in computational biology from The Hebrew University. Ohad's research focuses on tools, algorithms, and new paradigms for photo and video editing. Ohad is the recipient of several awards, including a Siebel Scholarship and a Google PhD Fellowship. If you own a cable modem, there's a non-negligible chance that Ohad's code runs within it, so feel free to blame him for your slow internet connection.
www.ohadf.com
 

WednesdayNov 20, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Nathan Srebro Title:Optimization's Hidden Gift to Learning: Implicit RegularizationAbstract:opens in new windowin html    pdfopens in new window

It is becoming increasingly clear that the implicit regularization afforded by optimization algorithms plays a central role in machine learning, especially so when using large, deep neural networks. We have a good understanding of the implicit regularization afforded by stochastic approximation algorithms, such as SGD, for convex problems; we understand and can characterize the implicit bias of different algorithms, and can design algorithms with specific biases. But in this talk I will focus on the implicit biases of local search algorithms for non-convex underdetermined problems, such as deep networks. In an effort to uncover the implicit biases of gradient-based optimization of neural networks, which hold the key to their empirical success, I will discuss recent work on implicit regularization for matrix factorization, linear convolutional networks, and two-layer ReLU networks, as well as a general bottom-up understanding of implicit regularization in terms of optimization geometry.
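A tiny numerical illustration of the kind of implicit bias discussed above (a toy example, not from the talk): on an underdetermined least-squares problem, gradient descent initialized at zero converges to the minimum-norm interpolating solution, even though nothing in the loss asks for small norm.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 20, 100                        # fewer equations than unknowns: infinitely many zero-loss solutions
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    w = np.zeros(d)                       # initialization at the origin matters for this bias
    lr = 0.01
    for _ in range(20000):
        w -= lr * X.T @ (X @ w - y) / n   # plain gradient descent on the squared loss

    w_min_norm = np.linalg.pinv(X) @ y    # the minimum L2-norm interpolant
    print(np.abs(X @ w - y).max())        # ~0: both interpolate the data
    print(np.linalg.norm(w - w_min_norm)) # ~0: gradient descent picked the minimum-norm one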

WednesdayNov 20, 201910:00
Faculty SeminarRoom 1
Speaker:Nir ShlezingerTitle:Exploiting Tasks, Structures, and AI in Digital Communications ReceiversAbstract:opens in new windowin html    pdfopens in new windowNOTE THE UNUSUAL DAY AND PLACE

The broad range of demands which next generation communications systems are required to meet, spanning from increased rates to efficient power consumption, cannot be satisfied by conventional architectures. These requirements give rise to a multitude of new challenges in modern communications and signal processing systems. In this talk we focus on two fundamental aspects of digital communications receivers - signal acquisition and symbol detection - discussing methods to improve their performance and reliability. For signal acquisition, we show how analog-to-digital converters can be designed in light of the overall system task, achieving optimal-approaching performance while utilizing efficient bit-constrained samplers. Then, we discuss how recent advances in machine learning can be exploited for symbol detection, a task which is typically addressed using channel-model-based methods, such as the Viterbi algorithm and interference cancellation methods. We present a new framework for combining artificial intelligence and model-based algorithms, allowing the latter to be implemented in a data-driven manner. The resulting architectures are capable of approaching the established performance of model-based algorithms, while being invariant of the underlying statistical model, and learning it from a small amount of training samples.

TuesdayNov 19, 201916:00
Seminar in Geometry and TopologyRoom 1
Speaker:Gaetan Borot Title:Topological recursion for Masur-Veech volumesAbstract:opens in new windowin html    pdfopens in new window

The moduli space of quadratic differentials on punctured Riemann surfaces admits an integral piecewise linear structure. It gives rise to a measure whose total mass is finite and is called the Masur-Veech volume.

These volumes are also related to the asymptotic growth of the number of multicurves on hyperbolic surfaces, once averaged against the Weil-Petersson metric. Adopting this point of view, we show that the Masur-Veech volumes can be obtained as the constant term in a family of polynomials computed by the topological recursion, and as the integration of a class on the moduli space of curves. Based on work of Moeller, we can prove the same statement but for a different family of polynomials (and different initial data for the topological recursion). This is based on joint works with Jorgen Andersen, Severin Charbonnier, Vincent Delecroix, Alessandro Giacchetto, Danilo Lewanski and Campbell Wheeler.

TuesdayNov 19, 201911:15
Special Guest LectureRoom 1
Speaker:Professor Yakov Pesin Title:Smooth models of systems with complicated stochastic behaviorsAbstract:opens in new windowin html    pdfopens in new window

To describe different levels of stochastic behavior, Ergodic Theory introduces a hierarchy of ergodic properties. Among them, ergodicity, mixing and the Bernoulli property are most used to study models in science with complicated "turbulent-like" behavior. However, "typical" systems in science are modeled by smooth (differentiable) dynamical systems acting on smooth phase spaces, and the classical "smooth realization problem" asks whether any smooth phase space allows a dynamical system with a prescribed collection of ergodic (stochastic) properties, in other words whether the topology of the phase space may impose obstructions on a system to exhibit certain stochastic behavior. In this connection I also discuss two more important characteristics of stochastic behavior -- the rate of decay of correlations (rate of mixing) and the Central Limit Theorem -- and whether they too can be realized on any smooth phase space.

MondayNov 18, 201911:15
Foundations of Computer Science ColloquiumRoom 155
Speaker:Uri StemmerTitle:Privately Learning Thresholds: Closing the Exponential GapAbstract:opens in new windowin html    pdfopens in new window

We study the sample complexity of learning threshold functions under the constraint of differential privacy. Unlike the non-private case, where the sample complexity is independent of the domain size, it turns out that for private learning the sample complexity must depend on the size of the domain $X$. Our current understanding of this task places its sample complexity somewhere between $\log^*|X|$ and $2^{\log^*|X|}$, where at least three different algorithms are known with sample complexity exponential in $\log^*|X|$. In this work we reduce this gap significantly, and show that the sample complexity is at most polynomial in $\log^*|X|$.
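Here $\log^* x$ denotes the iterated logarithm, $\log^* x = \min\{i \ge 0 : \log^{(i)} x \le 1\}$, i.e. the number of times the logarithm must be applied before the result drops to at most 1; it grows extremely slowly, so the gap between $\log^*|X|$ and $2^{\log^*|X|}$, while exponential, involves tiny quantities.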

Joint work with Haim Kaplan, Katrina Ligett, Yishay Mansour, and Moni Naor.

MondayNov 18, 201909:00
Seminar in Geometry and TopologyRoom 155
Speaker:Sara Tukachinsky Title:Open Gromov-Witten invariants and underlying structuresAbstract:opens in new windowin html    pdfopens in new window

For X a symplectic manifold and L a Lagrangian submanifold, genus zero open Gromov-Witten (OGW) invariants count configurations of pseudoholomorphic disks in X with boundary conditions in L and various constraints at boundary and interior marked points. In a joint work with Jake Solomon from 2016, we define OGW invariants using bounding chains, a concept that comes from Floer theory. In a recent work, also joint with Solomon, we find that the generating function of OGW satisfies a system of PDE called open WDVV equation. This PDE translates to an associativity relation for a quantum product we define on the relative cohomology H^*(X,L). For the projective space, open WDVV gives rise to recursions that, together with other properties, allow the computation of all OGW invariants.

SundayNov 17, 201911:15
Faculty SeminarRoom 155
Speaker:Sarah Keren Title:Automated Design of Better Environments for Intelligent AgentsAbstract:opens in new windowin html    pdfopens in new window

Automated agents and humans increasingly interact and collaborate: at home, at the workplace, on the road, and in many other everyday settings. In all these settings, effectively recognizing what users try to achieve, providing relevant assistance (or, depending on the application, taking relevant preventive measures), and supporting an effective collaboration in the system are essential tasks. All these tasks can be enhanced via efficient system redesign, and often even subtle changes can yield great benefits. However, since these systems are typically complex, hand crafting good design solutions is hard.

Utility Maximizing Design (UMD) addresses this challenge by automating the design process. It does so by offering informed search strategies to automatically and efficiently find optimal design solutions for maximizing a variety of targeted objectives. One example is Goal Recognition Design (GRD), which seeks a redesign of an environment that minimizes the maximal progress an agent can make before its goal is revealed. A second is Equi-Reward Utility Maximizing Design (ER-UMD), which seeks to maximize the performance of a planning agent in a stochastic environment. The third, Reinforcement Learning Design (RLD), parallels ER-UMD, but considers an environment with a reinforcement learning agent.

Among the different ways that may be available to change a setting, the talk will focus on information shaping, which consists of selectively revealing information to a partially informed agent in order to maximize its performance. As an example application, I will demonstrate how information shaping can be applied to enhance the performance of a robot that is navigating in an unfamiliar environment. The talk will conclude with a discussion of future extensions and various applications related to the agenda of Utility Maximizing Design, including human-robot collaboration, intrusion detection, and assisted cognition.

ThursdayNov 14, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Jonathan Hermon Title:Random Cayley graphsAbstract:opens in new windowin html    pdfopens in new window

We consider the random Cayley graph of a finite group $G$ formed by picking $k$ random generators uniformly at random:

  1. We prove universality of cutoff (for the random walk) and a concentration of measure phenomenon in the Abelian setup (namely, that all but $o(|G|)$ elements lie at distance in $[R-o(R),R+o(R)]$ from the origin, where $R$ is the radius of the smallest ball in $Z^k$ containing at least $|G|$ lattice points), provided $k \gg 1$ is large in terms of the size of the smallest generating set of $G$. As conjectured by Aldous and Diaconis, the cutoff time is independent of the algebraic structure (it is given by the time at which the entropy of a random walk on $Z^k$ is $\log|G|$).
  2. We prove analogous results for the Heisenberg groups $H_{p,d}$ of $d \times d$ uni-upper triangular matrices with entries defined mod $p$, for $p$ prime and $d$ fixed or diverging slowly.
  3. Lastly, we resolve a conjecture of D. Wilson that if $G$ is a group of size at most $2^d$, then for all $k$ its mixing time in this model is as rapid as that of $Z_2^d$, and likewise that the slowest mixing $p$-group of a given size is $Z_p^d$.

(Joint work with Sam Thomas.)

TuesdayNov 12, 201916:00
Seminar in Geometry and TopologyRoom 1
Speaker:Sara Tukachinsky Title:Open Gromov-Witten invariants and underlying structuresAbstract:opens in new windowin html    pdfopens in new window

For X a symplectic manifold and L a Lagrangian submanifold, genus zero open Gromov-Witten (OGW) invariants count configurations of pseudoholomorphic disks in X with boundary conditions in L and various constraints at boundary and interior marked points. In a joint work with Jake Solomon from 2016, we define OGW invariants using bounding chains, a concept that comes from Floer theory. In a recent work, also joint with Solomon, we find that the generating function of OGW satisfies a system of PDE called open WDVV equation. This PDE translates to an associativity relation for a quantum product we define on the relative cohomology H^*(X,L). For the projective space, open WDVV gives rise to recursions that, together with other properties, allow the computation of all OGW invariants.

TuesdayNov 12, 201911:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Michael MargaliotTitle:Revisiting Totally Positive Differential Systems: A Tutorial and New ResultsAbstract:opens in new windowin html    pdfopens in new window

A matrix is called totally nonnegative (TN) if all its minors are nonnegative, and totally positive (TP) if all its minors are positive. Multiplying a vector by a TN matrix does not increase the number of sign variations in the vector. In a largely forgotten paper, Schwarz (1970) considered matrices whose exponentials are TN or TP. He also analyzed the evolution of the number of sign changes in the vector solutions of the corresponding linear system.
In a seemingly different line of research, Smillie (1984), Smith (1991), and others analyzed the stability of nonlinear tridiagonal cooperative systems by using the number of sign variations in the derivative vector as an integer-valued Lyapunov function.
We provide a tutorial on these fascinating research topics and show that they are intimately related. This allows us to derive generalizations of the results by Smillie (1984) and Smith (1991) while simplifying the proofs. This also opens the door to many new and interesting research directions.
Joint work with Eduardo D. Sontag, Northeastern University.
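A small sketch of the two notions at play (toy code for illustration only; the brute-force minor check is exponential and is only meant to make the definitions concrete):

    import numpy as np
    from itertools import combinations

    def all_minors_nonnegative(A, strict=False):
        """Check total nonnegativity (or total positivity if strict=True) by brute force."""
        A = np.asarray(A, dtype=float)
        n, m = A.shape
        for k in range(1, min(n, m) + 1):
            for rows in combinations(range(n), k):
                for cols in combinations(range(m), k):
                    minor = np.linalg.det(A[np.ix_(rows, cols)])
                    if minor < 0 or (strict and minor <= 0):
                        return False
        return True

    def sign_variations(v, tol=1e-12):
        """Number of sign changes in a vector, ignoring (near-)zero entries."""
        signs = [np.sign(x) for x in v if abs(x) > tol]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    A = np.array([[1.0, 1.0], [1.0, 2.0]])          # all minors positive, hence TP (and TN)
    print(all_minors_nonnegative(A, strict=True))    # True
    # Multiplying by a TN matrix does not increase the number of sign variations:
    print(sign_variations([1.0, -1.0]), sign_variations(A @ np.array([1.0, -1.0])))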
 

ThursdayOct 31, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Mikhail Ostrovskii Title:Geometry of transportation cost (a.k.a. Earth Mover or Wasserstein distance)Abstract:opens in new windowin html    pdfopens in new window

We consider (finitely supported) transportation problems on a metric space M. They form a vector space TP(M). The optimal transportation cost for such transportation problems is a norm on this space. This normed space is of interest for the theory of metric embeddings because the space M embeds into it isometrically. I am going to talk about geometry of such normed spaces. The most important questions for this talk are relations of these spaces with $L_1$ and $L_\infty$ spaces.
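In the notation of the abstract (standard definitions, restated for convenience): an element of TP(M) is a finitely supported real measure $f$ on $M$ with $f(M) = 0$, and its transportation cost norm can be written as $\|f\| = \inf\{\sum_j |b_j|\, d(u_j, v_j) : f = \sum_j b_j(\delta_{u_j} - \delta_{v_j})\}$, which by Kantorovich-Rubinstein duality equals $\sup\{\int g\, df : g \colon M \to \mathbb{R} \ \ 1\text{-Lipschitz}\}$. The isometric embedding of $M$ is $x \mapsto \delta_x - \delta_{x_0}$ for a fixed basepoint $x_0$.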

MondayOct 28, 201909:00
Seminar in Geometry and TopologyRoom 155
Speaker:Guillaume TaharTitle:Cantor-Bendixson rank of meromorphic differentialsAbstract:opens in new windowin html    pdfopens in new window

Meromorphic differentials define flat metrics with singularities on Riemann surfaces. Directions of geodesic segments between singularities form a closed subset of the circle. The Cantor-Bendixson rank of their set of directions is a descriptive set-theoretic measure of complexity. Drawing on a previous work of David Aulicino, we prove a sharp upper bound that depends only on the genus of the underlying topological surface.

ThursdayOct 24, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Sergey G. BobkovTitle:Transport bounds for empirical measuresAbstract:opens in new windowin html    pdfopens in new window

We will be discussing a Fourier-analytic approach to optimal matching between independent samples, with an elementary proof of the Ajtai-Komlos-Tusnady theorem.
The talk is based on a joint work with Michel Ledoux.  
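The theorem in question, recalled for convenience (standard statement): for $X_1,\dots,X_n$ and $Y_1,\dots,Y_n$ independent and uniform on the unit square, the expected cost of an optimal matching satisfies $\mathbb{E}\, \min_{\pi} \sum_{i=1}^n |X_i - Y_{\pi(i)}| \asymp \sqrt{n \log n}$, where the minimum runs over permutations $\pi$ of $\{1,\dots,n\}$; dimension two is the critical case in which the extra $\sqrt{\log n}$ factor appears.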

TuesdaySep 24, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Arkady Berenstein Title:Noncommutative Catalan numbersAbstract:opens in new windowin html    pdfopens in new window

The goal of my talk (based on joint work with Vladimir Retakh) is to introduce noncommutative analogs of Catalan numbers c_n which belong to the free Laurent polynomial algebra L_n in n generators. Our noncommutative Catalan numbers C_n admit interesting (commutative and noncommutative) specializations, one of them related to Garsia-Haiman (q,t)-versions, another -- to solving noncommutative quadratic equations. We also establish total positivity of the corresponding (noncommutative) Hankel matrices H_n and introduce two kinds of noncommutative binomial coefficients which are instrumental in computing the inverse of H_n and its positive factorizations, and other combinatorial identities involving C_n.
If time permits, I will explain the relationship of the C_n with the:

1. noncommutative Laurent Phenomenon, which was previously established for Kontsevich rank 2 recursions and all marked surfaces

2. noncommutative orthogonal polynomials, which can be viewed as noncommutative determinants of an extended matrix H_n.

ThursdaySep 19, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Jian-Rong Li Title:Quantum affine algebras and GrassmanniansAbstract:opens in new windowin html    pdfopens in new window

Let $\mathfrak{g}=\mathfrak{sl}_n$ and $U_q(\widehat{\mathfrak{g}})$ the corresponding quantum affine algebra. Hernandez and Leclerc proved that there is an isomorphism $\Phi$ from the Grothendieck ring $\mathcal{R}_{\ell}$ of a certain subcategory $\mathcal{C}_{\ell}$ of finite-dimensional $U_q(\widehat{\mathfrak{g}})$-modules to a certain quotient $\mathbb{C}[{\rm Gr}(n, n+\ell+1, \sim)]$ of a Grassmannian cluster algebra. We proved that this isomorphism induces an isomorphism $\widetilde{\Phi}$ from the monoid of dominant monomials to the monoid of semi-standard Young tableaux. Using this result, together with results of Qin and of Kashiwara, Kim, Oh, and Park, we show that every cluster monomial (resp. cluster variable) in a Grassmannian cluster algebra is of the form $ch(T)$ for some real (resp. prime real) rectangular semi-standard Young tableau $T$, where $ch(T)$ is a certain map obtained from a formula of Arakawa--Suzuki. We also translate Arakawa--Suzuki's formula to the setting of $q$-characters and apply it to study real modules, prime modules, and compatibility of cluster variables.
This is joint work with Wen Chang, Bing Duan, and Chris Fraser.

WednesdaySep 11, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 1
Speaker:Thorsten HeidersdorfTitle:How zero is zero?Abstract:opens in new windowin html    pdfopens in new window

The quotient of a monoidal category by its largest tensor ideal - given by the so-called negligible morphisms - is often a semisimple category.
I will introduce a generalization of the notion of negligible morphism for some monoidal categories and discuss the associated tensor ideals in the setting of Deligne categories and tilting modules for quantum groups and algebraic groups. It turns out that they are related to other notions from representation theory like modified dimensions and the a-function.

TuesdaySep 10, 201910:15
Algebraic Geometry and Representation Theory SeminarRoom 1
Speaker:Oded YacobiTitle:Representations of Coulomb branches and Gelfand-Tsetlin modulesAbstract:opens in new windowin html    pdfopens in new windownote unusual room

Let g be a Lie algebra of type ADE. To a pair of weights of g (one dominant, the other arbitrary) we associate a group G and a representation N consisting of framed quiver representations of the Dynkin diagram of g. From (G,N) we can construct two varieties. The Higgs branch is the categorical quotient of N by G, which in this case is the Nakajima quiver variety and has been studied for over 25 years. The Coulomb branch has a much more complicated definition that was only recently discovered by Braverman, Finkelberg, and Nakajima. There is a duality between these spaces, which is sometimes referred to as 3d mirror symmetry or symplectic duality.

In this talk I'll try to explain the definition of the Coulomb branch, and why you might care. I will discuss its deformation quantization, which appears naturally from the construction. I'll also describe our recent result, which provides an equivalence between representations of the deformation quantization and modules over a seemingly very different algebra that is defined combinatorially and arises in categorical representation theory. This equivalence has several interesting consequences, e.g. it provides a classification for the irreducible Gelfand-Tsetlin modules of gl(n), which was previously only known up to n=3.

 

This talk is based on https://arxiv.org/abs/1806.07519

SundaySep 08, 201911:15
Vision and AIRoom 1
Speaker:Trevor DarrellTitle:Adapting and Explaining Deep Learning for Autonomous SystemsAbstract:opens in new windowin html    pdfopens in new windowSpecial Seminar

Learning of layered or "deep" representations has recently enabled low-cost sensors for autonomous vehicles and efficient automated analysis of visual semantics in online media. But these models have typically required prohibitive amounts of training data, and thus may only work well in the environment they have been trained in.  I'll describe recent methods in adversarial adaptive learning that excel when learning across modalities and domains. Further, these models have been unsatisfying in their complexity--with millions of parameters--and their resulting opacity. I'll report approaches which achieve explainable deep learning models, including both introspective approaches that visualize compositional structure in a deep network, and third-person approaches that can provide a natural language justification for the classification decision of a deep model.

TuesdayAug 20, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 1
Speaker:Hans WenzlTitle:Classifying certain classes of braid representationsAbstract:opens in new windowin html    pdfopens in new windowNOTE CHANGE IN ROOM

Representations of braid groups have appeared in many different areas such as topology, statistical mechanics, conformal field theory, braided tensor categories and others. In order to compare these, intrinsic characterizations of such representations are desirable. These have been known for some time for representations in connection with vector representations of classical Lie types, in terms of Hecke algebras and so-called BMW algebras. We review these and show how these results can be extended to include more cases related to exceptional Lie types. In particular, we obtain new classes of braid representations where the images of the generators satisfy a cubic equation. Time permitting, we discuss applications of these results such as Schur-Weyl type duality theorems and classification of braided tensor categories.
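For reference, here is the standard algebraic setup (textbook facts, not the new results of the talk). The braid group $B_n$ has generators $\sigma_1,\dots,\sigma_{n-1}$ subject to

$$
\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1},\qquad
\sigma_i\sigma_j=\sigma_j\sigma_i\ \ (|i-j|\ge 2).
$$

In the Hecke-algebra representations attached to vector representations in type A, the images $R_i$ of the generators satisfy a quadratic relation $(R_i-q)(R_i+q^{-1})=0$, while the new classes of braid representations mentioned above instead satisfy a cubic relation $(R_i-\lambda_1)(R_i-\lambda_2)(R_i-\lambda_3)=0$ for suitable eigenvalues $\lambda_k$.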

TuesdayAug 13, 201914:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Stefano SoattoTitle:The Information in the Weights of a Deep Network, and its consequences for transfer learning and critical learning periodsAbstract:opens in new windowin html    pdfopens in new windowNOTE THE UNUSUAL TIME
The information in the weights of deep networks plays a key role in understanding their behavior. When used as a regularizer during training (either explicitly or implicitly) it induces generalization through the PAC-Bayes bound. Rather than being computed and minimized explicitly, it can be directly controlled by injecting noise in the weights, a process known as Information Dropout. It can also be shown that stochastic gradient descent (SGD), when taken to the continuous limit and interpreted in the Wasserstein metric, adds the information in the weights as an inductive bias, even if it is not explicitly present in the loss function. The Emergence Bound shows that, provided a trained network has sufficient capacity, minimizing the information in the weights, which is a function of the training set consisting of data seen in the past, guarantees minimality, invariance, and independence of the components of the representation of the test (future) data. The trace of the information in the weights during training shows that, rather than increasing monotonically through the learning process as one would expect, it first increases rapidly in the first few epochs and then decreases to an asymptote that is a fraction of its peak. This unusual behavior qualitatively follows the sensitivity to critical learning periods observed in biological systems, from cats to humans, as well as recently in deep artificial networks. Together with the Emergence Bound and the PAC-Bayes bound, this shows that forgetting is a necessary part of the learning process, and happens while precision increases monotonically. The information in the weights can also be used to define a topology in the space of tasks, and an asymmetric distance that can be used to predict the cost and performance of fine-tuning a network trained for a different task, without actually performing the experiment. These phenomena collectively point to the importance of the dynamics of learning, and suggest that studying the transient behavior can yield insights beyond those that can be gleaned from the asymptotics. Depending on the context, we use Shannon, Fisher, or Kolmogorov information to prove the results described.
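As a toy illustration only of the idea of limiting the information stored in the weights by injecting noise (not the Information Dropout construction itself; the data, model and hyperparameters below are made up), the following sketch perturbs the weights multiplicatively with Gaussian noise at every SGD step of a logistic-regression model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustration only).
n, d = 512, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr, noise_std, epochs = 0.1, 0.3, 200

for _ in range(epochs):
    # Multiplicative Gaussian noise on the weights: a crude way of limiting
    # how much information about the training set the weights can store.
    w_noisy = w * (1.0 + noise_std * rng.normal(size=d))
    p = sigmoid(X @ w_noisy)
    grad = X.T @ (p - y) / n
    w -= lr * grad

print("training accuracy:", ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean())
```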
SundayAug 04, 201913:15
Vision and AIRoom 1
Speaker:Rene VidalTitle:On the Implicit Bias of DropoutAbstract:opens in new windowin html    pdfopens in new window

Dropout is a simple yet effective regularization technique that has been applied to various machine learning tasks, including linear classification, matrix factorization and deep learning. However, the theoretical properties of dropout as a regularizer remain quite elusive. This talk will present a theoretical analysis of dropout for single hidden-layer linear neural networks. We demonstrate that dropout is a stochastic gradient descent method for minimizing a certain regularized loss. We show that the regularizer induces solutions that are low-rank, in the sense of minimizing the number of neurons. We also show that the global optimum is balanced, in the sense that the product of the norms of the incoming and outgoing weight vectors is the same for all hidden nodes. Finally, we provide a complete characterization of the optimization landscape induced by dropout.
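To make the statement "dropout is a stochastic gradient method for a regularized loss" concrete, here is a small numerical sanity check of my own, in the closely related matrix-factorization setting rather than the single hidden-layer networks analyzed in the talk: the expected dropout objective equals the plain loss plus a product-of-norms regularizer over the hidden factors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Matrix-factorization toy problem: approximate X by U @ V.T, with dropout
# applied to the r "hidden" factors (illustration, not the talk's exact setting).
m, n, r, theta = 8, 6, 4, 0.7
X = rng.normal(size=(m, n))
U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))

def dropout_loss(mask):
    # Rescaled ("inverted") dropout on the r factors.
    return np.linalg.norm(X - (U * mask / theta) @ V.T) ** 2

# Monte-Carlo estimate of the expected dropout loss.
samples = [dropout_loss(rng.binomial(1, theta, size=r)) for _ in range(100000)]
mc = np.mean(samples)

# Closed form: plain loss + ((1-theta)/theta) * sum_i ||u_i||^2 * ||v_i||^2.
reg = ((1 - theta) / theta) * np.sum(np.sum(U**2, axis=0) * np.sum(V**2, axis=0))
closed = np.linalg.norm(X - U @ V.T) ** 2 + reg

print(mc, closed)  # the two numbers agree up to Monte-Carlo error
```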

ThursdayJul 25, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Jonathan Hermon Title:Anchored expansion in supercritical percolation on nonamenable graphs.Abstract:opens in new windowin html    pdfopens in new window

Let G be a transitive nonamenable graph, and consider supercritical Bernoulli bond percolation on G. We prove that the probability that the origin lies in a finite cluster of size n decays exponentially in n. We deduce that:

  1. Every infinite cluster has anchored expansion almost surely (one standard formulation of anchored expansion is recalled after this list). This answers positively a question of Benjamini, Lyons, and Schramm (1997).
  2. Various observables, including the percolation probability and the truncated susceptibility are analytic functions of p throughout the entire supercritical phase.
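
One standard formulation of anchored expansion, recalled here for convenience (the precise setting in the talk may differ in details): an infinite graph $H$ with a fixed base vertex $o$ has anchored expansion if

$$
\liminf_{\substack{S\ni o,\ S\ \text{finite and connected}\\ |S|\to\infty}}\ \frac{|\partial_E S|}{|S|}\;>\;0,
$$

where $\partial_E S$ denotes the set of edges with exactly one endpoint in $S$.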

Joint work with Tom Hutchcroft.

TuesdayJul 16, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Jacob Greenstein Title:Cacti and spectra of monomial braidingsAbstract:opens in new windowin html    pdfopens in new window
ThursdayJul 11, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Shamgar GurevitchTitle:Harmonic Analysis on GL_n over finite fieldsAbstract:opens in new windowin html    pdfopens in new window

There are many formulas that express interesting properties of a finite group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio

$$
\frac{\operatorname{trace}(\rho(g))}{\dim(\rho)},
$$

for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of the mentioned type for analyzing certain random walks on G.
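As a concrete toy example (standard material, not the new results of the talk), here is the character table of $S_3$ together with the classical Diaconis-Shahshahani-type upper bound for the random-transposition walk on $S_3$, in which the character ratios enter raised to the power of the number of steps:

```python
import numpy as np

# Character table of S_3: rows = irreducibles (trivial, sign, 2-dim standard),
# columns = conjugacy classes (identity, transpositions, 3-cycles).
chars = np.array([
    [1,  1,  1],   # trivial
    [1, -1,  1],   # sign
    [2,  0, -1],   # standard
])
dims = chars[:, 0]

# Character ratios at a transposition (column index 1).
ratios = chars[:, 1] / dims
print("character ratios at a transposition:", ratios)

# Classical upper bound lemma for the walk driven by the uniform measure on
# the transposition class (used here only to show how character ratios enter):
#   4 * ||P^k - U||_TV^2 <= sum_{rho != trivial} dim(rho)^2 * ratio(rho)^(2k)
# Note: the sign character keeps this bound at 1 for every k, reflecting the
# parity obstruction (the walk alternates between even and odd permutations).
for k in range(1, 6):
    bound = sum(d**2 * r**(2 * k) for d, r in zip(dims[1:], ratios[1:]))
    print(f"k={k}: upper bound on 4*TV^2 = {bound:.4f}")
```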

Recently, we discovered that for classical groups G over finite fields there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for GLn over finite fields, and explain how one can apply the results to verify mixing time and rate for certain random walks.

The talk will assume basic notions of linear algebra in Hilbert spaces, and the definition of a group.

This is joint work with Roger Howe (Yale and Texas A&M).

ThursdayJul 11, 201912:15
Vision and AIRoom 1
Speaker:Anat Levin Title:A Monte Carlo Framework for Rendering Speckle Statistics in Scattering MediaAbstract:opens in new windowin html    pdfopens in new window

We present a Monte Carlo rendering framework for the physically-accurate simulation of speckle patterns arising from volumetric scattering of coherent waves. These noise-like patterns are characterized by strong statistical properties, such as the so-called memory effect. These properties are at the core of imaging techniques for applications as diverse as tissue imaging, motion tracking, and non-line-of-sight imaging. Our rendering framework can replicate these properties computationally, in a way that is orders of magnitude more efficient than alternatives based on directly solving the wave equations. At the core of our framework is a path-space formulation for the covariance of speckle patterns arising from a scattering volume, which we derive from first principles. We use this formulation to develop two Monte Carlo rendering algorithms, for computing speckle covariance as well as speckle fields directly. While approaches based on wave equation solvers require knowing the microscopic position of wavelength-sized scatterers, our approach takes as input only bulk parameters describing the statistical distribution of these scatterers inside a volume. We validate the accuracy of our framework by comparing against speckle patterns simulated using wave equation solvers, use it to simulate memory effect observations that were previously only possible through lab measurements, and demonstrate its applicability for computational imaging tasks.

Joint work with Chen Bar, Marina Alterman, Ioannis Gkioulekas 
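This is not the path-space framework of the talk, but as a minimal classical illustration of where the noise-like statistics come from, the sketch below generates fully developed speckle by assigning independent random phases to a band-limited field and checks that the intensity contrast is close to 1 (approximately exponential intensity statistics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical toy model of fully developed speckle (not the talk's method):
# a band-limited field with independent uniform random phases.
N = 256
phases = np.exp(2j * np.pi * rng.random((N, N)))

# Keep only low spatial frequencies to mimic a finite aperture.
fx = np.fft.fftfreq(N)
mask = (np.abs(fx)[:, None] < 0.1) & (np.abs(fx)[None, :] < 0.1)
field = np.fft.ifft2(phases * mask)

intensity = np.abs(field) ** 2
intensity /= intensity.mean()

# For fully developed speckle the intensity is ~ Exponential(1), so the
# standard deviation is close to the mean (contrast ~ 1).
print("speckle contrast (std/mean):", intensity.std() / intensity.mean())
```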

TuesdayJul 09, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Thanasin Nampaisarn Title:Translation Functors of Categories O for Root-Reductive Lie AlgebrasAbstract:opens in new windowin html    pdfopens in new window

Root-reductive Lie algebras form a special type of reasonably well behaved infinite-dimensional Lie algebras. In this talk, we shall define a version of Bernstein-Gelfand-Gelfand categories O for root-reductive Lie algebras, which we called extended categories O and briefly discuss some properties of these categories. Let g be a rootreductive Lie algebra containing a splitting Borel subalgebra b satisfying a special additional condition called the Dynkin condition. The extended category O corresponding to g and b is denoted by O-bar.
The category O-bar can be decomposed analogously to the finite-dimensional cases into blocks. The main object of this talk is to give a construction of translation functors of O-bar. Then we shall see that some objects such as tilting modules arise by applying the translation functors to Verma modules just as in the finite-dimensional cases. Furthermore, the translation functors establish equivalences between some blocks of the category O-bar.

MondayJul 08, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Shikha Singh Title:The Mechanism Design Approach to Interactive Proofs Abstract:opens in new windowin html    pdfopens in new window

The model of interactive proofs, introduced nearly two and a half decades ago, is now increasingly being used to design computation-outsourcing protocols. In an interactive proof, an honest party interacts with powerful but strategic provers, to elicit from them the correct answer to a computational question. Classical interactive proofs assume that the provers are adversarial (i.e., they want to mislead the verifier) and cooperative (work together as a team).

In this talk, I will present my work on a new payment-based interactive-proof system, called rational proofs. In rational proofs, the provers are not adversarial but rational, that is, they want to maximize the payment received from the verifier. Using principles from mechanism design, I will show how these payments can be used to leverage correctness from multiple provers who are either cooperative or non-cooperative in nature. I will also present how the guarantees of rational proofs are related to the soundness and completeness guarantees of classical interactive proofs.
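As a minimal sketch of the payment idea (my illustration of a standard ingredient, a proper scoring rule, rather than the specific rational-proof protocols of the talk): if the verifier pays according to the Brier score of a probability the prover reports for an event the verifier can later sample, then reporting the true probability maximizes the prover's expected payment.

```python
import numpy as np

def brier_payment(reported_p, outcome):
    # Brier-score based payment (higher is better): 1 - (outcome - reported_p)^2.
    return 1.0 - (outcome - reported_p) ** 2

def expected_payment(reported_p, true_p):
    # Expectation over the verifier's later observation of the event.
    return true_p * brier_payment(reported_p, 1) + (1 - true_p) * brier_payment(reported_p, 0)

true_p = 0.3
grid = np.linspace(0, 1, 101)
payments = [expected_payment(q, true_p) for q in grid]
best = grid[int(np.argmax(payments))]
print("report maximizing expected payment:", best)  # equals the true probability
```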

Bio: Shikha Singh is currently an Assistant Professor of Computer Science at Wellesley College and will be joining Williams College as an Assistant Professor in Fall 2019. She obtained her PhD in Computer Science from Stony Brook University and her Integrated MSc. in Mathematics and Computing from IIT Kharagpur. Her broad research interests include algorithmic game theory, algorithms and data structures for big data, and complexity theory.

ThursdayJul 04, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Eviatar Procaccia Title:Stationary Hastings-Levitov modelAbstract:opens in new windowin html    pdfopens in new window

We construct and study a stationary version of the Hastings-Levitov(0) model. We prove that unlike the classical model, in the stationary case particle sizes are constant in expectation, so this model can be seen as a tractable off-lattice Diffusion Limited Aggregation (DLA). The stationary setting, together with a geometric interpretation of the harmonic measure, yields new geometric results such as stabilization, finiteness of arms and unbounded width in mean of arms. Moreover, we present an exact conjecture for the fractal dimension.

ThursdayJul 04, 201912:15
Vision and AIRoom 1
Speaker:Nadav Dym Title:Linearly Converging Quasi Branch and Bound Algorithms for Global Rigid RegistrationAbstract:opens in new windowin html    pdfopens in new window

WednesdayJul 03, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Alon CohenTitle:Learning Linear-Quadratic Regulators Efficiently with only $\sqrt{T}$ RegretAbstract:opens in new windowin html    pdfopens in new window

The linear-quadratic regulator is a simple and ubiquitous model in optimal control. Classical results in control theory pertain to asymptotic convergence and stability of such systems. Recently, it has received renewed interest from a learning-theoretic perspective with a focus on computational tractability and finite-time convergence guarantees. Among the most challenging problems in LQ control is that of adaptive control: regulating a system with parameters that are initially unknown and have to be learned while incurring the associated costs. In its modern incarnation, this problem is approached through the lens of online learning with partial feedback (e.g., multi-armed bandits) and regret minimization. Still, recent results derive algorithms that are either computationally intractable or suffer from high regret. In this talk, I will present the first computationally-efficient algorithm with $\widetilde O(\sqrt{T})$ regret for learning in linear quadratic control systems with unknown dynamics. This resolves an open question of Abbasi-Yadkori and Szepesvari (2011) and Dean, Mania, Matni, Recht, and Tu (2018). The key to the efficiency of our algorithm is in a novel reformulation of the LQ control problem as a convex semi-definite program. This is joint work with Tomer Koren and Yishay Mansour presented at ICML 2019.
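For background only (the textbook LQR solution with known dynamics, not the learning algorithm of the talk), the optimal controller is obtained from the discrete algebraic Riccati equation; a minimal sketch with scipy and made-up system matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Made-up 2-state, 1-input linear system x_{t+1} = A x_t + B u_t (illustration only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost

# Solve the discrete algebraic Riccati equation and form the optimal gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop simulation with u_t = -K x_t.
x = np.array([1.0, 0.0])
cost = 0.0
for _ in range(200):
    u = -K @ x
    cost += x @ Q @ x + u @ R @ u
    x = A @ x + B @ u

print("optimal gain K:", K)
print("closed-loop cost from x0=[1,0]:", cost)
```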

ThursdayJun 27, 201912:15
Vision and AIRoom 1
Speaker:Ehud Barnea Title:Exploring the Bounds of the Utility of Context for Object DetectionAbstract:opens in new windowin html    pdfopens in new window

The recurring context in which objects appear holds valuable information that can be employed to predict their existence. This intuitive observation indeed led many researchers to endow appearance-based detectors with explicit reasoning about context. The underlying thesis suggests that stronger contextual relations would facilitate greater improvements in detection capacity. In practice, however, the observed improvement in many cases is modest at best, and often only marginal. In this work we seek to improve our understanding of this phenomenon, in part by pursuing an opposite approach. Instead of attempting to improve detection scores by employing context, we treat the utility of context as an optimization problem: to what extent can detection scores be improved by considering context or any other kind of additional information? With this approach we explore the bounds on improvement by using contextual relations between objects and provide a tool for identifying the most helpful ones. We show that simple co-occurrence relations can often provide large gains, while in other cases a significant improvement is simply impossible or impractical with either co-occurrence or more precise spatial relations. To better understand these results we then analyze the ability of context to handle different types of false detections, revealing that tested contextual information cannot ameliorate localization errors, severely limiting its gains. These and additional insights further our understanding of where and why utilization of context for object detection succeeds and fails.

ThursdayJun 20, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Mark Rudelson (UMich) & Serguei Popov (IMECC)Title:Double seminar Abstract:opens in new windowin html    pdfopens in new window

Speaker 1: Mark Rudelson (UMich)

Title: Circular law for sparse random matrices.

Abstract: Consider a sequence of $n$ by $n$ random matrices $A_n$ whose entries are independent identically distributed random variables. The circular law asserts that the distribution of the eigenvalues of properly normalized matrices $A_n$ converges to the uniform measure on the unit disc as $n$ tends to infinity. We prove this law for sparse random matrices under the optimal sparsity assumption. Joint work with Konstantin Tikhomirov.
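A quick numerical illustration of the circular law in the dense iid case (a sanity check of mine; the theorem above concerns the much harder sparse regime): the eigenvalues of $A_n/\sqrt{n}$ fill the unit disc roughly uniformly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Dense iid case of the circular law (the talk concerns the sparse case).
n = 1000
A = rng.choice([-1.0, 1.0], size=(n, n))   # iid Rademacher entries
eigs = np.linalg.eigvals(A / np.sqrt(n))

# Empirically, almost all eigenvalues lie in the unit disc ...
print("fraction inside unit disc:", np.mean(np.abs(eigs) <= 1.0 + 1e-6))
# ... and the fraction inside radius r grows like r^2 (uniform measure on the disc).
for r in (0.25, 0.5, 0.75):
    print(f"fraction with |lambda| <= {r}: {np.mean(np.abs(eigs) <= r):.3f}  (r^2 = {r*r:.3f})")
```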

Speaker 2: Serguei Popov (IMECC)

Title: On the range of a two-dimensional conditioned random walk

Abstract: We consider the two-dimensional simple random walk conditioned on never hitting the origin. This process is a Markov chain, namely, it is the Doob $h$-transform of the simple random walk with respect to the potential kernel. It is known to be transient and we show that it is "almost recurrent" in the sense that each infinite set is visited infinitely often, almost surely. We prove that, for a "typical large set", the proportion of its sites visited by the conditioned walk is approximately a Uniform$[0,1]$ random variable. Also, given a set $G\subset\mathbb{R}^2$ that does not "surround" the origin, we prove that a.s. there is an infinite number of $k$'s such that $kG\cap \mathbb{Z}^2$ is unvisited. These results suggest that the range of the conditioned walk has "fractal" behavior. This is a joint work with Nina Gantert and Marina Vachkovskaia, see arxiv.org/abs/1804.00291. Also, there is much more about conditioned walks in my new book (www.ime.unicamp.br/~popov/2srw.pdf, work in progress). Comments and suggestions on the latter are very welcome!

WednesdayJun 19, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Daniel SoudryTitle:Theoretical and Empirical Investigation of Several Common Practices in Deep LearningAbstract:opens in new windowin html    pdfopens in new window

We examine several empirical and theoretical results on the training of deep networks. For example: Why are common "over-fitting" indicators (e.g., very low training error, high validation loss) misleading? Why does the early-stopping time sometimes never arrive? Why can adaptive rate methods (e.g., Adam) degrade generalization? Why do some commonly used loss functions exhibit better generalization than others? Why use weight decay before batch-norm? When can we use low numerical precision, and how low can we get? We will also discuss the practical implications of these results.

Bio: Since October 2017, Daniel Soudry has been an assistant professor (Taub Fellow) in the Department of Electrical Engineering at the Technion, working in the areas of machine learning and theoretical neuroscience. Before that, he did his post-doc (as a Gruss Lipper fellow) with Prof. Liam Paninski in the Department of Statistics, the Center for Theoretical Neuroscience, and the Grossman Center for Statistics of the Mind at Columbia University. He did his Ph.D. in the Department of Electrical Engineering at the Technion, Israel Institute of Technology, under the guidance of Prof. Ron Meir. He received his B.Sc. degree in Electrical Engineering and Physics from the Technion.

TuesdayJun 18, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Michał Zydor Title:Periods of automorphic forms over reductive groups Abstract:opens in new windowin html    pdfopens in new window

Periods of automorphic forms have an important place in the theory of automorphic representations. They are often related to (special values of) L-functions and have applications to arithmetic geometry and analytic number theory. For an automorphic form on a group G, a period is its integral over a subgroup of G. If the automorphic form is not cuspidal such integrals are usually divergent. It is nonetheless possible in many cases to extend the definition of the period to almost all automorphic forms which has direct applications to the study of the given period. In this talk I will describe a general procedure of defining such periods in the case when the subgroup is reductive.
I will also discuss the joint work with A. Pollack and C. Wan that applies this to the study of certain periods and their relations to special values of L-functions confirming predictions of Sakellaridis and Venkatesh.

MondayJun 17, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Dana Moshkovitz Title:Nearly Optimal Derandomization Under AssumptionsAbstract:opens in new windowin html    pdfopens in new window

Existing proofs that deduce BPP=P from circuit lower bounds convert randomized algorithms to deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms to deterministic algorithms with nearly minimal slowdown. Specifically, assuming exponential lower bounds against non-deterministic circuits we convert randomized algorithms that err rarely to deterministic algorithms with a similar run-time, and general randomized algorithms to deterministic algorithms whose run-time is slower by only a nearly linear factor.
Our results follow from a new connection between pseudo-entropy generators and locally list recoverable codes.

This is joint work with Dean Doron, Justin Oh and David Zuckerman

ThursdayJun 13, 201912:15
Vision and AIRoom 1
Speaker:Eitan Richardson Title:On GANs and GMMsAbstract:opens in new windowin html    pdfopens in new window

A longstanding problem in machine learning is to find unsupervised methods that can learn the statistical structure of high dimensional signals. In recent years, GANs have gained much attention as a possible solution to the problem, and in particular have shown the ability to generate remarkably realistic high resolution sampled images. At the same time, many authors have pointed out that GANs may fail to model the full distribution ("mode collapse") and that using the learned models for anything other than generating samples may be very difficult. In this paper, we examine the utility of GANs in learning statistical models of images by comparing them to perhaps the simplest statistical model, the Gaussian Mixture Model. First, we present a simple method to evaluate generative models based on relative proportions of samples that fall into predetermined bins. Unlike previous automatic methods for evaluating models, our method does not rely on an additional neural network nor does it require approximating intractable computations. Second, we compare the performance of GANs to GMMs trained on the same datasets. While GMMs have previously been shown to be successful in modeling small patches of images, we show how to train them on full sized images despite the high dimensionality. Our results show that GMMs can generate realistic samples (although less sharp than those of GANs) but also capture the full distribution, which GANs fail to do. Furthermore, GMMs allow efficient inference and explicit representation of the underlying statistical structure. Finally, we discuss how GMMs can be used to generate sharp images.
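A minimal sketch of the GMM side of the comparison, using generic scikit-learn calls on synthetic stand-in data (the datasets, model sizes and full-image training scheme of the talk are not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Stand-in for flattened image patches: a synthetic mixture in 64 dimensions.
d, n = 64, 5000
centers = rng.normal(scale=3.0, size=(3, d))
data = np.vstack([c + rng.normal(size=(n // 3, d)) for c in centers])

# Fit a Gaussian Mixture Model and draw new samples from it.
gmm = GaussianMixture(n_components=10, covariance_type='full', random_state=0)
gmm.fit(data)
samples, _ = gmm.sample(16)

# Crude bin-proportion check in the spirit of the evaluation described above:
# compare how real data and generated samples distribute over fixed bins
# (here, the mixture's own components).
real_bins = np.bincount(gmm.predict(data), minlength=10) / len(data)
fake_bins = np.bincount(gmm.predict(gmm.sample(5000)[0]), minlength=10) / 5000
print("max bin-proportion gap:", np.abs(real_bins - fake_bins).max())
```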

TuesdayJun 11, 201916:15
Seminar in Geometry and TopologyRoom 1
Speaker:Gal Binyamini Title:Point counting for foliations over number fieldsAbstract:opens in new windowin html    pdfopens in new window

We consider an algebraic variety $V$ and its foliation, both defined over a number field. Given a (compact piece of a) leaf $L$ of the foliation, and a subvariety $W$ of complementary codimension, we give an upper bound for the number of intersections between $L$ and $W$. The bound depends polynomially on the degree of $W$, the logarithmic height of $W$, and the logarithmic distance between $L$ and the locus of points where leaves of the foliation intersect $W$ improperly.
Using this theory we prove the Wilkie conjecture for sets defined using leaves of foliations, under a certain assumption about the algebraicity locus. For example, we prove that if none of the leaves contain algebraic curves then the number of algebraic points of degree $d$ and log-height $h$ on a (compact piece of a) leaf grows polynomially with $d$ and $h$. This statement and its generalizations have many applications in diophantine geometry following the Pila-Zannier strategy.

I will focus mostly on the proof of the main statement, which uses a combination of differential-algebraic methods related to foliations with some ideas from complex geometry and value distribution theory. If time permits I will briefly discuss the applications to counting algebraic points and diophantine geometry at the end.

MondayJun 10, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Yuval Filmus Title:Twenty (simple) questionsAbstract:opens in new windowin html    pdfopens in new window

20 questions is one of the simplest examples of a combinatorial search game:
Alice thinks of an English word, and Bob's job is to figure it out by asking Yes/No questions. The worst-case complexity of the game is clearly log n, so to spice things up, we assume that Alice chooses her input according to some distribution known to Bob, and Bob now aims to minimize the expected number of questions.
An optimal solution to this problem was found already in 1952. However, the optimal strategy could ask arbitrarily complex Yes/No questions. We ask what happens when we constrain Bob to asking only "simple" questions, and what happens if Alice is allowed to lie a few times.
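For concreteness: when arbitrary Yes/No questions are allowed, the 1952 optimum is a Huffman code for Alice's distribution, and the expected number of questions is the expected codeword length. A minimal sketch with made-up probabilities:

```python
import heapq

def expected_questions(probs):
    """Expected number of Yes/No questions under an optimal (Huffman) strategy."""
    # Each heap item is (probability, tie-breaker, expected-questions contributed so far).
    heap = [(p, i, 0.0) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees adds one more question for all their words,
        # i.e. adds p1 + p2 to the expected number of questions.
        heapq.heappush(heap, (p1 + p2, counter, c1 + c2 + p1 + p2))
        counter += 1
    return heap[0][2]

# Made-up distribution over Alice's words (illustration only).
probs = [0.4, 0.2, 0.2, 0.1, 0.05, 0.05]
print("optimal expected number of questions:", expected_questions(probs))
```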

Joint work with Yuval Dagan (MIT), Ariel Gabizon, Daniel Kane (UCSD) and Shay Moran (IAS).

 

ThursdayJun 06, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Yotam Smilansky Title:Statistics of colored Kakutani sequences of partitionsAbstract:opens in new windowin html    pdfopens in new window

We consider statistical questions concerning colored sequences of partitions, produced by applying a partition process which was first introduced by Kakutani for the 1-dimensional case. This process can be generalized as the application of a fixed multiscale substitution rule, defined on a finite set of colored sets in R^d, on elements of maximal measure in each partition. Colored sets appearing in the sequence are modeled by certain flows on an associated directed weighted graph, and natural statistical questions can be reformulated as questions on the distribution of paths on graphs. Under some incommensurability assumptions, we show that special properties of Laplace transforms of the relevant counting functions imply explicit statistical results.
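A minimal sketch of the original one-dimensional Kakutani process (the classical uncolored case only, as a warm-up for the multiscale substitution rules above): repeatedly split every interval of maximal length in fixed proportions and check that the partition points equidistribute.

```python
import numpy as np

def kakutani_points(alpha=0.3, steps=12):
    """Classical alpha-Kakutani splitting of [0,1] (uncolored 1-d case)."""
    intervals = [(0.0, 1.0)]
    for _ in range(steps):
        lengths = [b - a for a, b in intervals]
        m = max(lengths)
        new_intervals = []
        for (a, b), length in zip(intervals, lengths):
            # Split every interval of maximal length at proportion alpha.
            if np.isclose(length, m):
                new_intervals += [(a, a + alpha * length), (a + alpha * length, b)]
            else:
                new_intervals.append((a, b))
        intervals = new_intervals
    # Interior partition points are the left endpoints of all but the first interval.
    return np.array(sorted(a for a, _ in intervals[1:]))

pts = kakutani_points()
# Kakutani's theorem: the partition points become uniformly distributed in [0,1].
# Crude check: compare the empirical CDF with the uniform CDF at the points themselves.
discrepancy = np.max(np.abs(np.arange(1, len(pts) + 1) / len(pts) - pts))
print(len(pts), "points, sup-distance of empirical CDF from uniform:", discrepancy)
```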

MondayJun 03, 201914:30
Foundations of Computer Science ColloquiumRoom 1
Speaker:Ido Shahaf Title:Tight Tradeoffs in Searchable Symmetric EncryptionAbstract:opens in new windowin html    pdfopens in new window

A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). In this talk, we survey recent constructions and matching lower bounds, and discuss their underlying techniques.

Based on joint works with Gilad Asharov, Moni Naor, and Gil Segev.
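To fix ideas, here is a toy and deliberately insecure sketch of the basic SSE data flow (my own illustration, not any of the constructions surveyed in the talk): the client derives per-keyword tokens with a PRF and stores a masked inverted index; locality and read efficiency are then determined by how these lists are laid out in the server's memory.

```python
import hashlib
import hmac
import os

# Toy SSE sketch: NOT secure, for illustrating the data flow only.
KEY = os.urandom(32)

def prf(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def build_index(documents):
    """Client side: inverted index mapping keyword tokens to masked document ids."""
    index = {}
    for doc_id, text in documents.items():
        for word in set(text.split()):
            token = prf(KEY, f"search|{word}")
            position = len(index.get(token, []))
            pad = prf(KEY, f"mask|{word}|{position}")
            index.setdefault(token, []).append(xor_bytes(doc_id.encode().ljust(32, b"\x00"), pad))
    return index

def search(index, word):
    """Client sends the token; the server returns the list; the client unmasks locally."""
    token = prf(KEY, f"search|{word}")
    results = []
    for i, masked in enumerate(index.get(token, [])):
        pad = prf(KEY, f"mask|{word}|{i}")
        results.append(xor_bytes(masked, pad).rstrip(b"\x00").decode())
    return results

docs = {"doc1": "searchable symmetric encryption", "doc2": "symmetric key crypto"}
idx = build_index(docs)
print(search(idx, "symmetric"))   # ['doc1', 'doc2']
```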

ThursdayMay 30, 201912:15
Vision and AIRoom 1
Speaker:Yael Moses Title:On the Role of Geometry in Geo-LocalizationAbstract:opens in new windowin html    pdfopens in new window

We consider the geo-localization task - finding the pose (position & orientation) of a camera in a large 3D scene from a single image. We aim to experimentally explore the role of geometry in geo-localization in a convolutional neural network (CNN) solution. We do so by ignoring the often available texture of the scene. We therefore deliberately avoid using texture or rich geometric details and use images projected from a simple 3D model of a city, which we term lean images. Lean images contain solely information that relates to the geometry of the area viewed (edges, faces, or relative depth). We find that the network is capable of estimating the camera pose from the lean images, and it does so not by memorization but by some measure of geometric learning of the geographical area. The main contributions of this work are: (i) providing insight into the role of geometry in the CNN learning process; and (ii) demonstrating the power of CNNs for recovering camera pose using lean images.
 
This is a joint work with Moti Kadosh  & Ariel Shamir
 

TuesdayMay 28, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Maria GorelikTitle:Virasoro Lie algebra and its relatives in the super-worldAbstract:opens in new windowin html    pdfopens in new window

This will be a very introductory talk about Virasoro Lie algebra and its super-analogues: Ramond and Neveu-Schwarz Lie superalgebras.
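For reference (standard definitions, in one common normalization): the Virasoro algebra is spanned by elements $L_n$, $n\in\mathbb{Z}$, and a central element $c$, with

$$
[L_m,L_n]=(m-n)L_{m+n}+\frac{c}{12}\,(m^3-m)\,\delta_{m+n,0}.
$$

Its N=1 super-analogues add odd generators $G_r$, indexed by $r\in\mathbb{Z}$ in the Ramond case and $r\in\mathbb{Z}+\tfrac12$ in the Neveu-Schwarz case, satisfying (up to normalization conventions)

$$
[L_m,G_r]=\Big(\frac{m}{2}-r\Big)G_{m+r},\qquad
\{G_r,G_s\}=2L_{r+s}+\frac{c}{3}\Big(r^2-\frac14\Big)\delta_{r+s,0}.
$$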

MondayMay 27, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Rotem Oshman Title:Distributed Property Testing -- Progress and ChallengesAbstract:opens in new windowin html    pdfopens in new window

In this talk I will describe some recent work on distributed property testing in networks with bounded bandwidth (the CONGEST model): we have a network of computing nodes communicating over some initially-unknown network graph, where every communication link can carry a bounded number of bits per round. Some simple-looking problems, such as checking if the network contains a 4-cycle, are known to be very hard in this model, and this motivates us to consider property testing instead of exact solutions.

I will describe distributed property testing algorithms for two problems: subgraph-freeness, where we wish to determine whether the network graph contains some fixed constant-sized subgraph H; and uniformity testing, where every node of the network draws samples from an unknown distribution, and our goal is to determine whether the distribution is uniform or far from uniform. I will also discuss lower bounds.

ThursdayMay 23, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Gregory Berkolaiko Title:Nodal statistics of graph eigenfunctionsAbstract:opens in new windowin html    pdfopens in new window

Understanding statistical properties of zeros of Laplacian eigenfunctions is a program which is attracting much attention from mathematicians and physicists. We will discuss this program in the setting of "quantum graphs", self-adjoint differential operators acting on functions living on a metric graph. Numerical studies of quantum graphs motivated a conjecture that the distribution of nodal surplus (a suitably rescaled number of zeros of the n-th eigenfunction) has a universal form: it approaches Gaussian as the number of cycles grows. The first step towards proving this conjecture is a result established for graphs which are composed of cycles separated by bridges. For such graphs we use the nodal-magnetic theorem of the speaker, Colin de Verdiere and Weyand to prove that the distribution of the nodal surplus is binomial with parameters p=1/2 and n equal to the number of cycles. Based on joint work with Lior Alon and Ram Band.

WednesdayMay 22, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Meir FederTitle:Universal Learning for Individual DataAbstract:opens in new windowin html    pdfopens in new window

Universal learning is considered from an information theoretic point of view, following the universal prediction approach pursued in the 90's by Feder and Merhav. Interestingly, the extension to learning is not straightforward. In previous works we considered on-line learning and supervised learning in a stochastic setting. Yet, the most challenging case is batch learning, where prediction is done on a test sample once the entire training data is observed, in the individual setting where the features and labels, both training and test, are specific individual quantities. This work provides schemes that, for any individual data, compete with a "genie" (or a reference) that knows the true test label. It suggests design criteria and derives the corresponding universal learning schemes. The main proposed scheme is termed Predictive Normalized Maximum Likelihood (pNML). As demonstrated, pNML learning and its variations provide robust and "stable" learning solutions that outperform the current leading approach based on Empirical Risk Minimization (ERM). Furthermore, the pNML construction provides a pointwise indication of learnability, measuring the uncertainty in learning the specific test challenge with the given training examples; hence the learner knows when it does not know. The improved performance of the pNML, the induced learnability measure and its utilization are demonstrated in several learning problems, including deep neural network models. Joint work with Yaniv Fogel and Koby Bibas.
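For orientation, one common way to write the pNML prediction (notation mine, stated informally as in the universal-prediction literature rather than necessarily in the exact form used in the talk): given a model class $\{p_\theta\}$, a training set $z^N$ and a test feature $x$,

$$
q_{\mathrm{pNML}}(y\mid x)=\frac{p_{\hat\theta(z^N,(x,y))}(y\mid x)}{\sum_{y'}p_{\hat\theta(z^N,(x,y'))}(y'\mid x)},
$$

where $\hat\theta(\cdot)$ denotes the maximum-likelihood (ERM) fit that includes the hypothesized test pair; the logarithm of the denominator acts as the pointwise learnability measure mentioned above.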

MondayMay 20, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Guy Rothblum Title:Gentle Measurement of Quantum States and Differential PrivacyAbstract:opens in new windowin html    pdfopens in new window

We prove a new connection between gentle measurement (where one wants to measure n quantum states, in a way that damages the states only by a little) and differential privacy (where one wants to query a database about n users, in a way that reveals only a little about any individual user). The connection is bidirectional, though with loss of parameters in going from DP to gentle measurement. Exploiting this connection, we present a new algorithm for approximating the outcomes of many measurements on a collection of quantum states, a task called "shadow tomography". The new algorithm has the advantages of being gentle and online (the measurements can be chosen adaptively).

Joint work with Scott Aaronson.

No prior knowledge about quantum mechanics or computing will be assumed.

ThursdayMay 16, 201912:15
Vision and AIRoom 155
Speaker:Gilad Cohen Title:The Connection between DNNs and Classic Classifiers: Generalize, Memorize, or Both?Abstract:opens in new windowin html    pdfopens in new window

This work studies the relationship between the classification performed by deep neural networks (DNNs) and the decision of various classic classifiers, namely k-nearest neighbors (k-NN), support vector machines (SVM), and logistic regression (LR). This is studied at various layers of the network, providing us with new insights on the ability of DNNs to both memorize the training data and generalize to new data at the same time, where k-NN serves as the ideal estimator that perfectly memorizes the data.
First, we show that DNNs' generalization improves gradually along their layers and that memorization of non-generalizing networks happens only at the last layers.
We also observe that the behavior of DNNs compared to the linear classifiers SVM and LR is much the same on the training and test data regardless of whether the network generalizes. On the other hand, the similarity to k-NN holds only in the absence of overfitting. This suggests that the k-NN behavior of the network on new data is a good sign of generalization. Moreover, this allows us to use existing k-NN theory for DNNs.
 

WednesdayMay 15, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Gal ElidanTitle:Learning Rules-first ClassifiersAbstract:opens in new windowin html    pdfopens in new window

Complex classifiers may exhibit "embarrassing" failures even in "easy" cases where humans can provide a simple justified explanation. Avoiding such failures is obviously of key importance. In this work, we focus on one such setting, where a label is perfectly predictable if the input contains certain features, or rules, and otherwise it is predictable by a linear classifier. We define a hypothesis class that captures this notion and determine its sample complexity. We also give evidence that efficient algorithms cannot achieve this sample complexity. We then derive a simple and efficient algorithm and show that its sample complexity is close to optimal, among efficient algorithms. Experiments on synthetic and sentiment analysis data demonstrate the efficacy of the method, both in terms of accuracy and interpretability. At the end of the talk, I will also give a teaser to demonstrate the non-intuitive nature of the more general problem of embarrassment.

MondayMay 13, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Nati Linial Title:On the large-scale geometry of graphsAbstract:opens in new windowin html    pdfopens in new window

A graph is automatically also a metric space, but is there anything interesting to say about such metric spaces? Many fascinating and concrete questions are encapsulated in the more general (and vague) question "to what extent can a finite graph emulate the properties of an infinite regular tree?". We will see how this leads us to investigate expansion in graphs and questions about the large scale aspects of graph metrics including girth and diameter and the relations between the two. If time permits (probably not) I may also say a little about the local geometry of graphs.

This talk is based on many collaborations which I have had over the years, among my more recent collaborators are Michael Chapman, Yuval Peled and Yonatan Bilu. The talk requires no particular previous background and should be accessible to general mathematical audience.

TuesdayMay 07, 201916:15
Seminar in Geometry and TopologyRoom 1
Speaker:Sasha SkripchenkoTitle:Novikov's problem and Arnoux-Rauzy IETAbstract:opens in new windowin html    pdfopens in new window

In 1981 P.Arnoux and J.-C.Yoccoz constructed their famous foliation on a surface of genus three with zero flux. Later it was shown that this example can be generalized, and in particular that there is an interesting fractal set of parameters that give rise to the foliations with similar properties. This fractal was named in honour of Gerard Rauzy.
In my talk I will briefly discuss how such a family of foliations appeared in different branches of mathematics (symbolic dynamics, Teichmuller dynamics, low-dimensional topology, geometric group theory) and even in theoretical physics (conductivity theory in monocrystals), and explain what we know about this family from the ergodic point of view.
The talk is based on joint work with Pascal Hubert and Artur Avila and on a work in progress with Ivan Dynnikov and Pascal Hubert.
 

TuesdayMay 07, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Luc IllusieTitle:Remarks on liftings mod $p^2$ and the Nygaard filtrationAbstract:opens in new windowin html    pdfopens in new window

I'll revisit decompositions of de Rham complexes in positive characteristic (Deligne-Illusie), by discussing relations between cotangent complexes, liftings mod $p^2$, and de Rham-Witt and derived de Rham complexes. Such relations have been recently observed independently by Bhargav Bhatt.

MondayMay 06, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Nathan Keller Title:On linear threshold functions and local Chernoff inequalitiesAbstract:opens in new windowin html    pdfopens in new window

A linear threshold function (LTF) is a Boolean function f:{-1,1}^n -> {0,1} of the form f(x) = 1_{ \sum a_i x_i > t}, for some fixed coefficients a_i and some threshold t. LTFs play an important role in complexity theory, machine learning, and other fields.

In this talk we present a new approach that allows us to obtain sharp results on Fourier-theoretic properties of biased LTFs. In particular, we determine the exact asymptotic order of the total influence and of the degree-1 Fourier weight of any biased LTF, in terms of its maximal (normalized) coefficient and its expectation. This provides a sharp generalization of theorems proved by Matulef, O'Donnell, Rubinfeld, and Servedio (in the context of property testing), and settles a conjecture posed by Kalai et al.

Our main tools are 'local' forms of the classical Chernoff inequality, like the following one proved by Devroye and Lugosi (2008): Let {x_i} be independent random variables uniformly distributed in {-1, 1}, and let a_i be nonnegative numbers such that \sum a_i^2 =1. If for some t > 0, we have Pr [\sum a_i x_i > t] = b, then Pr[\sum a_i x_i > t + delta] < b/2 holds for delta < c/ \sqrt {log(1/b)}, where c is a universal constant. Such inequalities seem to be little-known and probably can be useful in other contexts as well.

 

Joint work with Ohad Klein.

TuesdayApr 30, 201916:15
Seminar in Geometry and TopologyRoom 1
Speaker:Tyler KellyTitle:Hypergeometric Decompositions of Symmetric Quartic K3 PencilsAbstract:opens in new windowin html    pdfopens in new window

We will consider two different types of hypergeometric decompositions of special data associated to five pencils of quartic surfaces. We see that over the complex numbers, the middle cohomology of these five pencils yield hypergeometric Picard-Fuchs equations. Using these parameters, we then consider the same pencils over finite fields, decomposing their rational point counts using the finite-field hypergeometric functions with the same parameters as above. This is joint work with Charles Doran, Adriana Salerno, Steven Sperber, John Voight, and Ursula Whitcher.

TuesdayApr 30, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Jesua EpequinTitle:Howe correspondance between Harish-Chandra seriesAbstract:opens in new windowin html    pdfopens in new window

MondayApr 29, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Carmit Hazay Title:Zero-Knowledge from MPC-in-the-Head: Constructions and ApplicationsAbstract:opens in new windowin html    pdfopens in new window

MPC-in-the-head is a novel paradigm introduced in the work of Ishai et al. [IshaiKOS09] and, roughly speaking, allows the design of a zero-knowledge proof system for any NP-relation by relying on any multiparty computation (MPC) protocol in a modular way. On a high-level, in this transformation, the prover emulates ``in-its-head'' an execution of an MPC protocol that securely evaluates the NP-relation on the witness and commits to the views of the parties induced by this run. The verifier then tests the veracity of the computation by challenging the prover to open (decommit) the views of a subset of the parties. The key insight in the compilation is that the soundness and zero-knowledge property directly reduces to the robustness and simulation guarantees of the underlying MPC protocol.

ThursdayApr 18, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Michał Strzelecki Title:On modified log-Sobolev inequalitiesAbstract:opens in new windowin html    pdfopens in new window

In order to prove concentration estimates for (products of) measures with heavier tails than the standard Gaussian measure one can use several variants of the classical log-Sobolev inequality, e.g., Beckner-type inequalities of Latala and Oleszkiewicz or modified log-Sobolev inequalities of Gentil, Guillin, and Miclo. The main result I plan to present asserts that a probability measure on R^d which satisfies the former inequality satisfies also the latter. Based on joint work with Franck Barthe.

TuesdayApr 16, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Adam Gal Title:Bernstein’s second adjointness and Higher Hall algebrasAbstract:opens in new windowin html    pdfopens in new window

When one considers the functor of parabolic induction in various contexts there arises immediately the question of the existence of left or right adjoints. For example in the p-adic setting there is a natural left adjoint, but it was shown by Bernstein that in fact there is also a right adjoint, and they turn out to be isomorphic - this phenomenon is called "second adjointness". We explain how second adjointness is directly related to a natural braiding on a categorified version of the Hall algebra and describe the interplay between the two settings and lay out a strategy of how this connection can help understand both sides better.

MondayApr 15, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Noga Ron-ZewiTitle:Improved List Decoding of Algebraic CodesAbstract:opens in new windowin html    pdfopens in new window
We show that Folded Reed-Solomon codes achieve list decoding capacity with constant list sizes, independent of the block length. Prior work yielded list sizes that are polynomial in the block length, and relied on elaborate subspace evasive machinery to reduce the list sizes to constant. We further show that multiplicity codes exhibit similar behavior, and use this to obtain capacity achieving locally list decodable codes with query complexity significantly lower than was known before. Based on joint work with Swastik Kopparty, Shubhangi Saraf, and Mary Wootters.
ThursdayApr 11, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Lisa Hartung Title:From 1 to 6 in branching Brownian motionAbstract:opens in new windowin html    pdfopens in new window

Branching Brownian motion is a classical process in probability theory belonging to the class of "log-correlated random fields". It is well known, due to Bramson, that the order of the maximum has a different logarithmic correction than in the corresponding independent setting.
In this talk we look at a version of branching Brownian motion where we slightly vary the diffusion parameter in a way that, when looking at the order of the maximum, we can smoothly interpolate between the logarithmic correction for independent random variables ($\frac{1}{2\sqrt 2}\ln(t)$) and the logarithmic correction of BBM ($\frac{3}{2\sqrt 2}\ln(t)$) and the logarithmic correction of 2-speed BBM with increasing variances ($\frac{6}{2\sqrt 2}\ln(t)$). We also establish in all cases the asymptotic law of the maximum and characterise the extremal process, which turns out to coincide essentially with that of standard BBM. We will see that the key to the above results is a precise understanding of the entropic repulsion experienced by an extremal particle. (joint work with A. Bovier)

ThursdayApr 11, 201912:15
Vision and AIRoom 1
Speaker:Leonid KarlinskyTitle:Few-Shot Object XAbstract:opens in new windowin html    pdfopens in new window
Learning to classify and localize instances of objects that belong to new categories, while training on just one or very few examples, is a long-standing challenge in modern computer vision. This problem is generally referred to as 'few-shot learning'. It is particularly challenging for modern deep-learning based methods, which tend to be notoriously hungry for training data. In this talk I will cover several of our recent research papers offering advances on these problems using example synthesis (hallucination) and metric learning techniques and achieving state-of-the-art results on known and new few-shot benchmarks. In addition to covering the relatively well studied few-shot classification task, I will show how our approaches can address the yet under-studied few-shot localization and multi-label few-shot classification tasks.
WednesdayApr 10, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Yossi KeshetTitle:Deep Learning: Vulnerabilities, Defenses and BeyondAbstract:opens in new windowin html    pdfopens in new window

Deep learning has been amongst the most emerging fields in computer science and engineering. In recent years it has been shown that deep networks are vulnerable to attacks by adversarial examples. I will introduce a novel flexible approach named Houdini for generating adversarial examples for complex and structured tasks and demonstrate successful attacks on different applications such as speech recognition, pose estimation, semantic image segmentation, speaker verification, and malware detection. Then I will discuss how this weakness can be turned into two secure applications. The first is a new technique for watermarking deep network models in a black-box way. That is, concealing information within the model that can be used by the owner of the model to claim ownership. The second application is a novel method to Speech Steganography, namely hiding a secret spoken message within an ordinary public spoken message. I will conclude the talk by a brief discussion of our attempts to detect such adversarial attacks, based on multiple semantic label representations.
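Houdini targets the task losses of structured-prediction problems; as a minimal point of reference only, here is the classical fast gradient sign method (FGSM, not the method of the talk) applied to a plain logistic-regression model on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Train a simple logistic-regression "victim" model on synthetic data.
n, d = 1000, 30
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(300):
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / n

# FGSM (classical attack, shown for comparison only): perturb each input by
# epsilon times the sign of the gradient of the loss with respect to the input.
eps = 0.25
p = sigmoid(X @ w)
grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/dx for the logistic loss
X_adv = X + eps * np.sign(grad_x)

acc = ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean()
acc_adv = ((sigmoid(X_adv @ w) > 0.5) == y.astype(bool)).mean()
print("clean accuracy:", acc, "adversarial accuracy:", acc_adv)
```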

MondayApr 08, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Alex Grilo Title:Stoquastic PCP vs. RandomnessAbstract:opens in new windowin html    pdfopens in new window

The derandomization of MA, the probabilistic version of NP, is a long standing open question. In this talk, we connect this problem to a variant of another major problem: the quantum PCP conjecture. Our connection goes through the surprising quantum characterization of MA by Bravyi and Terhal. They proved the MA-completeness of the problem of deciding whether the ground energy of a uniform stoquastic local Hamiltonian is zero or inverse polynomial. We show that the gapped version of this problem, i.e. deciding if a given uniform stoquastic local Hamiltonian is frustration-free or has energy at least some constant, is in NP. Thus, if there exists a gap-amplification procedure for uniform stoquastic Local Hamiltonians (in analogy to the gap amplification procedure for constraint satisfaction problems in the original PCP theorem), then MA = NP (and vice versa). Furthermore, if this gap amplification procedure exhibits some additional (natural) properties, then P = RP.  This is a joint work with Dorit Aharonov.

ThursdayApr 04, 201912:15
Vision and AIRoom 1
Speaker:Tom Tirer Title:Image Restoration by Iterative Denoising and Backward ProjectionsAbstract:opens in new windowin html    pdfopens in new window

Inverse problems appear in many applications, such as image deblurring, inpainting and super-resolution. The common approach to address them is to design a specific algorithm (or recently - a deep neural network) for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this work, we propose an alternative method for solving inverse problems using off-the-shelf denoisers, which requires less parameter tuning (and can also be translated into fewer pre-trained denoising neural networks). First, we transform a typical cost function, composed of fidelity and prior terms, into a closely related, novel optimization problem. Then, we propose an efficient minimization scheme with a plug-and-play property, i.e., the prior term is handled solely by a denoising operation. Finally, we present an automatic tuning mechanism to set the method's parameters. We provide a theoretical analysis of the method, and empirically demonstrate its impressive results for image inpainting, deblurring and super-resolution. For the latter, we also present an image-adaptive learning approach that further improves the results.

MondayApr 01, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Zihan Tan Title:Improved Bounds for the Excluded Grid TheoremAbstract:opens in new windowin html    pdfopens in new window

We study the Excluded Grid Theorem, a fundamental structural result in graph theory, that was proved by Robertson and Seymour in their seminal work on graph minors. The theorem states that there is a function f, such that for every integer g > 0, every graph of treewidth at least f(g) contains the (gxg)-grid as a minor. For every integer g>0, let f(g) be the smallest value for which the theorem holds. Establishing tight bounds on f(g) is an important graph-theoretic question. Robertson and Seymour showed that f(g) is at least of order g^2 log g. For a long time, the best known upper bounds on f(g) were super-exponential in g. The first polynomial upper bound of f(g) = O(g^98 poly log g) was proved by Chekuri and Chuzhoy. It was later improved to f(g) = O(g^36 poly log g), and then to f(g) = O(g^19 poly log g). In this talk we present our recent work that further improves this bound to f(g) = O(g^9 poly log g) via a simpler proof. Moreover, while there are natural barriers that seem to prevent the previous methods from yielding tight bounds for the theorem, it seems conceivable that the techniques proposed in this thesis can lead to even tighter bounds on f(g).

SundayMar 31, 201915:00
Seminar in Geometry and TopologyRoom 155
Speaker:Slava KharlamovTitle:Segre indices, Welschinger weights, and an invariant signed count of real lines on real projective hypersurfaces. Abstract:opens in new windowin html    pdfopens in new windowNOTE UNUSUAL DAY AND TIME

As was observed a few years ago, there exists a certain signed count of real lines on real projective hypersurfaces of degree 2n+1 and dimension n that, contrary to the honest "cardinal" count, is independent of the choice of a hypersurface, and for this reason provides, as a consequence, a strong lower bound on the honest count. Originally, in this invariant signed count the input of a line was given by its local contribution to the Euler number of an appropriate auxiliary universal vector bundle.

The aim of the talk is to present other, in a sense more geometric, interpretations of the signs involved in the invariant count. In particular, this provides certain generalizations of Segre indices of real lines on cubic surfaces and Welschinger-Solomon weights of real lines on quintic threefolds.

This is a joint work with S.Finashin. 

ThursdayMar 28, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Ami Viselter Title:Convolution semigroups and generating functionals on quantum groupsAbstract:opens in new windowin html    pdfopens in new window

The theory of locally compact quantum groups grew out of the need to extend Pontryagin's duality for locally compact abelian groups to a wider class of objects, as well as from a modern "quantum" point of view suggesting the replacement of some algebras of functions on a group by non-commutative objects, namely operator algebras. In this talk, which will be split into two parts, we will show how several fundamental notions from probability and geometric group theory fit in this framework.

The first part will be an introduction to locally compact quantum groups. We will present the rationale and the definitions, give examples, and explain how the theory is related to other branches of math. If time permits, we will also touch upon more specific notions related to the second part.

In the second part we will discuss convolution semigroups of states, as well as generating functionals, on locally compact quantum groups. One type of example comes from probability: the family of distributions of a Lévy process forms a convolution semigroup, which in turn admits a natural generating functional. Another type of example comes from (locally compact) group theory, involving semigroups of positive-definite functions and conditionally negative-definite functions, which provide important information about the group's geometry. We will explain how these notions are related and how all this extends to the quantum world; derive geometric characterizations of two approximation properties of locally compact quantum groups; see how generating functionals may be (re)constructed and study their domains; and indicate how our results can be used to study cocycles.

Based on joint work with Adam Skalski.

No background in operator algebras will be assumed.

ThursdayMar 28, 201912:15
Vision and AIRoom 1
Speaker:Yizhak Ben-ShabatTitle:Classification, Segmentation, and Normal Estimation of 3D Point Clouds using Deep Learning Abstract:

Modern robotic and vision systems are often equipped with a direct 3D data acquisition device, e.g. a LiDAR or RGBD camera, which provides a rich 3D point cloud representation of the surroundings. Point clouds have been used successfully for localization and mapping tasks, but their use in semantic understanding has not been fully explored. Recent advances in deep learning methods for images along with the growing availability of 3D point cloud data have fostered the development of new 3D deep learning methods that use point clouds for semantic understanding. However, their unstructured and unordered nature makes them an unnatural input to deep learning methods. In this work we propose solutions to three semantic understanding and geometric processing tasks: point cloud classification, segmentation, and normal estimation. We first propose a new global representation for point clouds called the 3D Modified Fisher Vector (3DmFV). The representation is structured and independent of order and sample size. As such, it can be used with 3DmFV-Net, a newly designed 3D CNN architecture for classification. The representation introduces a conceptual change for processing point clouds by using a global and structured spatial distribution. We demonstrate the classification performance on the ModelNet40 CAD dataset and the Sydney outdoor dataset obtained by LiDAR. We then extend the architecture to solve a part segmentation task by performing per point classification. The results here are demonstrated on the ShapeNet dataset. We use the proposed representation to solve a fundamental and practical geometric processing problem of normal estimation using a new 3D CNN (Nesti-Net). To that end, we propose a local multi-scale representation called Multi Scale Point Statistics (MuPS) and show that using structured spatial distributions is as effective for local multi-scale analysis as for global analysis. We further show that multi-scale data integrates well with a Mixture of Experts (MoE) architecture. The MoE enables the use of semi-supervised scale prediction to determine the appropriate scale for a given local geometry. We evaluate our method on the PCPNet dataset. For all methods we achieved state-of-the-art performance without using an end-to-end learning approach.
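As a rough illustration of the idea of a structured, order-invariant point-cloud representation, here is a minimal numpy sketch: points are softly assigned to a fixed grid of Gaussians and per-Gaussian statistics are aggregated. This is a simplification for illustration only, not the authors' exact 3DmFV formulation; the grid size and bandwidth are arbitrary choices.

```python
import numpy as np

def soft_assignment_features(points, grid_size=4, sigma=0.25):
    """Order- and sample-size-invariant summary of a point cloud (N, 3) in [-1, 1]^3.
    A simplified stand-in for Fisher-vector style representations such as 3DmFV."""
    # Fixed Gaussian centers on a regular grid in [-1, 1]^3.
    axis = np.linspace(-1, 1, grid_size)
    centers = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)

    # Soft assignment (responsibility) of every point to every Gaussian.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)    # (N, K)
    resp = np.exp(-d2 / (2 * sigma ** 2))
    resp /= resp.sum(axis=1, keepdims=True) + 1e-12

    # Aggregate per-Gaussian statistics; normalizing by N keeps the result
    # independent of the number of points.
    return np.concatenate([resp.mean(0), resp.max(0), resp.sum(0) / len(points)])

# The representation does not depend on the ordering of the points.
cloud = np.random.uniform(-1, 1, size=(500, 3))
f1 = soft_assignment_features(cloud)
f2 = soft_assignment_features(cloud[np.random.permutation(len(cloud))])
assert np.allclose(f1, f2)
```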

WednesdayMar 27, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Roi Livni Title:Passing Tests Without Memorizing Abstract:

Generative Adversarial Networks (GANs) are a recent algorithmic framework that has attracted considerable attention. In a nutshell, GANs receive as input an IID sample and output synthetic data that should resemble data from the true underlying distribution. For example, consider an algorithm that receives as input some tunes from a specific music genre (e.g. jazz, rock, pop) and then outputs a new, original tune from that genre.

From a theoretical perspective, the distinction between algorithms that genuinely generate original new examples and algorithms that perform naive manipulations (or even mere memorization) of the input sample is elusive. This makes the theoretical analysis of GAN algorithms challenging.

In this work we introduce two mathematical frameworks for the task of generating synthetic data. The first model we consider is inspired by GANs, and the learning algorithm has only an indirect access to the target distribution via a discriminator. The second model, called DP-Foolability, exploits the notion of differential privacy as a criterion for "non-memorization".

We characterize learnability in each of these models as well as discuss the interrelations. As an application we prove that privately PAC learnable classes are DP-foolable.  As we will discuss, this can be seen as an analogue of the equivalence between uniform convergence and learnability in classical PAC learning.

Joint work with Olivier Bousquet and Shay Moran.
https://arxiv.org/pdf/1902.03468.pdf

TuesdayMar 26, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Jonathan Wang Title:The Drinfeld-Gaitsgory operator on automorphic functionsAbstract:

Let F be a function field and G a connected split reductive group over F. We define a "strange" operator between different spaces of automorphic functions on G(A)/G(F), and show that this operator is natural from the viewpoint of the geometric Langlands program via the functions-sheaves dictionary. We discuss how to define this operator over a number field by relating it to pseudo-Eisenstein series and inversion of the standard intertwining operator. This operator is also connected to Deligne-Lusztig duality and cohomological duality of representations over a local field.

TuesdayMar 26, 201911:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Amir Sagiv Title:Prediction of random and chaotic dynamics in nonlinear opticsAbstract:

The prediction of interactions between nonlinear laser beams is a longstanding open problem. A traditional assumption is that these interactions are deterministic. We have shown, however, that in the nonlinear Schrodinger equation (NLS) model of laser propagation, beams lose their initial phase information in the presence of input noise. Thus, the interactions between beams become unpredictable as well. Not all is lost, however. The statistics of many interactions are predictable by a universal model.

Computationally, the universal model is efficiently solved using a novel spline-based stochastic computational method. Our algorithm efficiently estimates probability density functions (PDF) that result from differential equations with random input. This is a new and general problem in numerical uncertainty-quantification (UQ), which leads to surprising results and analysis at the intersection of probability and approximation theory.
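To illustrate the underlying problem setting (not the talk's spline-based method), here is a minimal sketch of estimating the probability density of an ODE output when an input parameter is random, using plain Monte Carlo and a histogram. The toy ODE, parameter distribution, and all names are illustrative assumptions only.

```python
import numpy as np

def solve_ode(alpha, t_end=1.0, dt=1e-3):
    """Forward-Euler solution of a toy ODE  y' = -alpha * y,  y(0) = 1."""
    y = 1.0
    for _ in range(int(t_end / dt)):
        y += dt * (-alpha * y)
    return y

# Random input: alpha ~ Uniform(0.5, 1.5). Push it through the ODE and
# estimate the density of y(1) with a histogram -- a crude stand-in for the
# spline-based density estimator described in the abstract.
rng = np.random.default_rng(0)
samples = np.array([solve_ode(a) for a in rng.uniform(0.5, 1.5, size=5000)])
density, edges = np.histogram(samples, bins=50, density=True)
print("estimated PDF supported on", edges[0], "to", edges[-1])
```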

MondayMar 25, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Yuval IshaiTitle:Homomorphic Secret Sharing Abstract:

A homomorphic secret-sharing scheme is a secret-sharing scheme that allows locally mapping shares of a secret to compact shares of a function of the secret. The talk will survey the current state of the art on homomorphic secret sharing, covering efficient constructions, applications in cryptography and complexity theory, and open questions.
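As a minimal sketch of the linear (additive) flavor of this notion: shares of secrets can be mapped locally to shares of a function of the secrets, here simply their sum. This illustrates the general idea only; it is not any of the concrete constructions surveyed in the talk, and the modulus is an arbitrary choice.

```python
import secrets

P = 2**61 - 1  # a public prime modulus (illustrative)

def share(secret, n=3):
    """Additive secret sharing of `secret` into n shares modulo P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Homomorphic property: adding shares locally yields shares of the sum.
x_shares, y_shares = share(20), share(22)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]  # local work only
assert reconstruct(sum_shares) == 42
```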

ThursdayMar 21, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Zemer Kosloff Title:On the local limit theorem in dynamical systemsAbstract:
In 1987, Burton and Denker proved the remarkable result that in every aperiodic dynamical system (including irrational rotations, for example) there is a square integrable, zero mean function such that its corresponding time series satisfies a CLT. Subsequently, Volny showed that one can find a function which satisfies the strong (almost sure) invariance principle. All these constructions resulted in a non-lattice distribution. In a joint work with Dalibor Volny we show that there exists an integer valued cocycle which satisfies the local limit theorem. The first hour will involve painting (Rokhlin towers) while the second one will be mainly concerned with the proof of the local CLT.
ThursdayMar 14, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Marta Strzelecka Title:On k-maxima of log-concave vectorsAbstract:

We establish two-sided bounds for expectations of order statistics (k-th maxima) of moduli of coordinates of centred log-concave random vectors with uncorrelated coordinates. Our bounds are exact up to multiplicative universal constants in the unconditional case for all k and in the isotropic case for k < c n^{5/6}. We also derive two-sided estimates for expectations of sums of k largest moduli of coordinates for some classes of random vectors. The talk will be based on the joint work with Rafal Latala.  

WednesdayMar 13, 201911:00
Seminar in Geometry and TopologyRoom 155
Speaker:Guillaume TaharTitle:Meromorphic differentials with prescribed singularities : a refinement of a classical Mittag-Leffler theoremAbstract: NOTE THE UNUSUAL DAY

A classical theorem of Mittag-Leffler asserts that on a given Riemann surface X, for any pattern of multiplicities of poles and any configuration of residues (summing to zero), there is a meromorphic 1-form on X that realizes them. The only obstruction is that residues at simple poles should be nonzero.

If we require that the multiplicities of the zeroes are also prescribed, the problem can be reformulated in terms of strata of meromorphic differentials. Using the dictionary between complex analysis and flat geometry, we are able to provide a complete characterization of the configurations of residues that are realized for a given pattern of singularities. Two nontrivial obstructions appear, concerning the combinatorics of the multiplicities of zeroes and the arithmetic of the residues. This is joint work with Quentin Gendron.

ThursdayMar 07, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Ori Gurel-Gurevich Title:Random walks on planar graphs Abstract:

We will discuss several results relating the behavior of a random walk on a planar graph and the geometric properties of a nice embedding of the graph in the plane (specifically, a circle packing of the graph). One such result is that for a bounded degree graph, the simple random walk is recurrent if and only if the boundary of the nice embedding is a polar set (that is, Brownian motion misses it almost surely). If the degrees are unbounded, this is no longer true, but for the case of a circle packing of a triangulation, there are weights which are obtained naturally from the circle packing, such that when the boundary is polar, the weighted random walk is recurrent (we believe the converse also holds). These weights arise also in the context of discrete holomorphic and harmonic functions, a discrete analog of complex holomorphic functions. We show that as the sizes of the circles, or more generally, the lengths of the edges in the nice embedding of the graph tend to zero, the discrete harmonic functions converge to their continuous counterparts with the same boundary conditions. Equivalently, the exit measure of the weighted random walk converges to the exit measure of standard Brownian motion. This improves previous results of Skopenkov (2013) and Werness (2015), who proved similar results under additional local and global assumptions on the embedding. In particular, we make no assumptions on the degrees of the graph, making the result applicable to models of random planar maps.

Based on joint works with Daniel Jerison, Asaf Nachmias, Matan Seidel and Juan Souto.

ThursdayFeb 28, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Fanny Augeri (WIS) + Elliot Paquette (Ohio)Title:DOUBLE SEMINARAbstract:

Speaker 1: Fanny Augeri (WIS)
Title : Nonlinear large deviations bounds with applications to sparse Erdos-Renyi graphs.
Abstract: In this talk, I will present the framework of the so-called nonlinear large deviations introduced by Chatterjee and Dembo. In a seminal paper, they provided a sufficient criterion for the large deviations of a function on the discrete hypercube to be due only to a change of the mean of the background measure. This sufficient condition was formulated in terms of the complexity of the gradient of the function of interest. I will present general nonlinear large deviation estimates similar to Chatterjee-Dembo's original bounds, except that we do not require any second order smoothness. The approach relies on convex analysis arguments and is valid for a broad class of distributions. Then, I will detail an application of these nonlinear large deviations bounds to the problem of estimating the upper tail of cycle counts in sparse Erdos-Renyi graphs down to the connectivity parameter $n^{-1/2}$.

Speaker 2: Elliot Paquette (Ohio)
Title: The Gaussian analytic function is either bounded or covers the plane
Abstract: The Gaussian analytic function (GAF) is a power series with independent Gaussian coefficients. In the case that this power series has radius of convergence 1, under mild regularity assumptions on the coefficients, it is a classical theorem that the power series is a.s. bounded on the open disk if and only if it extends continuously to a function on the closed unit disk a.s. Nonetheless, there exists a natural range of coefficients in which the GAF has boundary values in L^p, but is a.s. unbounded. How wild are these boundary values? Well, Kahane asked if a GAF either a.s. extends continuously to the closed disk or a.s. has range covering the whole plane. We show partial progress in establishing this in the affirmative.
Joint with Alon Nishry.
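Regarding the second talk: a minimal numpy sketch of the object in question, a random power series with i.i.d. Gaussian coefficients, truncated and evaluated inside the unit disk. The truncation degree and coefficient variances are illustrative assumptions; different variance profiles give the different boundary behaviors discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400                                    # truncation degree (illustrative)
coeffs = rng.standard_normal(N + 1)        # i.i.d. real Gaussian coefficients
# (complex Gaussians, or variances decaying with the degree, are also common)

def gaf(z):
    """Evaluate the truncated random power series sum_k a_k z^k."""
    return np.polynomial.polynomial.polyval(z, coeffs)

# Growth of the maximum modulus on circles approaching |z| = 1.
for r in (0.5, 0.9, 0.99):
    theta = np.linspace(0, 2 * np.pi, 1000)
    print(r, np.abs(gaf(r * np.exp(1j * theta))).max())
```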

TuesdayFeb 26, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Doron Gepner Title:On the algebraic structure of solvable lattice models and knot theoryAbstract:

TBA

MondayFeb 25, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Thatchaphol Saranurak Title:Expander decomposition: fast static and dynamic algorithmsAbstract:

An (\epsilon,\phi)-expander decomposition of a graph G=(V,E) with m edges is a partition of the vertices into clusters such that (1) each cluster induces a subgraph with conductance at least \phi, and (2) the number of inter-cluster edges is at most \epsilon m. This decomposition has a wide range of applications, including approximation algorithms for the Unique Games problem, fast algorithms for flow and cut problems, and dynamic graph algorithms.

I will describe a new algorithm for constructing (~O(\phi),\phi)-expander decomposition in ~O(m/\phi) time. This is the first nearly linear time algorithm when \phi is at least 1/polylog(m), which is the case in most practical settings and theoretical applications. Previous results either take \Omega(m^{1+o(1)}) time, or attain nearly linear time but with a weaker expansion guarantee where each output cluster is guaranteed to be contained inside some unknown expander.

Our technique can be easily extended to the dynamic setting, where the graph undergoes updates. This talk is based on joint work with Di Wang [Saranurak Wang SODA'19].
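To make the definition above concrete, here is a minimal checker for the two requirements of an (eps, phi)-expander decomposition on a toy graph. Conductance is computed by brute force (exponential in the cluster size), so this is only an illustration of the definition, not of the fast algorithms in the talk.

```python
from itertools import combinations

def conductance(adj, cluster):
    """Exact conductance of the subgraph induced by `cluster` (brute force,
    exponential in |cluster| -- fine only for toy examples)."""
    cluster = set(cluster)
    nbrs = {v: adj[v] & cluster for v in cluster}          # induced adjacency
    vol = {v: len(nbrs[v]) for v in cluster}
    best, verts = float("inf"), sorted(cluster)
    for r in range(1, len(verts)):
        for S in combinations(verts, r):
            S = set(S)
            cut = sum(len(nbrs[v] - S) for v in S)
            denom = min(sum(vol[v] for v in S), sum(vol[v] for v in cluster - S))
            if denom > 0:
                best = min(best, cut / denom)
    return best

def check_decomposition(edges, clusters, phi, eps):
    """Check (1) conductance >= phi inside each cluster, (2) few inter-cluster edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    where = {v: i for i, c in enumerate(clusters) for v in c}
    inter = sum(1 for u, v in edges if where[u] != where[v])
    ok_cond = all(conductance(adj, c) >= phi for c in clusters if len(c) > 1)
    return ok_cond and inter <= eps * len(edges)

# Toy example: two triangles joined by a single edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(check_decomposition(edges, [{0, 1, 2}, {3, 4, 5}], phi=0.5, eps=0.2))
```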

ThursdayFeb 21, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Houcein Abdalaoui Title:On Banach problem and simple Lebesgue spectrumAbstract:

Following Ulam, Banach asked about the existence of a dynamical system on the real line with simple Lebesgue spectrum. I will discuss the connection of this problem to the famous Banach problem in ergodic theory, due essentially to Rokhlin. I will further present my recent contribution to this problem and the connection to the so-called Erdos flat polynomials problem in harmonic analysis, due to J. Bourgain. My talk is based on my recent work and joint work with Mahendra Nadkarni (CBS Mumbai, India).

MondayFeb 18, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Shachar Carmeli Title:Ambidexterity in Chromatic Homotopy Theory Abstract: Part 1 at 11:15-12:30 Part 2 at 13:00-14:00

Poincare duality provides an isomorphism between the homology and cohomology of a compact manifold, up to a shift. For \pi-finite spaces, i.e. spaces with finitely many non-zero homotopy groups, all of which are finite, there is a similar duality only for Q-coefficients, but no such duality exists with F_p coefficients. However, as shown by Michael Hopkins and Jacob Lurie, there is a duality between the homology and cohomology of \pi-finite spaces with coefficients in some extraordinary cohomology theories called Morava K-theories. This property of Morava K-theory is called ambidexterity. I will explain what ambidexterity is, some of its consequences, and our contribution to the subject.

This is a joint work with Lior Yanovski and Tomer Schlank.

ThursdayFeb 14, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Rafael Butez Title:On the extremal particles of a Coulomb gas and random polynomialsAbstract:

The purpose of this talk is to understand the behavior of the extremal zeros of random polynomials of the form $P_N(z) = \sum_{k=0}^{N} a_k R_k(z)$, where the family $(R_k)_{k \leq N}$ is an orthonormal basis for the scalar product $\langle P,Q \rangle = \int P(z) \overline{Q(z)} e^{-2N V^{\nu}(z)} d\nu(z)$, with $\nu$ a radial probability measure on $\mathbb{C}$ and $V^{\nu}(z)= \int \log |z-w| \, d\nu(w)$.

Although the zeros of these polynomials spread like the measure $\nu$, the zeros of maximum modulus lie outside of the support. More precisely, the point process of the roots outside of the support of the equilibrium measure converges towards the Bergman point process of the complement of the support.

We also study similar results on a model of Coulomb gases in dimension $2$ where the confining potential is generated by the presence of a fixed background of positive charges. If $\nu$ is a probability measure, we study the system of particles with joint distribution on $\mathbb{C}^N$, $\frac{1}{Z_N} \prod_{i < j} |x_i-x_j|^2 e^{-2(N+1)\sum_{k=1}^{N}V^{\nu}(x_k)} d\ell_{\mathbb{C}^N}(x_1,\dots,x_N).$ This model is closely related to the study of the zeros of random polynomials. We show that the extremal particles of this Coulomb gas present a similar behavior to the random polynomial setting.

All the results mentioned above are done in collaboration with David Garcia-Zelada.
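A minimal numpy experiment in the spirit of the setup above, using the simplest basis R_k(z) = z^k with i.i.d. complex Gaussian coefficients rather than the weighted orthonormal bases of the talk; it just looks at where the zeros of largest modulus fall relative to the bulk. The degree and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Random polynomial P_N(z) = sum_k a_k z^k with i.i.d. complex Gaussian a_k.
# (A Kac-type simplification of the weighted bases described in the abstract.)
coeffs = rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
roots = np.roots(coeffs[::-1])             # np.roots expects highest degree first

moduli = np.sort(np.abs(roots))
print("typical zero modulus:", np.median(moduli))
print("five largest moduli: ", moduli[-5:])
```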

ThursdayFeb 07, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Tsviqa Lakrec Title:Scenery Reconstruction for a Random Walk on Random Scenery with Adversarial Error InsertionAbstract:

Consider a simple random walk on $\mathbb{Z}$ with a random coloring of $\mathbb{Z}$. Look at the sequence of the first $N$ steps taken and colors of the visited locations. From it, you can deduce the coloring of approximately $\sqrt{N}$ integers. Suppose an adversary may change $\delta N$ entries in that sequence. What can be deduced now? We show that for any $\theta<0.5$ and $p>0$, there are $N_{0},\delta_{0}$ such that if $N>N_{0}$ and $\delta<\delta_{0}$ then with probability $>1-p$ we can reconstruct the coloring of $>N^{\theta}$ integers.

ThursdayFeb 07, 201912:15
Vision and AIRoom 1
Speaker:Tavi HalperinTitle:Using visual and auditory cues for audio enhancement Abstract:

In this talk I will present two recent works:

1) Dynamic Temporal Alignment of Speech to Lips

Many speech segments in movies are re-recorded in a studio during post-production, to compensate for poor sound quality as recorded on location. Manual alignment of the newly-recorded speech with the original lip movements is a tedious task. We present an audio-to-video alignment method for automating this process, stretching and compressing the audio signal to match the lip movements. This alignment is based on deep audio-visual features, mapping the lips video and the speech signal to a shared representation. Using this shared representation we compute the lip-sync error between every short speech period and every video frame, followed by the determination of the optimal corresponding frame for each short sound period over the entire video clip (a schematic alignment sketch appears after this abstract). We demonstrate successful alignment both quantitatively, using a human perception-inspired metric, and qualitatively. The strongest advantage of our audio-to-video approach is in cases where the original voice is unclear, and where a constant shift of the sound cannot give perfect alignment. In these cases, state-of-the-art methods will fail.

2) Neural separation of observed and unobserved distributions

Separating mixed distributions is a long-standing challenge for machine learning and signal processing. Most current methods either rely on making strong assumptions on the source distributions or rely on having training samples of each source in the mixture. In this work, we introduce a new method - Neural Egg Separation - to tackle the scenario of extracting a signal from an unobserved distribution additively mixed with a signal from an observed distribution. Our method iteratively learns to separate the known distribution from progressively finer estimates of the unknown distribution. In some settings, Neural Egg Separation is sensitive to initialization; we therefore introduce Latent Mixture Masking, which ensures a good initialization. Extensive experiments on audio and image separation tasks show that our method outperforms current methods that use the same level of supervision, and often achieves similar performance to full supervision.
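Regarding the first work: the alignment step described above, matching every short audio period to a video frame while allowing stretching and compressing, is in the spirit of dynamic time warping over a cost matrix of embedding distances. The sketch below is a generic DTW on synthetic embeddings, not the paper's actual audio-visual features or alignment algorithm.

```python
import numpy as np

def dtw_path(cost):
    """Dynamic time warping over a (T_audio, T_video) cost matrix.
    Returns the monotone alignment path with minimal accumulated cost."""
    T, V = cost.shape
    acc = np.full((T + 1, V + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, V + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # stretch audio
                                                 acc[i, j - 1],      # compress audio
                                                 acc[i - 1, j - 1])  # advance both
    path, i, j = [], T, V
    while i > 0 and j > 0:                                           # backtrack
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0: i, j = i - 1, j - 1
        elif step == 1: i -= 1
        else: j -= 1
    return path[::-1]

# Toy shared-embedding sequences: the audio is a time-stretched, noisy video.
rng = np.random.default_rng(0)
video = rng.standard_normal((40, 16))
audio = video[np.repeat(np.arange(40), 2)[:60]] + 0.1 * rng.standard_normal((60, 16))
cost = np.linalg.norm(audio[:, None, :] - video[None, :, :], axis=-1)
print(dtw_path(cost)[:5])
```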

TuesdayFeb 05, 201916:15
Seminar in Geometry and TopologyRoom 1
Speaker:Solomon (Soli) VishkautsanTitle:The field of definition of the iterates of a rational functionAbstract:

We will discuss the field of definition of a rational function and in what ways it can change under iteration, in particular when the degree over the base field drops. We present two families of rational functions with the property above, and prove that in the special case of polynomials, only one of these families is possible. We also explain how this relates to Ritt's decomposition theorem on polynomials. Joint work with Francesco Veneziano (SNS Pisa).

TuesdayFeb 05, 201911:00
The Chaim Leib Pekeris Memorial Lecture
Speaker:Tim RoughgardenTitle:How Computer Science Informs Modern Auction DesignAbstract: Dolfi and Lola Ebner Auditorium
Over the last twenty years, computer science has relied on concepts borrowed from game theory and economics to reason about applications ranging from internet routing to real-time auctions for online advertising. More recently, ideas have increasingly flowed in the opposite direction, with concepts and techniques from computer science beginning to influence economic theory and practice. In this lecture, Tim Roughgarden will illustrate this point with a detailed case study of the 2016-2017 Federal Communications Commission incentive auction for repurposing wireless spectrum. Computer science techniques, ranging from algorithms for NP-hard problems to nondeterministic communication complexity, have played a critical role both in the design of the reverse auction (with the government procuring existing licenses from television broadcasters) and in the analysis of the forward auction (when the procured licenses sell to the highest bidder).
MondayFeb 04, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Eylon Yogev Title:The Power of Distributed Verifiers in Interactive Proofs Abstract:

We explore the power of interactive proofs with a distributed verifier. In this setting, the verifier consists of n nodes and a graph G that defines their communication pattern. The prover is a single entity that communicates with all nodes by short messages. The goal is to verify that the graph G belongs to some language in a small number of rounds, and with small communication bound, i.e., the proof size.

This interactive model was introduced by Kol, Oshman and Saxena (PODC 2018) as a generalization of non-interactive distributed proofs. They demonstrated the power of interaction in this setting by constructing protocols for problems such as Graph Symmetry and Graph Non-Isomorphism -- both of which require proofs of Ω(n^2) bits without interaction.

In this work, we provide a new general framework for distributed interactive proofs that allows one to translate standard interactive protocols (i.e., with a centralized verifier) to ones where the verifier is distributed with a proof size that depends on the computational complexity of the verification algorithm run by the centralized verifier. We show the following:

* Every (centralized) computation performed in time O(n) on a RAM can be translated into a three-round distributed interactive protocol with O(log n) proof size. This implies that many graph problems for sparse graphs have succinct proofs (e.g., testing planarity).

* Every (centralized) computation implemented either in small space or by a uniform NC circuit can be translated into a distributed protocol with O(1) rounds and O(log n)-bit proofs in the small-space case, and with polylog(n) rounds and proof size for NC.

* We show that for Graph Non-Isomorphism, one of the striking demonstrations of the power of interaction, there is a 4-round protocol with O(log n) proof size, improving upon the O(n log n) proof size of Kol et al.

* For many problems, we show how to reduce the proof size below the seemingly natural barrier of log n. By employing our RAM compiler, we get a 5-round protocol with proof size O(log log n) for a family of problems including Fixed Automorphism, Clique and Leader Election (for the latter two problems we actually get O(1) proof size).

* Finally, we discuss how to make these proofs non-interactive arguments via random oracles.

Our compilers capture many natural problems and demonstrate the difficulty in showing lower bounds in these regimes.
Joint work with Moni Naor and Merav Parter.

SundayFeb 03, 201913:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Fermi Ma Title:New Techniques for Obfuscating ConjunctionsAbstract: SPECIAL CRYPTOGRAPHY SEMINAR

A conjunction over a binary alphabet is a boolean function specified by a length n pattern of 0's, 1's and wildcards. On input bit strings of length n, the function outputs 1 if the input matches the pattern at all non wildcard positions. At CRYPTO 2018, Bishop et al. proposed a simple and elegant construction to obfuscate this class of functions by embedding the pattern in the error positions of a noisy Reed-Solomon codeword, and placing the codeword in a group exponent. They prove their scheme achieves a notion of security called "distributional virtual black box" in the generic group model for random conjunctions with at most 0.774n wildcards.
In this talk, I'll show how to abstract the Bishop et al. scheme to obtain a significantly more efficient "dual" scheme. In the generic group model, our scheme admits an intuitive proof of security and does not place any restrictions on the number of wildcards.
Next, I'll describe a simple modification to the construction that avoids encoding in a group exponent and is secure under the Learning Parity with Noise (LPN) assumption. At the heart of our security proof is a reduction from standard LPN to LPN with "structured error."

No prior knowledge of the Bishop et al. paper will be assumed.
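For concreteness, here is a minimal sketch of the function class being obfuscated: a conjunction specified by a pattern over {0, 1, *}, accepting an input exactly when it agrees with the pattern at every non-wildcard position. This is just the plaintext function; the constructions discussed in the talk are about hiding the pattern.

```python
def make_conjunction(pattern):
    """pattern: a string over '0', '1' and '*' (wildcard)."""
    def f(x):                        # x: a bit string of the same length n
        return all(p == '*' or p == b for p, b in zip(pattern, x))
    return f

f = make_conjunction("1*0*")
assert f("1101") and f("1000")       # wildcard positions 1 and 3 are unconstrained
assert not f("0100")                 # disagrees at the first, non-wildcard, position
```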

ThursdayJan 31, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Dan Mikulincer Title:Quantitative high-dimensional CLTs via martingale embeddingsAbstract:

We introduce a new method for obtaining quantitative convergence rates for the central limit theorem (CLT) in the high dimensional regime. The method is based on the notion of a martingale embedding, a multivariate analogue of Skorokhod's embedding. Using the method we are able to obtain several new bounds for convergence in transportation distance and in entropy, and in particular: (a) We improve the best known bound, for convergence in quadratic Wasserstein transportation distance for bounded random vectors; (b) We derive a non-asymptotic convergence rate for the entropic CLT in arbitrary dimension, for log-concave random vectors; (c) We give an improved bound for convergence in transportation distance under a log-concavity assumption and improvements for both metrics under the assumption of strong log-concavity.

In this talk, we will review the method, and explain how one might use it in order to prove quantitative statements about rates of convergence.

ThursdayJan 31, 201912:15
Vision and AIRoom 1
Speaker:Heli Ben-Hamu Title:Multi-chart Generative Surface Modeling Abstract:

We introduce a 3D shape generative model based on deep neural networks. A new image-like (i.e., tensor) data representation for genus-zero 3D shapes is devised. It is based on the observation that complicated shapes can be well represented by multiple parameterizations (charts), each focusing on a different part of the shape.

The new tensor data representation is used as input to Generative Adversarial Networks for the task of 3D shape generation. The 3D shape tensor representation is based on a multi-chart structure that enjoys a shape covering property and scale-translation rigidity. Scale-translation rigidity facilitates high quality 3D shape learning and guarantees unique reconstruction. The multi-chart structure uses as input a dataset of 3D shapes (with arbitrary connectivity) and a sparse correspondence between them.

The output of our algorithm is a generative model that learns the shape distribution and is able to generate novel shapes, interpolate shapes, and explore the generated shape space. The effectiveness of the method is demonstrated for the task of anatomic shape generation including human body and bone (teeth) shape generation.

WednesdayJan 30, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Ronen TalmonTitle: Data Analysis with the Riemannian Geometry of Symmetric Positive-Definite MatricesAbstract:

Recently, the Riemannian geometry of the space of symmetric positive-definite matrices has become a central ingredient in a broad range of data analysis and learning tools. Broadly, it enables us to observe complex high-dimensional data through the lens of objects with a known non-Euclidean geometry. In this talk, I will present a method for domain adaptation using parallel transport on the cone manifold of symmetric positive-definite matrices. Using Riemannian geometry, I will show the theoretical guarantees and discuss the benefits of the presented method. In addition, I will demonstrate these benefits on simulations as well as on real-measured data. While the majority of the talk will be focused on the particular case of symmetric positive-definite covariance matrices, I will present generalizations to kernels, diffusion operators, and graph-Laplacians and show intriguing properties using spectral analysis, with applications to source separation and multimodal data fusion.
* Joint work with Or Yair, Ori Katz, and Miri Ben-Chen
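A minimal sketch of one ingredient mentioned above: parallel transport on the SPD cone under the affine-invariant geometry, which moves a matrix S associated with reference point A to reference point B via E = (B A^{-1})^{1/2} and S -> E S E^T. This is a standard formula in this line of work; the full domain-adaptation pipeline of the talk involves more than this single step, and the random test matrices are illustrative only.

```python
import numpy as np
from scipy.linalg import sqrtm

def random_spd(n, rng):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

def transport(S, A, B):
    """Parallel transport of S from A to B on the SPD cone
    (affine-invariant metric): E = (B A^{-1})^{1/2}, S -> E S E^T."""
    E = np.real(sqrtm(B @ np.linalg.inv(A)))   # discard tiny imaginary noise
    return E @ S @ E.T

rng = np.random.default_rng(0)
A, B, S = (random_spd(4, rng) for _ in range(3))
S_moved = transport(S, A, B)
# Sanity check: transporting A itself lands exactly on B.
assert np.allclose(transport(A, A, B), B, atol=1e-8)
```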

MondayJan 28, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Shiri Chechik Title:Near-Optimal Approximate Decremental All Pairs Shortest PathsAbstract:

Computing shortest paths is one of the fundamental problems of graph algorithms. The goal of *dynamic* all pairs shortest paths (APSP) is to maintain shortest path trees from all vertices as the edges of the graph change over time. The algorithm is said to be decremental if it handles only deletions, incremental if it handles only insertions and fully dynamic if it handles both deletions and insertions.

In this talk I will present a near optimal decremental algorithm that maintains approximate all pairs shortest paths.

ThursdayJan 24, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Noam Lifshitz Title:Sharp thresholds for sparse functions with applications to extremal combinatorics. Abstract:

The sharp threshold phenomenon is a central topic of research in the analysis of Boolean functions. Here, one aims to give sufficient conditions for a monotone Boolean function f to satisfy $\mu_p(f)=o(\mu_q(f))$, where $q = p + o(p)$, and $\mu_p(f)$ is the probability that $f=1$ on an input with independent coordinates, each taking the value $1$ with probability $p$. The dense regime, where $\mu_p(f)=\Theta(1)$, is somewhat understood due to seminal works by Bourgain, Friedgut, Hatami, and Kalai. On the other hand, the sparse regime where $\mu_p(f)=o(1)$ was out of reach of the available methods. However, the potential power of the sparse regime was envisioned by Kahn and Kalai already in 2006. In this talk we show that if a monotone Boolean function $f$ with $\mu_p(f)=o(1)$ satisfies some mild pseudo-randomness conditions then it exhibits a sharp threshold in the interval $[p,q]$, with $q = p+o(p)$. More specifically, our mild pseudo-randomness hypothesis is that the $p$-biased measure of $f$ does not bump up to $\Theta(1)$ whenever we restrict $f$ to a sub-cube of constant co-dimension, and our conclusion is that we can find $q=p+o(p)$, such that $\mu_p(f)=o(\mu_q(f))$. At its core, this theorem stems from a novel hypercontractive theorem for Boolean functions satisfying pseudorandom conditions, which we call `small generalized influences'. This result takes on the role of the usual hypercontractivity theorem, but is significantly more effective in the regime where $p = o(1)$. We demonstrate the power of our sharp threshold result by reproving the recent breakthrough result of Frankl on the celebrated Erdos matching conjecture, and by proving conjectures of Huang--Loh--Sudakov and Furedi--Jiang for a new wide range of the parameters. Based on a joint work with Keevash, Long, and Minzer.

ThursdayJan 24, 201912:15
Vision and AIRoom 1
Speaker:Idan Kligvasser Title:xUnit: Learning a Spatial Activation Function for Efficient Visual InferenceAbstract:

In recent years, deep neural networks (DNNs) achieved unprecedented performance in many vision tasks. However, state-of-the-art results are typically achieved by very deep networks, which can reach tens of layers with tens of millions of parameters. To make DNNs implementable on platforms with limited resources, it is necessary to weaken the tradeoff between performance and efficiency. In this work, we propose a new activation unit, which is suitable for both high and low level vision problems. In contrast to the widespread per-pixel activation units, like ReLUs and sigmoids, our unit implements a learnable nonlinear function with spatial connections. This enables the net to capture much more complex features, thus requiring a significantly smaller number of layers in order to reach the same performance. We illustrate the effectiveness of our units through experiments with state-of-the-art nets for classification, denoising, and super resolution. With our approach, we are able to reduce the size of these models by nearly 50% without incurring any degradation in performance.

*Spotlight presentation at CVPR'18 (+ submitted extension)

A joint work with Tamar Rott Shaham and Tomer Michaeli
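A schematic numpy illustration of the idea of a spatial (rather than per-pixel) activation: each pixel is re-weighted by a gate computed from a spatial neighborhood of the feature map. In xUnit the gating branch is learnable (convolutions, normalization, etc.); the fixed Gaussian smoothing below is only a stand-in for that learned branch, so this is not the paper's exact unit.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_gate_activation(x, sigma=1.5):
    """Gate each pixel of a feature map x (H, W) by a value computed from a
    spatial neighborhood, instead of a per-pixel nonlinearity such as ReLU.
    The Gaussian blur is a fixed stand-in for a learned gating branch."""
    local = gaussian_filter(np.maximum(x, 0.0), sigma=sigma)   # spatial aggregation
    gate = 1.0 / (1.0 + np.exp(-local))                        # squash to (0, 1)
    return x * gate                                            # multiplicative gating

feature_map = np.random.default_rng(0).standard_normal((32, 32))
print(spatial_gate_activation(feature_map).shape)
```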

WednesdayJan 23, 201915:30
Algebraic Geometry and Representation Theory SeminarRoom A
Speaker:Pavel Etingof Title:Symmetric tensor categories in positive characteristicAbstract: NOTE SPECIAL DAY AND PLACE

I will talk about my joint work with Dave Benson which constructs new symmetric tensor categories in characteristic 2 arising from modular representation theory of elementary abelian 2-groups, and about its conjectural generalization to characteristic p>2. I will also discuss my work with Gelaki and Coulembier which shows that integral symmetric tensor categories in characteristic p>2 whose simple objects form a fusion category are super-Tannakian (i.e., representation categories of a supergroup scheme). 

WednesdayJan 23, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Amir WeissTitle:The Gaussian Maximum Likelihood Approach for Independent Component and Vector AnalysisAbstract:

The Blind Source Separation (BSS) problem consists of retrieving signals of interest, termed the sources, from a dataset consisting of their mixtures. One of the most popular and common paradigms for solving the BSS problem is Independent Component Analysis (ICA), where the sources are assumed to be (only) mutually statistically independent random processes, and the mixtures are assumed to be linear combinations thereof, where the linear mixing operator is unknown. In this talk, we shall start with the Gaussian Maximum Likelihood (GML) approach for the semi-blind problem, in which the sources are assumed to be temporally-diverse Gaussian processes. Based on the principles of this approach, we shall then discuss two extensions. First, the noisy Gaussian ICA problem, for which two asymptotically optimal solutions, w.r.t. two different optimality criteria, will be presented. We shall see (both analytically and empirically) that these solutions possess attractive properties even for non-Gaussian mixtures. Then, we shall consider the Independent Vector Analysis (IVA) framework, which has emerged in recent years as an extension of ICA into multiple datasets of mixtures. In IVA, the sources in each set are independent, but may depend on sources in the other sets. We will show that in IVA, the GML approach leads to consistent separation regardless of the sources' distributions.

TuesdayJan 22, 201916:15
Seminar in Geometry and TopologyRoom 1
Speaker:Avner Kiro Title:On Taylor coefficients of smooth functionsAbstract:

The talk will be about two classical problems in the theory of Carleman classes of smooth functions. The first one is to describe the image of a Carleman class under the Borel map (which maps a smooth function to its jet of Taylor coefficients at a given point). The second one concerns possible ways to construct a function in a given Carleman class with prescribed Taylor coefficients. I will present solutions to both problems. If time permits, I will also discuss related problems which originate in the singularity theory of Carleman classes.

TuesdayJan 22, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Avner Segal Title:Structure of degenerate principal series of exceptional groupsAbstract:

Determining the reducibility and structure of parabolic inductions is a basic problem in the representation theory of p-adic groups. Of particular interest are principal series and degenerate principal series representations, that is, parabolic inductions of 1-dimensional representations of Levi subgroups.

In this talk, I will start by describing the functor of normalized induction and its left adjoint, the Jacquet functor; then, by going through several examples in the group SL_4(Q_p), I will describe an algorithm which can be used to determine the reducibility of such representations.

This algorithm is the core of a joint project with Hezi Halawi, in which we study the structure of degenerate principal series of exceptional groups of type En (see https://arxiv.org/abs/1811.02974).

MondayJan 21, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Dana Drachsler-Cohen Title:Practical Reliability of Systems Abstract:

Recent years have seen the emergence of new kinds of software including deep learning, programmable computer networks, and blockchains. Unfortunately, these systems have been shown to suffer from critical safety and security errors, affecting their wider adoption. The goal of my research is to develop new automated program verification and synthesis techniques which ensure safety and reliability of these systems.

In this talk, I will start by introducing AI2, the first automated verifier for neural networks able to certify large convolutional models. The key idea behind AI2 is to bridge abstract interpretation and neural networks, enabling a sound over-approximation of a network’s behavior in a scalable manner. I will then briefly discuss DL2, a system which enables clean interaction with deep learning models by allowing users to pose queries in a declarative manner. Finally, I will demonstrate how automated program analysis and synthesis can address key security and reliability challenges in domains such as computer networks and blockchains, preventing severe outages and financial losses.

Bio: Dana Drachsler-Cohen is an ETH Postdoctoral Fellow at the department of Computer Science, ETH Zurich. Her research interests span automated reasoning, program synthesis and machine learning. She obtained her PhD from the Computer Science Department at the Technion in 2017.

ThursdayJan 17, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Eliran Subag Title:Optimization of random polynomials on the sphere in the full-RSB regimeAbstract:

To compute the spectral norm of a p-tensor one needs to optimize a homogeneous polynomial of degree p over the sphere. When p=2 (the matrix case) it is algorithmically easy, but for p>2 it can be NP-hard. In this talk I will focus on (randomized) optimization in high-dimensions when the objective function is a linear combination of homogeneous polynomials with Gaussian coefficients. Such random functions are called spherical spin glasses in physics and have been extensively studied since the 80s. I will describe certain geometric properties of spherical spin glasses unique to the full-RSB case, and explain how they can be used to design a polynomial time algorithm that finds points within a small multiplicative error from the global maximum.

ThursdayJan 17, 201912:15
Vision and AIRoom 1
Speaker:Tali Treibitz Title:Variational Plane-Sweeping for Multi-Image Alignment Under Extreme Noise and How is that Related to Underwater Robots?Abstract:

We tackle the problem of multiple image alignment and 3D reconstruction under extreme noise.
Modern alignment schemes, based on similarity measures, feature matching and optical flow, are often pairwise, or assume global alignment. Nevertheless, under extreme noise, the alignment success sharply deteriorates, since each image does not contain enough information. Yet, when multiple images are well aligned, the signal emerges from the stack.
As the problems of alignment and 3D reconstruction are coupled, we constrain the solution by taking into account only alignments that are geometrically feasible and solve for the entire stack simultaneously. The solution is formulated as a variational problem where only a single variable per pixel, the scene's distance, is solved for. Thus, the complexity of the algorithm is independent of the number of images. Our method outperforms state-of-the-art techniques, as indicated by our simulations and by experiments on real-world scenes.
And, finally- I will discuss how this algorithm is related to our current and planned work with underwater robots.

WednesdayJan 16, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Nir BillfeldTitle: Semiparametric Wavelet-based JPEG IV Estimator for Endogenously Truncated DataAbstract:

We show that when data are endogenously truncated, the widely-used IV estimator fails to render the relationship causal and introduces bias into the exogenous covariates. We offer a newly-introduced semiparametric biorthogonal wavelet-based JPEG IV estimator and its associated symmetry-preserving kernel, which is closely related to object recognition methods in Artificial Intelligence. The newly-introduced enriched JPEG algorithm is a denoising tool amenable to identifying redundancies in a sequence of irregular noisy data points, which also accommodates a reference-free criterion function for optimal denoising. This is suitable for situations where the original data distribution is unobservable, such as in the case of endogenous truncation. This estimator corrects both biases, the one generated by endogenous truncation and the one generated by endogenous covariates, by means of denoising. We introduce a multifocal variant of the local GMM (MFGMM) estimator to establish jointly the asymptotic properties of the entire parameter set. Monte Carlo simulations attest to the very high accuracy of our semiparametric JPEG IV estimator as well as to its high efficiency, as reflected by root-n consistency. These results have emerged from utilizing 2,000,000 different distribution functions, generating 100 million realizations to construct the various data sets.

MondayJan 14, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Chaya Keller Title:Improved lower and upper bounds on the Hadwiger-Debrunner numbersAbstract:

A family of sets F is said to satisfy the (p,q)-property if among any p sets in F, some q have a non-empty intersection. Hadwiger and Debrunner (1957) conjectured that for any p > q > d there exists a constant c = c_d(p,q), such that any family of compact convex sets in R^d that satisfies the (p,q)-property, can be pierced by at most c points. The classical Helly's Theorem is equivalent to the fact that c_d(p,p)=1 (p > d).
In a celebrated result from 1992, Alon and Kleitman proved the conjecture. However, obtaining sharp bounds on the minimal such c_d(p,q), called `the Hadwiger-Debrunner numbers', is still a major open problem in combinatorial geometry.
In this talk we present improved upper and lower bounds on the Hadwiger-Debrunner numbers, the latter using the hypergraph container method.

Based on joint works with Shakhar Smorodinsky and Gabor Tardos.

ThursdayJan 10, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Godofredo Iommi and Paul DarioTitle:Double SeminarAbstract:

Godofredo Iommi (PUC Chile)

Title: Upper semi-continuity of the entropy map for Markov shifts
Abstract: In this talk I will show that for finite entropy countable Markov shifts the entropy map is upper semi-continuous when restricted to the set of ergodic measures. This is joint work with Mike Todd and Anibal Velozo.

Paul Dario (ENS)

Title: Homogenization on supercritical percolation cluster
Abstract: The standard theory of stochastic homogenization requires an assumption of uniform ellipticity on the environment. In this talk, we investigate how one can remove this assumption in a specific case: the infinite cluster of supercritical Bernoulli percolation on Z^d. We will present a renormalization argument for the infinite cluster and show how one can use it to adapt the theory developed in the uniformly elliptic setting. We will then review some results which can be obtained through this technique: homogenization theorem, large scale regularity, Liouville theorem and bounds on the corrector.

ThursdayJan 10, 201912:15
Vision and AIRoom 1
Speaker:Roey Mechrez Title:Controlling the Latent Space at Inference TimeAbstract:

The common practice in image generation is to train a network and fix it to a specific working point. In contrast, in this work we would like to provide control at inference time in a generic way. We propose to add a second training phase, where we train an additional tuning block. The goal of these tuning blocks is to provide us with control over the output image by modifying the latent space representation. I will first present our latest attempt in the domain of generative adversarial networks, where we aim to improve the quality of the results using the discriminator information -- we name this the adversarial feedback loop. Second, I will present Dynamic-Net, where we train a single network such that it can emulate many networks trained with different objectives. The main assumption of both works is that we can learn how to change the latent space in order to achieve a specific goal. Evaluation on many applications shows that the latent space can be manipulated such that it allows us to have diversified control at inference time.
* Joint work with Firas Shama*, Alon Shoshan* and Lihi Zelnik Manor.

WednesdayJan 09, 201911:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Jonathan BelinkovTitle:Deep Learning Models for Language: What they learn, where they fail, and how to make them more robustAbstract:

Deep learning has become pervasive in everyday life, powering language applications like Apple's Siri, Amazon's Alexa, and Google Translate. The inherent limitation of these deep learning systems, however, is that they often function as a "black box," preventing researchers and users from discerning the roles of different components and what they learn during the training process. In this talk, I will describe my research on interpreting deep learning models for language along three lines. First, I will present a methodological framework for investigating how these models capture various language properties. The experimental evaluation will reveal a learned hierarchy of internal representations in deep models for machine translation and speech recognition. Second, I will demonstrate that despite their success, deep models of language fail to deal even with simple kinds of noise, of the type that humans are naturally robust to. I will then propose simple methods for improving their robustness to noise. Finally, I will turn to an intriguing problem in language understanding, where dataset biases enable trivial solutions to complex language tasks. I will show how to design models that are more robust to such biases, and learn less biased latent representations.

TuesdayJan 08, 201916:00
Seminar in Geometry and TopologyRoom 1
Speaker:Chaim Even Zohar Title:Random KnotsAbstract:

Random curves in space and how they are knotted give an insight into the behavior of "typical" knots and links. They have been studied by biologists and physicists in the context of the structure of random polymers. Several randomized models have been suggested and investigated, and many results have been obtained via computational experiment. The talk will begin with a review of this area.

In work with Hass, Linial, and Nowik, we study knots based on petal projections, potholder diagrams, and more. We have found explicit formulas for the limit distribution of finite type invariants of random knots and links in the Petaluma model. I will discuss these results and sketch proof ideas as time permits.

TuesdayJan 08, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Kevin Coulembier Title:Tensor categories in positive characteristicAbstract:

Tensor categories are abelian k-linear monoidal categories modelled on the representation categories of affine (super)group schemes over k. Deligne gave very succinct intrinsic criteria for a tensor category to be equivalent to such a representation category, over fields k of characteristic zero. These descriptions are known to fail badly in prime characteristics. In this talk, I will present analogues in prime characteristic of these intrinsic criteria. Time permitting, I will comment on the link with a recent conjecture of V. Ostrik which aims to extend Deligne's work in a different direction.

TuesdayJan 08, 201911:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Harry Dankowicz Title:Emergent Task Differentiation on Network FiltersAbstract:

Inspired by empirical observations on honey bee colonies, we analyze the emergence of task differentiation in a model complex system, characterized by an absence of hierarchical control, yet able to exhibit coordinated behavior and collective function. The analysis considers the steady-state response of a mechanical interaction network to exogenous resonant excitation. It demonstrates how an optimal allocation of excitation sensitivities across the network nodes that results in a maximal response may be constructed either using global information about the network topology or, alternatively, through iterated dynamics of an intuitive learning paradigm that only relies on local information within the network. Importantly, the analysis derives explicit conditions on the topology that guarantee optimal selection using only local dynamics, but also describes circumstances when the system may naturally evolve to a condition of collapse, in which all the excitation sensitivities equal zero, at least over intermediate times. The discussion considers the implications to other network designs, including naturally occurring networks, as well as the effects of noise and nonlinearity.

MondayJan 07, 201914:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Julia Chuzhoy Title:Almost Polynomial Hardness of Node-Disjoint Paths in GridsAbstract:

In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G, and a collection of pairs of its vertices, called demand pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices.

The best current algorithm for NDP achieves an $O(\sqrt{n})$-approximation, while, until recently, the best negative result was a roughly $\Omega(\sqrt{\log n})$-hardness of approximation. Recently, an improved $2^{\Omega(\sqrt{\log n})}$-hardness of approximation for NDP was shown, even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs.

The approximability of NDP in grids has remained a tantalizing open question, with the best upper bound of $\tilde{O}(n^{1/4})$, and the best lower bound of APX-hardness. In this talk we come close to resolving this question, by showing an almost polynomial hardness of approximation for NDP in grid graphs.

Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm.

 

Joint work with David H.K. Kim and Rachit Nimavat.

FridayJan 04, 201911:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Vera Serganova Title:Support varieties for supergroupsAbstract: NOTE SPECIAL DAY

We define a functor from the category of representations of algebraic supergroups with reductive even part to the category of equivariant sheaves, and show several applications of this construction to representation theory, in particular a projectivity criterion, a classification of blocks, and the computation of dimensions of irreducible representations.

ThursdayJan 03, 201913:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Chaim Even-Zohar Title:Patterns in Random PermutationsAbstract:

Every $k$ entries in a permutation can have one of $k!$ different relative orders, called patterns. How many times does each pattern occur in a large random permutation of size $n$? The distribution of this $k!$-dimensional vector of pattern densities was studied by Janson, Nakamura, and Zeilberger (2015). Their analysis showed that some component of this vector is asymptotically multinormal of order $1/\sqrt{n}$, while the orthogonal component is smaller. Using representations of the symmetric group, and the theory of U-statistics, we refine the analysis of this distribution. We show that it decomposes into $k$ asymptotically uncorrelated components of different orders in $n$, that correspond to representations of $S_k$. Some combinations of pattern densities that arise in this decomposition have interpretations as practical nonparametric statistical tests.
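A minimal sketch of the quantity under study: the density of each of the k! = 6 patterns of size k = 3 in a uniformly random permutation, computed by brute force over all triples of positions (feasible only for small n; the pattern encoding below is one arbitrary but consistent choice).

```python
import itertools
import random
from collections import Counter

def pattern_densities(perm, k=3):
    """Fraction of k-subsets of positions realizing each relative order."""
    counts = Counter()
    for idx in itertools.combinations(range(len(perm)), k):
        values = [perm[i] for i in idx]
        # Encode the pattern as the relative order of the chosen entries.
        pattern = tuple(sorted(range(k), key=lambda t: values[t]))
        counts[pattern] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

perm = random.sample(range(100), 100)      # a uniform random permutation of size 100
for pattern, density in sorted(pattern_densities(perm).items()):
    print(pattern, round(density, 4))      # each density is close to 1/6 for large n
```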

ThursdayJan 03, 201912:15
Vision and AIRoom 1
Speaker:Ofir LindenbaumTitle:Geometry Based Data GenerationAbstract:opens in new windowin html    pdfopens in new windowJOINT MACHINE LEARNING & VISION SEMINAR

Imbalanced data is problematic for many machine learning applications. Some classifiers, for example, can be strongly biased due to diversity of class populations. In unsupervised learning, imbalanced sampling of ground truth clusters can lead to distortions of the learned clusters. Removing points (undersampling) is a simple solution, but this strategy leads to information loss and can decrease generalization performance. We instead focus on data generation to artificially balance the data.
In this talk, we present a novel data synthesis method, which we call SUGAR (Synthesis Using Geometrically Aligned Random-walks) [1], for generating data in its original feature space while following its intrinsic geometry. This geometry is inferred by a diffusion kernel [2] that captures a data-driven manifold and reveals underlying structure in the full range of the data space -- including undersampled regions that can be augmented by new synthesized data.
We demonstrate the potential advantages of the approach by improving both classification and clustering performance on numerous datasets. Finally, we show that this approach is especially useful in biology, where rare subpopulations and gene-interaction relationships are affected by biased sampling.
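The following Python sketch is a deliberately crude caricature of the idea described above (my own simplification, not the SUGAR algorithm of [1]): it uses a Gaussian affinity matrix as a stand-in for the diffusion kernel of [2], estimates local density from it, and synthesizes extra points preferentially around undersampled regions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 2)),   # well-sampled cluster
               rng.normal(5, 1, size=(20, 2))])   # undersampled cluster

# Gaussian affinity matrix as a stand-in for the diffusion kernel of [2]
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
sigma = np.median(np.sqrt(sq_dists))
K = np.exp(-sq_dists / (2 * sigma ** 2))

density = K.sum(axis=1)          # kernel density estimate per point
weights = 1.0 / density
weights /= weights.sum()         # sample new points preferentially in sparse regions

n_new = 200
centers = rng.choice(len(X), size=n_new, p=weights)
X_new = X[centers] + rng.normal(0, 0.2 * sigma, size=(n_new, 2))  # local random steps
X_balanced = np.vstack([X, X_new])
print(X.shape, X_balanced.shape)
```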

[1] Lindenbaum, Ofir, Jay S. Stanley III, Guy Wolf, and Smita Krishnaswamy. "Geometry-Based Data Generation." NIPS (2018).

[2] Coifman, Ronald R., and Stéphane Lafon. "Diffusion maps." Applied and computational harmonic analysis (2006).

TuesdayJan 01, 201916:00
Seminar in Geometry and TopologyRoom 1
Speaker:Or Hershkovits Title:Mean Curvature Flow of SurfacesAbstract:opens in new windowin html    pdfopens in new window

In the last 35 years, geometric flows have proven to be a powerful tool in geometry and topology. The Mean Curvature Flow is, in many ways, the most natural flow for surfaces in Euclidean space.
In this talk, which will assume no prior knowledge, I will illustrate how mean curvature flow can be used to address geometric questions. I will then explain why the formation of singularities of the mean curvature flow poses difficulties for such applications, and how recent new discoveries about the structure of singularities (including joint work with Kyeongsu Choi and Robert Haslhofer) may help overcome those difficulties.

TuesdayJan 01, 201911:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Dmitry Batenkov Title:Stability of some super-resolution problemsAbstract:opens in new windowin html    pdfopens in new windowJOINT WITH MACHINE LEARNING AND STATISTICS SEMINAR

The problem of computational super-resolution asks to recover fine features of a signal from inaccurate and bandlimited data, using an a-priori model as a regularization. I will describe several situations for which sharp bounds for stable reconstruction are known, depending on signal complexity, noise/uncertainty level, and available data bandwidth. I will also discuss optimal recovery algorithms, and some open questions.

MondayDec 31, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Omri Weinstein Title:Static Data Structure Lower Bounds Imply RigidityAbstract:opens in new windowin html    pdfopens in new window

We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t >> log^2(n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+eps)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (t > n^eps) data structure lower bounds against near-maximal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average case reduction for rigidity, which is of independent interest.
Joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard).

ThursdayDec 27, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Yury Makarychev Title:Performance of Johnson-Lindenstrauss Transform for k-Means and k-Medians ClusteringAbstract:opens in new windowin html    pdfopens in new window

Consider an instance of Euclidean k-means or k-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of $(1+\epsilon)$ under a projection onto a random $O(\log(k/\epsilon)/\epsilon^2)$-dimensional subspace whp. Further, the cost of every clustering is preserved within $(1+\epsilon)$. Crucially, the dimension does not depend on the total number of points n in the instance. Additionally, our result applies to Euclidean k-clustering with the distances raised to the p-th power for any constant $p$.

For k-means, our result resolves an open problem posed by Cohen, Elder, Musco, Musco, and Persu (STOC 2015); for k-medians, it answers a question raised by Kannan.

Joint work with Konstantin Makarychev and Ilya Razenshteyn.
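As an informal numerical sanity check of the statement above (my own illustration, not code from the paper), one can compare the k-means cost of a fixed clustering before and after a Gaussian random projection to roughly the stated dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, eps = 2000, 500, 10, 0.2
X = rng.normal(size=(n, d)) + rng.integers(0, k, size=n)[:, None]  # loose cluster structure
labels = rng.integers(0, k, size=n)                                # an arbitrary fixed clustering

def kmeans_cost(Y, labels, k):
    # Sum of squared distances of each point to its cluster's centroid
    cost = 0.0
    for c in range(k):
        pts = Y[labels == c]
        if len(pts):
            cost += ((pts - pts.mean(axis=0)) ** 2).sum()
    return cost

m = int(np.ceil(np.log(k / eps) / eps ** 2))   # target dimension: independent of n
G = rng.normal(size=(d, m)) / np.sqrt(m)       # Gaussian random projection
print(kmeans_cost(X, labels, k), kmeans_cost(X @ G, labels, k))  # ratio roughly within 1 +/- eps
```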

ThursdayDec 27, 201812:15
Vision and AIRoom 1
Speaker:Greg Shakhnarovich Title:Style Transfer by Relaxed Optimal Transport and Self-SimilarityAbstract:opens in new windowin html    pdfopens in new window

The goal of style transfer algorithms is to render the content of one image using the style of another. We propose Style Transfer by Relaxed Optimal Transport and Self-Similarity (STROTSS), a new optimization-based style transfer algorithm. We extend our method to allow user-specified point-to-point or region-to-region control over visual similarity between the style image and the output. Such guidance can be used to either achieve a particular visual effect or correct errors made by unconstrained style transfer. In order to quantitatively compare our method to prior work, we conduct a large-scale user study designed to assess the style-content tradeoff across settings in style transfer algorithms. Our results indicate that for any desired level of content preservation, our method provides higher quality stylization than prior work.
Joint work with Nick Kolkin and Jason Salavon.

WednesdayDec 26, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Nadav CohenTitle:On Optimization and Expressiveness in Deep LearningAbstract:opens in new windowin html    pdfopens in new window

Understanding deep learning calls for addressing three fundamental questions: expressiveness, optimization and generalization. Expressiveness refers to the ability of compactly sized deep neural networks to represent functions capable of solving real-world problems. Optimization concerns the effectiveness of simple gradient-based algorithms in solving non-convex neural network training programs. Generalization treats the phenomenon of deep learning models not overfitting despite having many more parameters than examples to learn from. This talk will describe a series of works aimed at unraveling some of the mysteries behind optimization and expressiveness. I will begin by discussing recent analyses of optimization for deep linear neural networks. By studying the trajectories of gradient descent, we will derive the most general guarantee to date for efficient convergence to global minimum of a gradient-based algorithm training a deep network. Moreover, in stark contrast to conventional wisdom, we will see that, sometimes, gradient descent can train a deep linear network faster than a classic linear model. In other words, depth can accelerate optimization, even without any gain in expressiveness, and despite introducing non-convexity to a formerly convex problem. In the second (shorter) part of the talk, I will present an equivalence between convolutional and recurrent networks --- the most successful deep learning architectures to date --- and hierarchical tensor decompositions. The equivalence brings forth answers to various questions concerning expressiveness, resulting in new theoretically-backed tools for deep network design.
Optimization works covered in this talk were in collaboration with Sanjeev Arora, Elad Hazan, Noah Golowich and Wei Hu. Expressiveness works were with Amnon Shashua, Or Sharir, Yoav Levine, Ronen Tamari and David Yakira.

TuesdayDec 25, 201816:00
Seminar in Geometry and TopologyRoom 1
Speaker:Oleg Ivrii Title:Describing Blaschke products by their critical pointsAbstract:opens in new windowin html    pdfopens in new window

In this talk, I will discuss a question which originates in complex analysis but is really a problem in non-linear elliptic PDE. A finite Blaschke product is a proper holomorphic self-map of the unit disk, just like a polynomial is a proper holomorphic self-map of the complex plane. A celebrated theorem of Heins says that up to post-composition with a M\"obius transformation, a finite Blaschke product is uniquely determined by the set of its critical points. Konstantin Dyakonov suggested that it may be interesting to extend this result to infinite degree. However, one must be a little careful since infinite Blaschke products may have identical critical sets. I will show that an infinite Blaschke product is uniquely determined by its "critical structure" and describe all possible critical structures which can occur. By Liouville's correspondence, this question is equivalent to studying nearly-maximal solutions of the Gauss curvature equation $\Delta u = e^{2u}$. This problem can then be solved using PDE techniques, using the method of sub- and super-solutions.
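For the reader's convenience, the standard definitions alluded to above (not part of the abstract itself) are:

```latex
% A finite Blaschke product of degree d with zeros a_1,...,a_d in the open unit disk:
\[
  B(z) \;=\; e^{i\theta} \prod_{j=1}^{d} \frac{z - a_j}{1 - \overline{a_j}\, z},
  \qquad |a_j| < 1,
\]
% and the Gauss curvature equation appearing in the Liouville correspondence:
\[
  \Delta u \;=\; e^{2u}.
\]
```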

TuesdayDec 25, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Eyal SubagTitle:Symmetries of the hydrogen atom and algebraic familiesAbstract:opens in new windowin html    pdfopens in new window

The hydrogen atom system is one of the most thoroughly studied examples of a quantum mechanical system. It can be fully solved, and the main reason why is its (hidden) symmetry. In this talk I shall explain how the symmetries of the Schrödinger equation for the hydrogen atom, both visible and hidden, give rise to an example in the recently developed theory of algebraic families of Harish-Chandra modules. I will show how the algebraic structure of these symmetries completely determines the spectrum of the Schrödinger operator and sheds new light on the quantum nature of the system. No prior knowledge on quantum mechanics will be assumed.

MondayDec 24, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Liron Cohen Title:Towards the Next Generation of Proof Assistants: Enhancing the Proofs–as–Programs ParadigmAbstract:opens in new windowin html    pdfopens in new window

As software has grown increasingly critical to our society's infrastructure, mechanically-verified software has grown increasingly important, feasible, and prevalent. Proof assistants have seen tremendous growth in recent years because of their success in the mechanical verification of high-value applications in many areas, including cyber security, cyber-physical systems, operating systems, compilers, and microkernels. These proof assistants are built on top of constructive type theory whose computational interpretation is given by the proofs-as-programs paradigm, which establishes a correspondence between formal proofs and computer programs. However, while both proof theory and programming languages have evolved significantly over the past years, the cross-fertilization of the independent new developments in each of these fields has yet to be explored in the context of this paradigm. This naturally gives rise to the following questions: how can modern notions of computation influence and contribute to formal foundations, and how can modern reasoning techniques improve the way we design and reason about programs?

In this talk I first demonstrate how using programming principles that go beyond the standard lambda-calculus, namely state and non-determinism, promotes the specification and verification of modern systems, e.g. distributed systems. I then illustrate the surprising fragility of proof assistants in the presence of such new computational capabilities, and outline my ongoing efforts to develop a more robust foundation. For the converse direction, I show how incorporating modern proof-theoretic techniques offers a more congenial framework for reasoning about hard programming problems and hence facilitates the verification effort.

MondayDec 24, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 141
Speaker:Alexander Yom-DinTitle:From analysis to algebra to geometry - an example in representation theory of real groupsAbstract:opens in new windowin html    pdfopens in new windowNOTE SPECIAL DAY TIME AND ROOM

Representation theory of non-compact real groups, such as SL(2,R), is a fundamental discipline with uses in harmonic analysis, number theory, physics, and more. This theory is analytical in nature, but in the course of the 20th century it was algebraized and geometrized (the key contributions are by Harish-Chandra for the former and by Beilinson-Bernstein for the latter). Roughly and generally speaking, algebraization strips layers from the objects of study until we are left with a bare skeleton, amenable to symbolic manipulation. Geometrization, again very roughly, reveals how algebraic objects have secret lives over spaces - thus more amenable to human intuition. In this talk, I will try to motivate and present one example - the calculation of the Casselman-Jacquet module of a principal series representation (I will explain the terms in the talk).

ThursdayDec 20, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Max Fathi Title:Stability in the Bakry-Emery theoremAbstract:opens in new windowin html    pdfopens in new window

The Bakry-Emery theorem asserts that uniformly log-concave probability measures satisfy certain functional inequalities, with constants that are better than those associated with the Gaussian measure. In this talk, I will explain how if the constant is almost that of the Gaussian, then the measure almost splits off a Gaussian factor, with explicit quantitative bounds. The proof is based on a combination of Stein's method and simple arguments from calculus of variations. Joint work with Thomas Courtade.

ThursdayDec 20, 201812:15
Vision and AIRoom 1
Speaker:Yedid HoshenTitle:Non-Adversarial Machine Learning for Unsupervised Translation across Languages and Images Abstract:opens in new windowin html    pdfopens in new window

This talk will describe my past and ongoing work on translating images and words between very different datasets without supervision. Humans often do not require supervision to make connections between very different sources of data, but this is still difficult for machines. Recently, great progress was made by using adversarial training, a powerful yet tricky method. Although adversarial methods have had great success, they have critical failings which significantly limit their breadth of applicability and motivate research into alternative non-adversarial methods. In this talk, I will describe novel non-adversarial methods for unsupervised word translation and for translating images between very different datasets (joint work with Lior Wolf). As image generation models are an important component in our method, I will present a non-adversarial image generation approach, which is often better than current adversarial approaches (joint work with Jitendra Malik).

TuesdayDec 18, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Lenny Makar-LimanovTitle:On algebraically closed skew fields (i.e. Makar-Limanov's bodies).Abstract:opens in new windowin html    pdfopens in new window

We all know what an algebraically closed field is and that any field can be embedded into an algebraically closed field. But what happens if multiplication is not commutative? In my talk I'll suggest a definition of an algebraically closed skew field, give an example of such a skew field, and show that not every skew field can be embedded into an algebraically closed one.
It is still unknown whether an algebraically closed skew field exists in the finite characteristic case!

ThursdayDec 13, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Manuel Stadlbauer Title:Exponential decay of quotients of Ruelle operatorsAbstract:opens in new windowin html    pdfopens in new window

Ruelle's operator theorem states that the Ruelle operator $L$, which is a positive operator acting on Hölder functions, is conjugate to $P+R$, where $R$ is a one-dimensional projection and the norm of $P$ is smaller than 1. This decomposition, also known as a spectral gap, is of interest as it allows one to characterise the underlying dynamical system through, e.g., central limit theorems or continuous response to perturbations. However, the conjugation depends on the existence of a positive eigenfunction of $L$, which might not exist in more general, fibred situations due to purely functorial reasons. A possibility to circumvent this problem is to consider quotients of operators of the form $f \mapsto \frac{L^m(f L^n (1))}{L^{m+n}(1)}.$ In fact, it is possible to provide reasonable conditions such that their dual operators contract the Wasserstein distance exponentially in $m$. The result gives rise, for example, to a law of the iterated logarithm for continued fractions with sequentially restricted entries or a topology on the set of equilibrium states for semigroups of expanding maps. This is joint work with Paulo Varandas and Xuan Zhang.

ThursdayDec 13, 201812:15
Vision and AIRoom 1
Speaker:Tali Dekel Title:Re-rendering RealityAbstract:opens in new windowin html    pdfopens in new window

We all capture the world around us through digital data such as images, videos and sound. However, in many cases, we are interested in certain properties of the data that are either not available or difficult to perceive directly from the input signal. My goal is to "Re-render Reality", i.e., develop algorithms that analyze digital signals and then create a new version of it that allows us to see and hear better. In this talk, I'll present a variety of methodologies aimed at enhancing the way we perceive our world through modified, re-rendered output. These works combine ideas from signal processing, optimization, computer graphics, and machine learning, and address a wide range of applications. More specifically, I'll demonstrate how we can automatically reveal subtle geometric imperfection in images, visualize human motion in 3D, and use visual signals to help us separate and mute interference sound in a video. Finally, I'll discuss some of my future directions and work in progress.

BIO: Tali is a Senior Research Scientist at Google, Cambridge, developing algorithms at the intersection of computer vision and computer graphics. Before Google, she was a Postdoctoral Associate at the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT, working with Prof. William T. Freeman. Tali completed her Ph.D. studies at the school of electrical engineering, Tel-Aviv University, Israel, under the supervision of Prof. Shai Avidan and Prof. Yael Moses. Her research interests include computational photography, image synthesis, geometry and 3D reconstruction.

ThursdayDec 13, 201811:30
Computer Science Seminar
Speaker: Sampath KannanTitle: Fairness in Algorithmic Decision MakingAbstract:opens in new windowin html    pdfopens in new windowWOLFSON AUDITORIUM

In this talk we survey some formulations of fairness requirements for decision making under uncertainty. We then discuss results from 3 recent papers:

  1. Treating individuals fairly is not in conflict with long-term scientific learning goals if the population is sufficiently diverse.
  2. When there is a pipeline of decisions, end-to-end fairness is impossible to achieve even in a very simple model.
  3. Exploiting the knowledge acquired by others can unfairly advantage the free rider.

These papers are joint work with a number of co-authors:

Christopher Jung, Neil Lutz, Jamie Morgenstern, Aaron Roth, Bo Waggoner, Steven Wu, and Juba Ziani

WednesdayDec 12, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Roy SchwartzTitle:Towards Interpretable Deep Learning for Natural Language ProcessingAbstract:opens in new windowin html    pdfopens in new window

Despite their superb empirical performance, deep learning models for natural language processing (NLP) are often considered black boxes, as relatively little is known as to what accounts for their success. This lack of understanding turns model development into a slow and expensive trial-and-error process, which limits many researchers from developing state-of-the-art models. Customers of deep learning also suffer from this lack of understanding, because they are using tools that they cannot interpret. In this talk I will show that many deep learning models are much more understandable than originally thought. I will present links between several deep learning models and classical NLP models: weighted finite-state automata. As the theory behind the latter is well studied, these findings allow for the development of more interpretable and better-performing NLP models. As a case study, I will focus on convolutional neural networks (ConvNets), one of the most widely used deep models in NLP. I will show that ConvNets are mathematically equivalent to a simple, linear chain weighted finite-state automaton. By uncovering this link, I will present an extension of ConvNets that is both more robust and more interpretable than the original model. I will then present similar observations regarding six recently introduced recurrent neural network (RNN) models, demonstrating the empirical benefits of these findings to the performance of NLP systems.

This is joint work with Hao Peng, Sam Thomson and Noah A. Smith

TuesdayDec 11, 201816:00
Seminar in Geometry and TopologyRoom 1
Speaker:Karim RakhimovTitle:Dynamical properties of geodesics of meromorphic connections on Riemann surfacesAbstract:opens in new windowin html    pdfopens in new window
MondayDec 10, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Nir BitanskyTitle:Weak Zero Knowledge Beyond the Black Box BarrierAbstract:opens in new windowin html    pdfopens in new window

Zero knowledge protocols are spectacular, allowing to prove NP statements without revealing anything but their validity. An essential element that enables this wonder is interaction. But how much interaction exactly is needed? This question has long challenged cryptographers and is yet to be settled under standard assumptions. In fact, the question appears to be equally challenging also for natural relaxations of the zero knowledge requirement. The difficulty in answering the round complexity question stems from a foundational question in cryptography --- what is the power of non-black-box reductions?

In this talk, I will explain this difficulty and present a new non-black-box technique that resolves, under standard assumptions, the round complexity of weak zero knowledge protocols (Dwork-Naor-Reingold-Stockmeyer '98). Specifically, assuming quasipolynomial hardness of the Learning with Errors problem and fully-homomorphic encryption, we construct a two message protocol, a setting where (full-fledged) zero knowledge is impossible.

The talk will assume no prior knowledge in cryptography. It is based on joint work with Dakshita Khurana and Omer Paneth (the paper can be found at https://eprint.iacr.org/2018/895.pdf).

ThursdayDec 06, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Boaz SlomkaTitle:Improved bounds for Hadwiger’s covering problem via thin-shell estimatesAbstract:opens in new windowin html    pdfopens in new window

A long-standing open problem, known as Hadwiger's covering problem, asks what is the smallest natural number $N(n)$ such that every convex body in $\mathbb{R}^n$ can be covered by a union of the interiors of at most $N(n)$ of its translates. Despite continuous efforts, the best general upper bound known for this number remains as it was more than sixty years ago, of the order of ${2n \choose n} \ln n$.

In this talk, I will discuss some history of this problem and present a new result in which we improve this bound by a sub-exponential factor. Our approach combines ideas from previous work, with tools from Asymptotic Geometric Analysis. As a key step, we use thin-shell estimates for isotropic log-concave measures to prove a new lower bound for the maximum volume of the intersection of a convex body $K$ with a translate of $-K$. We further show that the same bound holds for the volume of $K\cap(-K)$ if the center of mass of $K$ is at the origin.

If time permits we shall discuss some other methods and results concerning this problem and its relatives.

Joint work with H. Huang, B. Vritsiou, and T. Tkocz

ThursdayDec 06, 201812:15
Vision and AIRoom 1
Speaker:Yuval BahatTitle:Exploiting Deviations from Ideal Visual RecurrenceAbstract:opens in new windowin html    pdfopens in new window

Visual repetitions are abundant in our surrounding physical world: small image patches tend to reoccur within a natural image, and across different rescaled versions thereof. Similarly, semantic repetitions appear naturally inside an object class within image datasets, as a result of different views and scales of the same object. In my thesis I studied deviations from these expected repetitions, and demonstrated how this information can be exploited to tackle both low-level and high-level vision tasks. These include “blind” image reconstruction tasks (e.g. dehazing, deblurring), image classification confidence estimation, and more.

TuesdayDec 04, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Shai ShechterTitle:Approximating the Representation Zeta Functions of Finite Groups of Lie-TypeAbstract:opens in new windowin html    pdfopens in new window
ThursdayNov 29, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Amir Dembo Title:Large deviations of subgraph counts for sparse random graphsAbstract:opens in new windowin html    pdfopens in new window

For fixed t>1 and L>3 we establish sharp asymptotic formula for the log-probability that the number of cycles of length L in the Erdos - Renyi random graph G(N,p) exceeds its expectation by a factor t,  assuming only that p >> log N/sqrt(N). We obtain such sharp upper tail  bounds also for the Schatten norms of the corresponding adjacency matrices, and in a narrower range of p=p(N), also for general subgraph counts. In this talk, based on a recent joint work with Nick Cook, I will explain our approach and in particular our quantitative refinement of Szemeredi's regularity lemma for sparse random graphs in the large deviations regime.

ThursdayNov 29, 201812:15
Vision and AIRoom 155
Speaker:Yair Weiss Title:Why do deep convolutional networks generalize so poorly to small image transformations?Abstract:opens in new windowin html    pdfopens in new window

Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore,  we see these failures to generalize more frequently in more modern networks. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.
Joint work with Aharon Azulay
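A minimal way to probe this phenomenon (my own sketch, not the authors' experimental setup) is to translate an input by a single pixel and compare the network's output distributions; the random tensor below is only a stand-in for a preprocessed natural image, and the pretrained-weights call assumes an older torchvision API.

```python
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()   # older torchvision API; newer versions use weights=

x = torch.randn(1, 3, 224, 224)                   # stand-in for a preprocessed natural image
x_shifted = torch.roll(x, shifts=1, dims=3)       # horizontal translation by one pixel

with torch.no_grad():
    p = torch.softmax(model(x), dim=1)
    p_shifted = torch.softmax(model(x_shifted), dim=1)

# Total-variation distance between the two predicted distributions; on real images
# this gap can be surprisingly large for such a tiny translation.
print(0.5 * (p - p_shifted).abs().sum().item())
```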

ThursdayNov 29, 201811:30
Computer Science Seminar
Speaker: Noam NisanTitle:The Communication Complexity of Cake CuttingAbstract:opens in new windowin html    pdfopens in new windowWOLFSON AUDITORIUM

This talk concerns the well-studied model of "cake-cutting" for studying questions regarding notions of fair division of resources. We focus on discrete versions rather than using infinite-precision real values, specifically, focusing on the communication complexity. Using general discrete simulations of classical infinite-precision protocols (Robertson-Webb and moving-knife), we roughly partition the various fair-allocation problems into 3 classes: "easy" (a constant number of rounds of logarithmically many bits), "medium" (poly-log total communication), and "hard".

Our main technical result concerns two of the "medium" problems (perfect allocation for 2 players and equitable allocation for any number of players) which we prove are not in the "easy" class. Our main open problem is to separate the "hard" from the "medium" classes.

Joint work with Simina Branzei

WednesdayNov 28, 201814:00
Foundations of Computer Science ColloquiumRoom 208
Speaker:Nicolas ReschTitle:Lossless dimension expanders via linearized polynomials and subspace designs Abstract:opens in new windowin html    pdfopens in new windowNote special time and place

For a vector space F^n over a field F, an (η, β)-dimension expander of degree d is a collection of d linear maps Γ_j : F^n \to F^n such that for every subspace U of F^n of dimension at most ηn, the image of U under all the maps, ∑_{j=1}^d Γ_j(U), has dimension at least β·dim(U). Over a finite field, a random collection of d=O(1) maps Γ_j offers excellent “lossless” expansion with high probability: β ≈ d for η ≥ Ω(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even expansion factor β = 1 + ε with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list decoding in the rank-metric. Our approach yields the following:

  • Lossless expansion over large fields; more precisely β ≥ (1–ε)d and η ≥ (1–ε)/d with d=O_ε(1), when |F| ≥ Ω(n).
  • Optimal up to constant factors expansion over fields of arbitrarily small polynomial size; more precisely β ≥ Ω(δd) and η ≥ Ω(1/(δd)) with d = O_δ(1), when |F| ≥ n^δ.

Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Ω(1), 1+Ω(1))-dimension expanders of constant degree over all fields. An approach based on “rank condensing via subspace designs” led to dimension expanders with β ≥ Ω(√d) over large finite fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree. Based on joint work with Venkatesan Guruswami and Chaoping Xing.

WednesdayNov 28, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Michael KimTitle:Fairness through Computationally-Bounded AwarenessAbstract:opens in new windowin html    pdfopens in new window

As algorithmic prediction systems have become more widespread, so too have concerns that these systems may be discriminatory against groups of people protected by laws and ethics. We present a recent line of work that takes a complexity theoretic perspective towards combating discrimination in prediction systems. We'll focus on fair classification within the versatile framework of Dwork et al. [ITCS'12], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike earlier work, we do not assume that the entire metric is known to the learning algorithm; instead, the learner can query this metric a bounded number of times. We propose a new notion of fairness called *metric multifairness* and show how to achieve this notion in our setting. Metric multifairness is parameterized by a similarity metric d on pairs of individuals to classify and a rich collection C of (possibly overlapping) "comparison sets" over pairs of individuals. At a high level, metric multifairness guarantees that *similar subpopulations are treated similarly*, as long as these subpopulations are identified within the class C.

MondayNov 26, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Dana RonTitle: 2,3,…,k: From approximating the number of edges to approximating the number of k-cliques (with a sublinear number of queries)Abstract:opens in new windowin html    pdfopens in new window

In this talk I will present an algorithm for approximating the number of k-cliques in a graph when given query access to the graph. This problem was previously studied for the cases of k=2 (edges) and k=3 (triangles). We give an algorithm that works for any k >= 3, and is actually conceptually simpler than the k=3 algorithm. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques. We design an algorithm that outputs a (1+\epsilon)-approximation (with high probability) for C_k, whose expected query complexity and running time are $O\left(\frac{n}{C_k^{1/k}}+\frac{m^{k/2}}{C_k}\right) \cdot \mathrm{poly}(\log n, 1/\epsilon, k)$.

Hence, the complexity of the algorithm is sublinear in the size of the graph for C_k = \omega(m^{k/2-1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on \log n, 1/\epsilon and k).

This is joint work with Talya Eden and C. Seshadhri.
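For readers unfamiliar with the query model mentioned above, the following Python sketch (an illustration of the model only, not the sublinear algorithm from the talk) implements the three query types and a naive exact triangle count on top of them.

```python
class GraphOracle:
    def __init__(self, adj):       # adj: dict mapping vertex -> sorted list of neighbors
        self.adj = adj
    def degree(self, v):           # (1) degree query
        return len(self.adj[v])
    def neighbor(self, v, i):      # (2) i-th neighbor query
        return self.adj[v][i]
    def pair(self, u, v):          # (3) pair query: is (u, v) an edge?
        return v in self.adj[u]

def count_triangles(oracle, vertices):
    # Naive exact count of 3-cliques using only oracle queries (for illustration only)
    count = 0
    for u in vertices:
        for i in range(oracle.degree(u)):
            v = oracle.neighbor(u, i)
            for j in range(oracle.degree(u)):
                w = oracle.neighbor(u, j)
                if u < v < w and oracle.pair(v, w):
                    count += 1
    return count

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(count_triangles(GraphOracle(adj), adj))   # 2 triangles: {0,1,2} and {0,2,3}
```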

ThursdayNov 22, 201812:15
Vision and AIRoom 1
Speaker:Yehuda Dar Title:System-Aware Compression: Optimizing Imaging Systems from the Compression StandpointAbstract:opens in new windowin html    pdfopens in new window

In typical imaging systems, an image/video is first acquired, then compressed for transmission or storage, and eventually presented to human observers using different and often imperfect display devices. While the resulting quality of the perceived output image may severely be affected by the acquisition and display processes, these degradations are usually ignored in the compression stage, leading to an overall sub-optimal system performance. In this work we propose a compression methodology to optimize the system's end-to-end reconstruction error with respect to the compression bit-cost. Using the alternating direction method of multipliers (ADMM) technique, we show that the design of the new globally-optimized compression reduces to a standard compression of a "system adjusted" signal. Essentially, we propose a new practical framework for the information-theoretic problem of remote source coding. The main ideas of our method are further explained using rate-distortion theory for Gaussian signals. We experimentally demonstrate our framework for image and video compression using the state-of-the-art HEVC standard, adjusted to several system layouts including acquisition and rendering models. The experiments established our method as the best approach for optimizing the system performance at high bit-rates from the compression standpoint.
In addition, we relate the proposed approach also to signal restoration using complexity regularization, where the likelihood of candidate solutions is evaluated based on their compression bit-costs.
Using our ADMM-based approach, we present new restoration methods relying on repeated applications of standard compression techniques. Thus, we restore signals by leveraging state-of-the-art models designed for compression. The presented experiments show good results for image deblurring and inpainting using the JPEG2000 and HEVC compression standards.
* Joint work with Prof. Alfred Bruckstein and Prof. Michael Elad.
** More details about the speaker and his research work are available at  http://ydar.cswp.cs.technion.ac.il/
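For reference, the generic scaled-form ADMM iterations for a splitting min_{x,z} f(x) + g(z) subject to x = z are given below; this is the standard template only, not the paper's specific system-adjusted formulation.

```latex
\[
\begin{aligned}
  x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert x - z^{k} + u^{k}\rVert_2^2,\\
  z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert x^{k+1} - z + u^{k}\rVert_2^2,\\
  u^{k+1} &= u^{k} + x^{k+1} - z^{k+1}.
\end{aligned}
\]
```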

 

 

ThursdayNov 22, 201811:30
Computer Science Seminar
Speaker:Amos FiatTitle:C-single crossing Interdependent valuationsAbstract:opens in new windowin html    pdfopens in new windowWOLFSON AUDITORIUM

We consider the goal of social welfare maximization where a single item is to be assigned to one of n potential agents with interdependent values.

That is, each agent has her own private signal, and the valuation of each agent is a known function of all n private signals. This captures settings such as valuations for artwork, oil drilling rights, broadcast rights, and many more. In the interdependent value setting, all previous work has assumed a so-called single-crossing condition. Single-crossing means that the impact of agent i’s private signal, s_i, on her own valuation is greater than the impact of s_i on the valuation of any other agent. It is known that without the single-crossing condition an efficient outcome cannot be obtained. We study welfare maximization for interdependent valuations through the lens of approximation.

We show that, in general, without the single-crossing condition, one cannot hope to approximate the optimal social welfare any better than the approximation given by assigning the item to a random bidder. Consequently, we introduce a relaxed version of single-crossing, c-single-crossing, parameterized by c ≥ 1, which means that the impact of s_i on the valuation of agent i is at least 1/c times the impact of s_i on the valuation of any other agent (c = 1 is single-crossing). Using this parameterized notion, we obtain a host of positive results. We also consider interdependent settings when valuations are concave and give improved results.

Joint work with Alon Eden, Michal Feldman, and Kira Goldner.

MondayNov 19, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Natan RubinTitle:Hitting Convex Sets with PointsAbstract:opens in new windowin html    pdfopens in new window

Given an underlying finite point set P in the plane, we seek a small set Q that would hit any convex set that contains at least an Epsilon-fraction of P. Such a set Q is called a weak Epsilon-net. The study of Epsilon-nets is central to Computational and Combinatorial Geometry, and it bears important connections to Statistical Learning Theory, Extremal Combinatorics, Discrete Optimization, and other areas.

It is an outstanding open problem to determine tight asymptotic bounds on weak Epsilon-nets with respect to convex sets. For any underlying point set in the plane we describe such a net whose cardinality is roughly proportional to Epsilon^{-3/2}. This is the first improvement of the over-25-year-old bound of Alon, Barany, Furedi, and Kleitman.

WednesdayNov 14, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Kfir LevyTitle:Beyond SGD: Data Adaptive Methods for Machine Learning Abstract:opens in new windowin html    pdfopens in new window

The tremendous success of the Machine Learning paradigm heavily relies on the development of powerful optimization methods. The canonical algorithm for training learning models is SGD (Stochastic Gradient Descent), yet this method has its limitations. It is often unable to exploit useful statistical/geometric structure, it might degrade upon encountering prevalent non-convex phenomena, and it is hard to parallelize. In this talk I will discuss an ongoing line of research where we develop alternative methods that resolve some of SGD's limitations. The methods that I describe are as efficient as SGD, and implicitly adapt to the underlying structure of the problem in a data dependent manner.

In the first part of the talk, I will discuss a method that is able to take advantage of hard/easy training samples. In the second part, I will discuss a method that enables an efficient parallelization of SGD. Finally, I will briefly describe a method that implicitly adapts to the smoothness and noise properties of the learning objective.
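As a concrete point of reference for what "data-adaptive" can mean (my own illustration; the talk's methods are different and are not reproduced here), the classical AdaGrad update below adapts a per-coordinate step size to the gradients observed so far.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 20, 2000, 0.5
A = rng.normal(size=(1000, d)) * np.linspace(0.1, 3.0, d)   # badly scaled coordinates
x_star = rng.normal(size=d)
b = A @ x_star

x = np.zeros(d)
G = np.zeros(d)                                  # running sum of squared gradients
for t in range(T):
    i = rng.integers(len(A))                     # stochastic gradient from a single sample
    g = (A[i] @ x - b[i]) * A[i]
    G += g ** 2
    x -= eta * g / (np.sqrt(G) + 1e-8)           # per-coordinate, data-adaptive step size
print(np.linalg.norm(x - x_star))                # error after T stochastic steps
```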

MondayNov 12, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Michal Dory Title:Distributed Approximation of k-edge-connected SubgraphsAbstract:opens in new windowin html    pdfopens in new window

In the minimum k-edge-connected spanning subgraph (k-ECSS) problem the goal is to find the minimum weight subgraph resistant to up to k-1 edge failures. This is a central problem in network design, and a natural generalization of the minimum spanning tree (MST) problem. In this talk, I will present fast randomized distributed approximation algorithms for k-ECSS in the CONGEST model.

ThursdayNov 08, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Snir Ben-Ovadia Title:What are SRB and GSRB measures, and a new characterisation of their existence.Abstract:opens in new windowin html    pdfopens in new window

SRB measures are an important object in dynamical systems and mathematical physics. Named after Sinai, Ruelle, and Bowen, these measures have important properties of being equilibrium states which describe chaotic behaviour, yet may also describe the asymptotics of "observable" events in the phase space. An open and important question is: in what generality do systems admit SRB measures?

We present the notion of generalised SRB measures (GSRB in short), mention some of their important properties, and present a new condition characterising their existence in a general setup.

The first part of the talk will describe some of the motivation for defining and studying SRB measures; we will then define GSRB measures and compare their properties with the properties sought for SRB measures. We will also describe a case study of examples motivating the study of GSRB measures. Our new result is a characterisation of systems admitting GSRB measures.

In the second part of the talk, as much as time permits, we will present some key steps in the construction of GSRB measures.

ThursdayNov 08, 201811:30
Computer Science Seminar
Speaker: Shai Shalev-ShwartzTitle:Formalizing the “Duty of Care” for self-driving carsAbstract:opens in new windowin html    pdfopens in new windowWOLFSON AUDITORIUM

The "Duty of Care" is a legal obligation requiring one to adhere to a standard of reasonable care while performing acts that could harm others. We propose a formal, mathematical interpretation of the duty of care in the context of "being cautious while driving". The framework enables tackling moral questions in a formal way and is an example of a new research field which we call regulatory science.

Joint work with Shaked Shammah and Amnon Shashua.

WednesdayNov 07, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Tamir Bendory Title:Estimation in extreme noise levels with application to cryo-electron microscopyAbstract:opens in new windowin html    pdfopens in new window

Single-particle cryo-electron microscopy (cryo-EM) is an innovative technology for elucidating structures of biological molecules at atomic-scale resolution. In a cryo-EM experiment, tomographic projections of a molecule, taken at unknown viewing directions, are embedded in highly noisy images at unknown locations. The cryo-EM problem is to estimate the 3-D structure of a molecule from these noisy images.

Inspired by cryo-EM, the talk will focus on two estimation problems: multi-reference alignment and blind deconvolution. These problems abstract away much of the intricacies of cryo-EM, while retaining some of its essential features. In multi-reference alignment, we aim to estimate a signal from its noisy, rotated observations. While the rotations and the signal are unknown, the goal is only to estimate the signal. In the blind deconvolution problem, the goal is to estimate a signal from its convolution with an unknown, sparse signal in the presence of noise. Focusing on the low SNR regime, I will propose the method of moments as a computationally efficient estimation framework for both problems and will introduce its properties. In particular, I will show that the method of moments allows estimating the sought signal accurately in any noise level, provided sufficiently many observations are collected, with only one pass over the data. I will then argue that the same principles carry through to cryo-EM, show examples, and draw potential implications.
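The following Python sketch (a toy example of my own, under the standard one-dimensional circular-shift model, not code from the talk) illustrates the method-of-moments idea: the power spectrum of each observation is invariant to the unknown shift, so averaging it over many noisy copies and subtracting the known noise bias recovers |FFT(x)|^2 in a single pass, at any noise level given enough observations.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, sigma = 21, 100000, 2.0
x = rng.normal(size=L)                            # unknown signal

power_sum = np.zeros(L)
for _ in range(M):                                # a single pass over the observations
    y = np.roll(x, rng.integers(L)) + sigma * rng.normal(size=L)
    power_sum += np.abs(np.fft.fft(y)) ** 2

power_est = power_sum / M - L * sigma ** 2        # subtract the known noise bias
power_true = np.abs(np.fft.fft(x)) ** 2
print(np.linalg.norm(power_est - power_true) / np.linalg.norm(power_true))
```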

TuesdayNov 06, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Erez LapidTitle:Results and conjectures about irreducibility of parabolic induction of representations of the general linear group over a local non-archimedean fieldAbstract:opens in new windowin html    pdfopens in new window

The representation theory of GL(n,F), F non-archimedean is a classical subject initiated by Bernstein and Zelevinsky in the 1970s.

I will review some recent results and conjectures which aim to characterize irreducibility of parabolic induction, in terms of geometry. Joint with Alberto Minguez

MondayNov 05, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Moni NaorTitle:White- box vs. black-box search problems: a cryptographic perspectiveAbstract:opens in new windowin html    pdfopens in new window

Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black-box, where the box must be queried to determine whether a given edge exists, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? What if we are assured that the program is small without being given explicitly?

In this talk I will explore recent work on the complexity of search problems with guaranteed solution (the class TFNP) and the tight relationship with cryptographic assumptions and techniques.

Based on joint works with Pavel Hubacek, Ilan Komargodski and Eylon Yogev

ThursdayNov 01, 201811:30
Computer Science Seminar
Speaker:Adi ShamirTitle:Machine Learning in Security: Applications and ImplicationsAbstract:opens in new windowin html    pdfopens in new windowWolfson Auditorium

In this talk I will survey the way machine learning research is affecting the field of security, and the way security research is affecting the field of machine learning. After giving several examples of each type, I will discuss some of the latest developments in adversarial machine learning research, which are used by hackers to defeat security mechanisms based on regular machine learning techniques.

ThursdayOct 25, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Daniel Dadush Title:Balancing vectors in any normAbstract:opens in new windowin html    pdfopens in new window

In the vector balancing problem, we are given N vectors v_1,..., v_N in an n-dimensional normed space, and our goal is to assign signs to them, so that the norm of their signed sum is as small as possible. The balancing constant of the vectors is the smallest number beta, such that any subset of the vectors can be balanced so that their signed sum has norm at most beta.
The vector balancing constant generalizes combinatorial discrepancy, and is related to rounding problems in combinatorial optimization, and to the approximate Caratheodory theorem. We study the question of efficiently approximating the vector balancing constant of any set of vectors, with respect to an arbitrary norm. We show that the vector balancing constant can be approximated in polynomial time to within factors logarithmic in the dimension, and is characterized by (an appropriately optimized version of) a known volumetric lower bound. Our techniques draw on results from geometric functional analysis and the theory of Gaussian processes. Our results also imply an improved approximation algorithm for hereditary discrepancy.
Joint work with Aleksandar Nikolov, Nicole Tomczak-Jaegermann and Kunal Talwar.
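To fix ideas, the following are two naive baselines of my own (not the approximation algorithm from the talk) for the vector balancing problem: random signings versus a simple greedy signing, measured in an arbitrary norm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 10
V = rng.normal(size=(N, n))
norm = lambda z: np.linalg.norm(z, ord=np.inf)   # any norm can be plugged in here

def random_signs(V, trials=1000):
    # Best signed sum found over random sign assignments
    best = np.inf
    for _ in range(trials):
        eps = rng.choice([-1.0, 1.0], size=len(V))
        best = min(best, norm(eps @ V))
    return best

def greedy_signs(V):
    # Choose each sign to keep the running partial sum small
    s = np.zeros(V.shape[1])
    for v in V:
        s = s + v if norm(s + v) <= norm(s - v) else s - v
    return norm(s)

print(random_signs(V), greedy_signs(V))
```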

WednesdayOct 17, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Alexander Elashvili Title:Cyclic elements in semisimple Lie algebrasAbstract:opens in new windowin html    pdfopens in new window

In the talk I will describe a theory of cyclic elements in semisimple Lie algebras. This notion was introduced by Kostant, who associated a cyclic element with the principal nilpotent and proved that it is regular semisimple. In particular, I will discuss the classification of all nilpotents giving rise to semisimple and regular semisimple cyclic elements. The results are from my joint work with V. Kac and E. Vinberg.

WednesdayOct 17, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Adam KapelnerTitle:Harmonizing Fully Optimal Designs with Classic Randomization in Fixed Trial ExperimentsAbstract:opens in new windowin html    pdfopens in new window

There is a movement in design of experiments away from the classic randomization put forward by Fisher, Cochran and others to one based on optimization. In fixed-sample trials comparing two groups, measurements of subjects are known in advance and subjects can be divided optimally into two groups based on a criterion of homogeneity or "imbalance" between the two groups. These designs are far from random. This talk seeks to understand the benefits and the costs over classic randomization in the context of different performance criteria such as Efron's worst-case analysis. In the criterion that we motivate, randomization beats optimization. However, the optimal design is shown to lie between these two extremes. Much-needed further work will provide a procedure to find this optimal design in different scenarios in practice. Until then, it is best to randomize.

ThursdaySep 13, 201813:30
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Maria GorelikTitle:Bounded modules for finite-dimensional Lie superalgebras.Abstract:opens in new windowin html    pdfopens in new window
Let g be a basic classical Lie superalgebra. A weight module is called bounded if the dimensions of its weight spaces are uniformly bounded. Theorems of Fernando-Futorny and Dimitrov-Mathieu-Penkov reduce the classification of irreducible bounded modules to the classification of irreducible bounded highest weight modules L(\lambda). For Lie algebras the bounded modules L(\lambda) were classified by O. Mathieu. They exist only for the series A and C. For Lie superalgebras L(\lambda) have been classified in all cases except for five series of low-rank orthosymplectic superalgebras. Using the Enright functor, I will show how the boundedness of L(\lambda) over g can be reduced to the boundedness over simple Lie algebras and the orthosymplectic algebra osp(1|2n). This work is a joint project with D. Grantcharov.
ThursdaySep 13, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Maria GorelikTitle:Enright functor without long formulae.Abstract:opens in new windowin html    pdfopens in new window

I will define the Enright functor for contragredient Lie superalgebras and discuss its properties. If time permits, we may discuss a proof of Arakawa's Theorem for osp(1|2l).
This work is a joint project with V. Serganova.

ThursdaySep 06, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 1
Speaker:Arkady Berenstein Title:Noncommutative clustersAbstract:opens in new windowin html    pdfopens in new window

The goal of my talk (based on joint work with Vladimir Retakh) is to introduce noncommutative clusters and their mutations, which can be viewed as generalizations of both classical and quantum cluster structures.

Each noncommutative cluster S is built on a torsion-free group G and a certain collection of its automorphisms. We assign to S a noncommutative algebra A(S) related to the group algebra of G, which is an analogue of the cluster algebra, and establish a noncommutative version of the Laurent Phenomenon in some algebras A(S).

"Cluster groups" G for which the Noncommutative Laurent Phenomenon holds include triangular groups of marked surfaces (closely related to the fundamental groups of their ramified double covers), free group of rank 2, and principal noncommutative tori which exist for any exchange matrix B.

TuesdaySep 04, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Nir GadishTitle:Stable character theory and representation stability.Abstract:opens in new windowin html    pdfopens in new window

Various algebraic and topological situations give rise to compatible sequences of representations of different groups, such as the symmetric groups, with stable asymptotic behavior. Representation stability is a recent approach to studying such sequences, which has proved effective for extracting important invariants. Coming from this point of view, I will introduce the associated character theory, which formally explains many of the approach's strengths (in char 0). Central examples are simultaneous characters of all symmetric groups, or of all Gl(n) over some finite field. Their mere existence gives applications to statistics of random matrices over finite fields, and raises many combinatorial questions.

ThursdayAug 30, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Assaf Naor Title:Coarse (non)Universality of Alexandrov SpacesAbstract:opens in new windowin html    pdfopens in new window

We will show that there exists a metric space that does not admit a coarse embedding into any Alexandrov space of global nonpositive curvature, thus answering a question of Gromov (1993). In contrast, any metric space embeds coarsely into an Alexandrov space of nonnegative curvature. Based on joint works with Andoni and Neiman, and Eskenazis and Mendel.

TuesdayAug 28, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Mattias Jonsson Title:Degenerations of p-adic volume forms.Abstract:opens in new windowin html    pdfopens in new window

Let $X$ be an $n$-dimensional smooth projective variety over a non-Archimedean local field $K$, such as the $p$-adic numbers, and let $\omega$ be a global $n$-form on $X$. The set $X(K)$ of $K$-points on $X$ has the structure of a $K$-analytic manifold, and $\omega$ induces a measure $|\omega|$ on $X(K)$. For any finite extension $K'$ of $K$, there is a natural continuous map from $X(K')$ to the Berkovich analytification $X^{\mathrm{an}}$ of $X$. We study the asymptotics of the images of the measures $|\omega\otimes_KK'|$ on $X^{\mathrm{an}}$ as $K'$ runs through towers of finite extensions of $K$. This is joint work with Johannes Nicaise.

ThursdayAug 16, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:David Ellis Title:Random graphs with constant r-ballsAbstract:opens in new windowin html    pdfopens in new window

Let F be a fixed infinite, vertex-transitive graph. We say a graph G is `r-locally F' if for every vertex v of G, the ball of radius r and centre v in G is isometric to the ball of radius r in F. For each positive integer n, let G_n be a graph chosen uniformly at random from the set of all unlabelled, n-vertex graphs that are r-locally F. We investigate the properties that the random graph G_n has with high probability --- i.e., how these properties depend on the fixed graph F.

We show that if F is a Cayley graph of a torsion-free group of polynomial growth, then there exists a positive integer r_0 such that for every integer r at least r_0, with high probability the random graph G_n = G_n(F,r) defined above has largest component of size between n^{c_1} and n^{c_2}, where 0 < c_1 < c_2 < 1 are constants depending upon F alone, and moreover that G_n has a rather large automorphism group. This contrasts sharply with the random d-regular graph G_n(d) (which corresponds to the case where F is replaced by the infinite d-regular tree).

Our proofs use a mixture of results and techniques from group theory, geometry and combinatorics.

We obtain somewhat more precise results in the case where F is L^d (the standard Cayley graph of Z^d): for example, we obtain quite precise estimates on the number of n-vertex graphs that are r-locally L^d, for r at least linear in d.

Many intriguing open problems remain: concerning groups with torsion, groups with faster than polynomial growth, and what happens for more general structures than graphs.

This is joint work with Itai Benjamini (WIS).

TuesdayJul 31, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Roberto Rubio Title:On exceptional symmetric Gelfand pairsAbstract:opens in new windowin html    pdfopens in new window
A pair of a reductive linear algebraic group G and a subgroup H is said to be a Gelfand pair when, roughly speaking, the representation of G on $\mathcal{C}^\infty(G/H)$ is multiplicity free. Symmetric pairs, those where H is the fixed-point set of an involution on G, give many examples of Gelfand pairs. The Aizenbud-Gourevitch criterion, based on a previous distributional criterion by Gelfand and Kazhdan, was introduced to prove that many classical symmetric pairs are Gelfand pairs. For complex symmetric pairs, it says that the Gelfand property holds if the pair and all its descendants (centralizers of admissible semisimple elements) satisfy a certain regularity condition (expressed in terms of invariant distributions). In this talk we will focus on the twelve exceptional complex symmetric pairs and combine the Aizenbud-Gourevitch criterion with Lie-theoretic techniques. We will first introduce the concept of a pleasant pair, which will allow us to prove regularity for many pairs. We will then show how to compute descendants visually, thanks to the Satake diagram. The combination of these results with the criterion yields that nine out of the twelve pairs are Gelfand, and that the Gelfand property for the remaining three is equivalent to the regularity of one exceptional and two classical symmetric pairs.
TuesdayJul 31, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Edriss S. TitiTitle:On Recent Advances of the 3D Euler Equations by Means of ExamplesAbstract:opens in new windowin html    pdfopens in new window


ThursdayJul 26, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Eliran Subag Title:Free energy landscapes in spherical spin glassesAbstract:opens in new windowin html    pdfopens in new window

I will describe a new approach to the study of spherical spin glass models via free energy landscapes, defined by associating to interior points of the sphere the free energy computed only over the spherical band around them.
They are closely related to several fundamental objects from spin glass theory: the TAP free energy, pure states decomposition, overlap distribution, and temperature chaos. I will explain some of those connections.

TuesdayJul 17, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Anthony JosephTitle:Some comments on the lowest degree appearances of representations.Abstract:opens in new windowin html    pdfopens in new window

TBA

TuesdayJul 10, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Anthony JosephTitle:Some comments on the lowest degree appearances of representations.Abstract:opens in new windowin html    pdfopens in new window

TBA

ThursdayJul 05, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Jonathan Hermon Title:The exclusion process (usually) mixes faster than independent particles.Abstract:opens in new windowin html    pdfopens in new window
The exclusion process is one of the most basic and best studied processes in the literature on interacting particle systems, with connections to card shuffling and statistical mechanics. It has been one of the major examples driving the study of mixing times. In the exclusion process on an n-vertex graph we have k black particles and n-k white particles, one per site. Each edge rings at rate 1. When an edge rings, the particles occupying its end-points switch positions. Oliveira conjectured that the order of the mixing time of the process is at most that of the mixing time of k independent particles. Together with Richard Pymar we verify this up to a constant factor for d-regular (or bounded degree) graphs in various cases: (1) the degree d is at least logarithmic in n, or (2) the spectral gap of a single walk is small (at most $(\log n)^4$, where n is the number of vertices), or (3) the number of particles k is roughly $n^a$ for some constant $0 < a < 1$. Our general bound (for $k \ge n^c$) is within a $\log \log n$ factor from Oliveira's conjecture. As applications we get new mixing bounds: (a) $O(\log n \log \log n)$ for expanders, (b) order $\log(dk)$ for the hypercube $\{0,1\}^d$, and (c) order $(\mathrm{diameter})^2 \log k$ for vertex-transitive graphs of moderate growth and for the giant component of supercritical percolation on a torus.
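
To make the dynamics concrete, here is a minimal continuous-time simulation sketch of the exclusion process on an arbitrary graph; the specific graph, time horizon and initial configuration in the example are illustrative choices, not taken from the talk.

    import random

    def simulate_exclusion(edges, n, k, t_max, seed=0):
        """Simulate the exclusion process: k black particles among n sites.

        Each edge carries an independent rate-1 Poisson clock; when an edge
        rings, the contents of its two endpoints are swapped (swapping two
        equal colours has no visible effect), exactly as described above.
        """
        rng = random.Random(seed)
        config = [1] * k + [0] * (n - k)       # 1 = black particle, 0 = white
        rng.shuffle(config)
        t = 0.0
        while True:
            # Superposition of len(edges) rate-1 clocks: next ring ~ Exp(len(edges)).
            t += rng.expovariate(len(edges))
            if t > t_max:
                return config
            u, v = rng.choice(edges)           # the ringing edge is uniform
            config[u], config[v] = config[v], config[u]

    # Example: a cycle on 10 vertices with 3 black particles, run up to time 5.
    n = 10
    edges = [(i, (i + 1) % n) for i in range(n)]
    print(simulate_exclusion(edges, n, k=3, t_max=5.0))
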
ThursdayJul 05, 201812:15
Vision and AIRoom 1
Speaker:Netalee Efrat Title:Beyond the limitations of sensors and displaysAbstract:opens in new windowin html    pdfopens in new window

In this talk I explore various limitations of sensors and displays, and suggest new ways to overcome them.
These include:

  1. The limited 2D displays of today's screens - I will show how we can design new displays that enable us to see in 3D *without* any glasses.
  2. The limited spatial resolution of images - I will discuss the crucial factors for successful Super-Resolution.
  3. The poor image quality due to motion blur (due to camera motion or scene motion) - I will present a new approach for Blind Non-Uniform Deblurring.
WednesdayJul 04, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Alexander Volfovsky Title:Causal inference from complex observational data Abstract:opens in new windowin html    pdfopens in new window

A classical problem in causal inference is that of matching treatment units to control units in an observational dataset. This problem is distinct from simple estimation of treatment effects as it provides additional practical interpretability of the underlying causal mechanisms that is not available without matching. Some of the main challenges in developing matching methods arise from the tension among (i) inclusion of as many covariates as possible in defining the matched groups, (ii) having matched groups with enough treated and control units for a valid estimate of average treatment effect in each group, (iii) computing the matched pairs efficiently for large datasets, and (iv) dealing with complicating factors such as non-independence among units. We propose the Fast Large-scale Almost Matching Exactly (FLAME) framework to tackle these problems for categorical covariates. At its core this framework proposes an optimization objective for match quality that captures covariates that are integral for making causal statements while encouraging as many matches as possible. We demonstrate that this framework is able to construct good matched groups on relevant covariates and further extend the methodology to incorporate continuous and other complex covariates. related papers: https://arxiv.org/abs/1707.06315, https://arxiv.org/abs/1806.06802

TuesdayJul 03, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Moshe KamenskyTitle:Fields with free operators in positive characteristicAbstract:opens in new windowin html    pdfopens in new window

Moosa and Scanlon defined a general notion of "fields with operators", which generalizes those of difference and differential fields. In the case of "free" operators in characteristic zero they also analyzed the basic model-theoretic properties of the theory of such fields. In particular, they showed in this case the existence of the model companion, a construction analogous to that of algebraically closed fields for usual fields. In positive characteristic, they provided an example showing that the model companion need not exist.
 
I will discuss work, joint with Beyarslan, Hoffman and Kowalski, that completes the description of the free case, namely, it provides a full classification of those free operators for which the model companion exists. Though the motivating question is model theoretic, the description and the proof are completely algebraic and geometric. If time permits, I will discuss additional properties, such as quantifier elimination. All notions related to model theory and to fields with operators will be explained (at least heuristically).

SPECIAL NOTE: this will be part of a model theory day. Thus, the talk will be preceded by an introduction to algebraic geometry by the same speaker, 10-10:45 (in Room 1), and followed by a talk by Nick Ramsey, "Classification Theory and the Construction of PAC Fields", 14-16 (in Room 155). See https://mt972.weebly.com/ for more information.

MondayJul 02, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Ariel Procaccia Title:Extreme DemocracyAbstract:opens in new windowin html    pdfopens in new window

Technological advances have changed every aspect of our lives in recent decades, yet, for the most part, the same systems of democratic decision making have been in place for centuries. I will argue that computer scientists can help rethink the practice of democracy, as well as its potential applications. I will focus on three emerging paradigms that go far beyond your run-of-the-mill election: (i) liquid democracy, an approach that allows voters to transitively delegate their votes; (ii) participatory budgeting, whereby residents collectively decide how to spend their local government's budget; and (iii) virtual democracy, which employs instant elections among machine learning models of real voters to address the grand AI challenge of ethical decision making.

ThursdayJun 28, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Omer BobrowskiTitle:Homological connectivity and percolation in random geometric complexesAbstract:opens in new windowin html    pdfopens in new window
Connectivity and percolation are two well studied phenomena in random graphs. In this talk we will discuss higher-dimensional analogues of connectivity and percolation that occur in random simplicial complexes. Simplicial complexes are a natural generalization of graphs, consisting of vertices, edges, triangles, tetrahedra, and higher dimensional simplexes. We will mainly focus on random geometric complexes. These complexes are generated by taking the vertices to be a random point process, and adding simplexes according to their geometric configuration. Our generalized notions of connectivity and percolation use the language of homology - an algebraic-topological structure representing cycles of different dimensions. In this talk we will review some recent progress in characterizing and analyzing these phenomena, as well as describing related phase transitions.
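
As a concrete illustration of the objects described above, the sketch below builds one common flavour of random geometric complex, the Vietoris-Rips complex on uniform random points in the unit square; the sampling model, the radius and the dimension cap are illustrative assumptions rather than the talk's exact setting.

    import itertools
    import math
    import random

    def random_rips_complex(num_points, radius, max_dim=2, seed=0):
        """Sample points uniformly in the unit square and build the
        Vietoris-Rips complex: a simplex is included whenever all of its
        vertices are pairwise within distance `radius`."""
        rng = random.Random(seed)
        pts = [(rng.random(), rng.random()) for _ in range(num_points)]

        def close(i, j):
            return math.dist(pts[i], pts[j]) <= radius

        simplices = {0: [(i,) for i in range(num_points)]}
        for dim in range(1, max_dim + 1):
            simplices[dim] = [
                s for s in itertools.combinations(range(num_points), dim + 1)
                if all(close(i, j) for i, j in itertools.combinations(s, 2))
            ]
        return pts, simplices

    pts, cx = random_rips_complex(num_points=50, radius=0.2)
    print({d: len(s) for d, s in cx.items()})   # counts of vertices, edges, triangles
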
ThursdayJun 28, 201812:15
Vision and AIRoom 1
Speaker:Ariel EphratTitle:Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech SeparationAbstract:opens in new windowin html    pdfopens in new window

We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to "focus" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).

Joint work with Inbar Mosseri, Oran Lang, Tali Dekel, Kevin Wilson, Avinatan Hassidim, Bill Freeman and Miki Rubinstein of Google Research.

WednesdayJun 27, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Tamir Bendory Title:Estimation in low SNR environment and under group action with application to cryo-EMAbstract:opens in new windowin html    pdfopens in new window

Cryo-electron microscopy (cryo-EM) is an imaging technology that is revolutionizing structural biology, enabling reconstruction of molecules at near-atomic resolution.  
Cryo-EM produces a large number of noisy two-dimensional tomographic projection images of a molecule, taken at unknown viewing directions. 
The extreme levels of noise make classical tasks in statistics and signal processing, such as alignment, detection and clustering, very challenging. 
I will start the talk by studying the multi-reference alignment problem, which can be interpreted as a simplified model for cryo-EM. In multi-reference alignment, we aim to estimate multiple signals from circularly-translated, unlabeled, noisy copies. 
In high noise regimes, the measurements cannot be aligned or clustered. Nonetheless, accurate and efficient estimation can be achieved via group-invariant representations (invariant polynomials). Furthermore, such estimators achieve the optimal estimation rate. 
Then, I will show how this framework can be applied to the problem of 2-D classification in cryo-EM. In the last part of the talk, I will introduce the analog invariants of the cryo-EM problem and discuss how they can be used for ab initio modeling.
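
As a toy illustration of the invariant-features idea in the multi-reference alignment part of the talk (not the full invariant-polynomial machinery): the power spectrum is unchanged by circular shifts, so averaging it over the noisy copies and subtracting the noise bias recovers it at any noise level, given enough samples. The signal, noise level and the restriction to this single second-order invariant are illustrative assumptions.

    import numpy as np

    def mra_power_spectrum_estimate(x, num_copies=10000, sigma=2.0, seed=0):
        """Generate circularly shifted, noisy copies of x and estimate the
        power spectrum |FFT(x)|^2, which is invariant to the unknown shifts."""
        rng = np.random.default_rng(seed)
        n = len(x)
        acc = np.zeros(n)
        for _ in range(num_copies):
            shift = rng.integers(n)
            y = np.roll(x, shift) + sigma * rng.standard_normal(n)
            acc += np.abs(np.fft.fft(y)) ** 2
        # E|FFT(y)_k|^2 = |FFT(x)_k|^2 + n * sigma^2, so subtract the noise bias.
        return acc / num_copies - n * sigma ** 2

    x = np.array([0.0, 1.0, 3.0, 1.0, 0.0, -1.0, -2.0, -1.0])
    print(np.round(mra_power_spectrum_estimate(x), 2))    # estimate from noisy copies
    print(np.round(np.abs(np.fft.fft(x)) ** 2, 2))        # ground truth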
 

TuesdayJun 26, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Siddhartha SahiTitle:The Capelli eigenvalue problem for Lie superalgebrasAbstract:opens in new windowin html    pdfopens in new window
TBA
TuesdayJun 26, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Horacio G. RotsteinTitle:Network resonance: impedance interactions and frequency response alternating maps (FRAM)Abstract:opens in new windowin html    pdfopens in new window

The mechanisms underlying the generation of neuronal network oscillations are not well understood. We view this process as the individual neurons' oscillations being communicated among the nodes in the network, mediated by the impedance profiles of the isolated (uncoupled) individual neurons. In order to test this idea, we developed a mathematical tool that we refer to as the Frequency Response Alternating Map (FRAM). The FRAM describes how the impedances of the individual oscillators interact to generate network responses to oscillatory inputs. It consists of decoupling the non-autonomous part of the coupling term and substituting the reciprocal coupling by a sequence of alternating one-directional forcing effects (cell 1 forces cell 2, which in turn forces cell 1, and so on). The end result is an expression of the network impedance for each node as a power series, each term involving the product of the impedances of the autonomous parts of the individual oscillators. For linear systems we provide analytical expressions of the FRAM and discuss its convergence properties and limitations. We illustrate numerically that this convergence is relatively fast. We apply the FRAM to the phenomenon of network resonance in the simplest type of oscillatory network: two non-oscillatory nodes receiving oscillatory inputs in one or both nodes. We discuss extensions of the FRAM to nonlinear systems and other types of network architectures.
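
For orientation, here is a minimal sketch of the kind of expansion described above, under the simplifying assumptions of a linear two-node network with voltage coupling coefficients $c_1, c_2$ and oscillatory input to node 1 only (these symbols and assumptions are illustrative, not the talk's general setting):

$$ V_1 = Z_1 I_1 + c_1 Z_1 V_2, \quad V_2 = c_2 Z_2 V_1 \;\Longrightarrow\; Z_1^{\mathrm{net}}(\omega) = \frac{V_1}{I_1} = \frac{Z_1(\omega)}{1 - c_1 c_2 Z_1(\omega) Z_2(\omega)} = Z_1(\omega) \sum_{k \ge 0} \bigl( c_1 c_2\, Z_1(\omega) Z_2(\omega) \bigr)^k, $$

which converges whenever $|c_1 c_2 Z_1(\omega) Z_2(\omega)| < 1$; each term of the series involves products of the uncoupled impedances, matching the alternating picture in which node 1 forces node 2, which in turn forces node 1, and so on.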

MondayJun 25, 201816:00
Machine Learning and Statistics SeminarRoom 155
Speaker:Ariel Jaffe / Phd defenceTitle:Spectral methods for unsupervised ensemble learning and latent variable modelsAbstract:opens in new windowin html    pdfopens in new windowNOTE SPECIAL DATE, TIME AND ROOM

The use of latent variables in probabilistic modeling is a standard approach in numerous data analysis applications. In recent years, there has been a surge of interest in spectral methods for latent variable models, where inference is done by analyzing the lower order moments of the observed data. In contrast to iterative approaches such as the EM algorithm, under appropriate conditions spectral methods are guaranteed to converge to the true model parameters given enough data samples.
The focus of the seminar is the development of novel spectral based methods for two problems in statistical machine learning. In the first part, we address unsupervised ensemble learning, where one obtains predictions from different sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to directly assess it. We develop algorithms to estimate the reliability of the classifiers based on a common assumption that different classifiers make statistically independent errors. In addition, we show how one can detect subsets of classifiers that strongly violate the model of independent errors, in a fully unsupervised manner.
In the second part of the seminar we show how one can use spectral methods to learn the parameters of binary latent variable models. This model has many applications such as overlapping clustering and Gaussian-Bernoulli restricted Boltzmann machines. Our methods are based on computing the eigenvectors of both the second and third moments of the observed variables.
For both problems, we show that spectral based methods can be applied effectively, achieving results that are state of the art in various problems in computational biology and population genetics.
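
A toy sketch of the rank-one idea behind the ensemble-learning part: if the classifiers make conditionally independent errors, the off-diagonal of their covariance matrix is approximately rank one, and reliabilities can be read off its leading eigenvector. The particular iterative diagonal-imputation estimator below is a simplified illustration, not the algorithms developed in the thesis.

    import numpy as np

    def estimate_classifier_reliability(predictions):
        """predictions: (m, n) array of +-1 votes of m classifiers on n items.

        Under conditional independence, off-diagonal covariance entries satisfy
        Q_ij ~ v_i * v_j, where v_i measures how correlated classifier i is with
        the (unknown) true labels.  We fit a rank-one matrix to the off-diagonal
        entries by alternating between imputing the diagonal and taking the
        leading eigenvector.
        """
        Q = np.cov(predictions)
        R = Q.copy()
        np.fill_diagonal(R, 0.0)            # the diagonal does not follow v_i * v_j
        for _ in range(20):
            w, U = np.linalg.eigh(R)
            v = U[:, -1] * np.sqrt(max(w[-1], 0.0))
            np.fill_diagonal(R, v * v)      # impute the diagonal and repeat
        return np.sign(v.sum()) * v         # fix the global sign

    # Example: 5 classifiers with accuracies 0.9 ... 0.55 voting on 2000 items.
    rng = np.random.default_rng(0)
    truth = rng.choice([-1, 1], size=2000)
    acc = np.array([0.9, 0.8, 0.7, 0.65, 0.55])
    votes = np.where(rng.random((5, 2000)) < acc[:, None], truth, -truth)
    print(np.round(estimate_classifier_reliability(votes), 2))   # roughly 2*acc - 1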

ThursdayJun 21, 201812:15
Vision and AIRoom 1
Speaker:Danielle Ezuz Title:Reversible Harmonic MapsAbstract:opens in new windowin html    pdfopens in new window

Information transfer between triangle meshes is of great importance in computer graphics and geometry processing. To facilitate this process, a smooth and accurate map is typically required between the two meshes. While such maps can sometimes be computed between nearly-isometric meshes, the more general case of meshes with diverse geometries remains challenging. This talk describes a novel approach for direct map computation between triangle meshes without mapping to an intermediate domain, which optimizes for the harmonicity and reversibility of the forward and backward maps. Our method is general both in the information it can receive as input, e.g. point landmarks, a dense map or a functional map, and in the diversity of the geometries to which it can be applied. We demonstrate that our maps exhibit lower conformal distortion than the state-of-the-art, while succeeding in correctly mapping key features of the input shapes.

TuesdayJun 19, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Nadav DymTitle:Linear algorithms for computing conformal mappingsAbstract:opens in new windowin html    pdfopens in new window

(joint with Noam Aigerman, Raz Sluzky and Yaron Lipman)
Computing homeomorphisms between surfaces is an important task in shape analysis fields such as computer graphics, medical imaging and  morphology. A fundamental tool for these tasks is solving Dirichlet's problem on an arbitrary Jordan domain with disc topology, where the boundary of the domain is mapped homeomorphically to the boundary of a specific target domain: A convex polygon. By the Rado-Kneser-Choquet Theorem such harmonic mappings are homeomorphisms onto the convex polygon. Standard finite element approximations of harmonic mappings lead to  discrete harmonic mappings, which have been proven to be homeomorphisms as well. Computing the discrete harmonic mappings is very efficient and reliable as the mappings are obtained as the solution of a sparse linear system of equations.

In this talk we show that the methodology above, can be used to compute *conformal* homeomorphisms, for domains with either disc or sphere topology:

By solving Dirichlet's problem with correct boundary conditions, we can compute conformal homeomorphisms from arbitrary Jordan domains to a specific canonical domain: a triangle. The discrete conformal mappings we compute are homeomorphisms, and approximate the conformal homeomorphism uniformly and in H^1. Similar methodology can also be used to conformally map a sphere-type surface to a planar Jordan domain, whose edges are identified so that the planar domain has the topology of a sphere.
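
To illustrate the linear-solve methodology in its simplest form, here is a sketch of a discrete harmonic (Tutte-style) embedding with uniform weights: boundary vertices are pinned to a convex polygon and each interior vertex is placed at the average of its neighbours, which amounts to one sparse linear solve per coordinate. The uniform weights and the circle as target polygon are simplifying assumptions of this sketch; the construction in the talk uses specific boundary conditions and a triangle as the canonical domain.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def tutte_embedding(num_vertices, edges, boundary):
        """Map a disc-topology mesh into the plane: boundary vertices go to
        evenly spaced points on the unit circle, and each interior vertex is
        the average of its neighbours (discrete harmonicity, uniform weights).
        """
        nbrs = [[] for _ in range(num_vertices)]
        for u, v in edges:
            nbrs[u].append(v)
            nbrs[v].append(u)

        boundary = list(boundary)
        on_bdry = set(boundary)
        L = sp.lil_matrix((num_vertices, num_vertices))
        b = np.zeros((num_vertices, 2))
        for i in range(num_vertices):
            if i in on_bdry:
                L[i, i] = 1.0
                t = 2.0 * np.pi * boundary.index(i) / len(boundary)
                b[i] = [np.cos(t), np.sin(t)]     # pin to the unit circle
            else:
                L[i, i] = len(nbrs[i])
                for j in nbrs[i]:
                    L[i, j] = -1.0                # row encodes u_i = mean of neighbours
        A = L.tocsr()                             # sparse system, solved per coordinate
        return np.column_stack([spla.spsolve(A, b[:, 0]), spla.spsolve(A, b[:, 1])])

    # Tiny example: a square boundary with one interior vertex joined to all corners.
    uv = tutte_embedding(5, [(0, 1), (1, 2), (2, 3), (3, 0),
                             (4, 0), (4, 1), (4, 2), (4, 3)], boundary=[0, 1, 2, 3])
    print(np.round(uv, 3))    # the interior vertex lands at the centroid (0, 0)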

ThursdayJun 14, 201816:00
Special Guest LectureRoom 1
Speaker:Don ZagierTitle:Units, K-theory, and knotsAbstract:opens in new windowin html    pdfopens in new windowPLEASE NOTE THE CHANGE IN TIME

  In knot theory and more generally the theory of 3-manifolds various "quantum invariants" like the Witten-Reshetikhin-Turaev or the Kashaev invariant have been much studied in recent years,  in particular because of the famous "volume conjecture" related to the asymptotic growth of the Kashaev invariant.  Rather surprisingly, it transpired a few years ago that these invariants  also have very non-trivial number-theoretical properties, including a kind of weak invariance under the modular group SL(2,Z) ("quantum modular forms") and the experimental discovery of the appearance of certain units in cyclotomic extensions as factors in the asymptotic expansions.  The talk will report on this and specifically on recent joint work with Frank Calegari and Stavros Garoufalidis that constructs such units in a purely algebraic way starting from elements in algebraic K-theory or in the more elementary "Bloch group".  As an unexpected application, this result led to the proof of a well-known conjecture of Nahm on the modularity of certain q-hypergeometric series.

ThursdayJun 14, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Emanuel Milman Title:The Gaussian Double-Bubble and Multi-Bubble ConjecturesAbstract:opens in new windowin html    pdfopens in new window

The classical Gaussian isoperimetric inequality, established in the 70's independently by Sudakov-Tsirelson and Borell, states that the optimal way to decompose $\mathbb{R}^n$ into two sets of prescribed Gaussian measure, so that the (Gaussian) area of their interface is minimal, is by using two complementary half-spaces. This is the Gaussian analogue of the classical Euclidean isoperimetric inequality, and is therefore referred to as the "single-bubble" case.
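
In symbols, a standard way to state the single-bubble case (recalled here for orientation, with $\gamma_n$ the standard Gaussian measure, $\Phi$ the one-dimensional Gaussian distribution function and $\varphi = \Phi'$):

$$ \gamma_n^+(\partial A) \;\ge\; \varphi\bigl(\Phi^{-1}(\gamma_n(A))\bigr) \qquad \text{for every (sufficiently regular) } A \subseteq \mathbb{R}^n, $$

with equality when $A$ is a half-space; here $\gamma_n^+(\partial A)$ denotes the Gaussian boundary measure of $A$.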

A natural generalization is to decompose $\mathbb{R}^n$ into $q \geq 3$ sets of prescribed Gaussian measure. It is conjectured that when $q \leq n+1$, the configuration whose interface has minimal (Gaussian) area is given by the Voronoi cells of $q$ equidistant points. For example, for $q=3$ (the "double-bubble" conjecture) in the plane ($n=2$), the interface is conjectured to be a "tripod" or "Y" - three rays meeting at a single point in 120 degree angles. For $q=4$ (the "triple-bubble" conjecture) in $\mathbb{R}^3$, the interface is conjectured to be a tetrahedral cone.

We confirm the Gaussian double-bubble and, more generally, multi-bubble conjectures for all $3 \leq q \leq n+1$. The double-bubble case $q=3$ is simpler, and we will explain why.

None of the numerous methods discovered over the years for establishing the classical $q=2$ case seem amenable to the $q \geq 3$ cases, and our method consists of establishing a Partial Differential Inequality satisfied by the isoperimetric profile. To treat $q > 3$, we first prove that locally minimal configurations must have flat interfaces, and thus convex polyhedral cells. Uniqueness of minimizers up to null-sets is also established.

This is joint work with Joe Neeman (UT Austin).

TuesdayJun 12, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Adi Ditkowski Title:Error inhibiting schemes for differential equations Abstract:opens in new windowin html    pdfopens in new window

Typically, when semi-discrete approximations to time-dependent partial differential equations (PDE) or schemes for ordinary differential equations (ODE) are constructed, they are derived such that they are stable and have a specified truncation error $\tau$. Under these conditions, the Lax-Richtmyer equivalence theorem assures that the scheme converges and that the error is, at most, of the order of $||\tau||$. In most cases, the error is indeed of the order of $||\tau||$. We demonstrate that schemes can be constructed whose truncation errors are $\tau$, but whose actual errors are much smaller. This error reduction is achieved by constructing the schemes such that they inhibit the accumulation of the local errors; therefore they are called Error Inhibiting Schemes (EIS).
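
A minimal sketch of the standard bookkeeping behind this, for a one-step scheme $u^{n+1} = Q u^n$ with truncation error $\tau$ (generic constants; the notation here is illustrative):

$$ e^{n+1} = Q\, e^{n} + \Delta t\, \tau^{n}, \qquad \|e^{n}\| \;\le\; C(T)\, \max_{m \le n} \|\tau^{m}\|, $$

so for a stable scheme the error is generically of the order of the truncation error; an error inhibiting scheme is built so that the accumulated local contributions $Q^{n-m} \Delta t\, \tau^{m}$ largely cancel, leaving an error much smaller than $\|\tau\|$.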

MondayJun 11, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Roy Schwartz Title:Trees for Vertex Cuts, Hypergraph Cuts and Minimum Hypergraph BisectionAbstract:opens in new windowin html    pdfopens in new window

In the Minimum Hypergraph Bisection problem, the vertex set of a hypergraph has to be partitioned into two parts of equal size so that the number of hyperedges intersecting both parts is minimized.
This problem is a natural generalization of the well-studied Minimum Bisection problem in graphs.
We present a sharp distinction between Minimum Bisection in hypergraphs and graphs.
Whereas it is well-known that all bi-criteria approximation algorithms for Minimum Bisection in graphs can be extended to hypergraphs with the exact same guarantees, we prove that this is not the case when considering true (i.e., non bi-criteria) approximation algorithms.
Specifically, we show that Minimum Hypergraph Bisection admits an $\tilde{\mathcal{O}}(\sqrt{n})$ approximation algorithm.
However, we also show that any $\alpha$-approximation algorithm for Minimum Hypergraph Bisection implies an approximation of $\Omega(\alpha^{-2})$ for Densest $k$-Subgraph.
Thus, assuming the exponential time hypothesis, there is no efficient approximation algorithm for Minimum Hypergraph Bisection with an approximation ratio of $n^{1/\mathrm{poly}(\log{\log{n}})}$.

In particular, Minimum Hypergraph Bisection is much harder to approximate than Minimum Bisection in graphs, for which a logarithmic approximation algorithm exists.
If time permits, the problem of constructing trees that are cut sparsifiers for hypergraph and vertex cuts will also be discussed.
While similar trees lie at the heart of powerful algorithms for Minimum Bisection in graphs, we prove that this is not the case for hypergraphs.
Joint work with Harald Räcke and Richard Stotz.

ThursdayJun 07, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Ohad Feldheim (HUJI) and Eviatar Procaccia (Texas A&M) Title:Double seminarAbstract:opens in new windowin html    pdfopens in new window

Ohad Feldheim: Convergence of a quantile admission processes
Abstract: Consider the following stochastic model for a growing set. At time 0 the model consists of the singleton S = {-infty}. At every subsequent time, two i.i.d. samples, distributed according to some distribution D on R, are suggested as candidates for S. If the smaller among the two is closer to at least a fraction r of the current elements of S (in comparison with the larger one), then it is admitted into S.
How will the distribution of the members of S evolve over time as a function of r and D?
This model was suggested by Alon, Feldman, Mansour, Oren and Tennenholtz as a model for the evolution of an exclusive social group. We'll show that the empirical distribution of the elements of S converges to a (not-necessarily deterministic) limit distribution for any r and D.
This we do by relating the process to a random walk in changing environment. The analysis of this random walk involves various classical exponential concentration inequalities as well as a new general inequality concerning mean and minimum of independent random variables.
Joint work with Naomi Feldheim
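
A short simulation sketch of the admission rule described above. The candidate distribution and the fraction r in the example are illustrative; the abstract does not specify what happens when the smaller candidate is rejected, so the sketch assumes the larger candidate joins instead, keeping the set growing by one element per step.

    import bisect
    import random

    def quantile_admission(r, sample, steps, seed=0):
        """Simulate the quantile admission process sketched above.

        S starts as {-infinity}.  At each step two i.i.d. candidates a < b
        arrive; a member s of S is closer to a than to b exactly when
        s <= (a + b) / 2.  If at least a fraction r of S is closer to a,
        then a is admitted; otherwise (a modelling assumption of this
        sketch) b is admitted.
        """
        rng = random.Random(seed)
        S = [float("-inf")]                       # kept sorted
        for _ in range(steps):
            a, b = sorted((sample(rng), sample(rng)))
            closer = bisect.bisect_right(S, (a + b) / 2.0)
            admitted = a if closer >= r * len(S) else b
            bisect.insort(S, admitted)
        return S

    S = quantile_admission(r=0.6, sample=lambda rng: rng.gauss(0.0, 1.0), steps=20000)
    print(len(S), S[len(S) // 2])                 # size and empirical median of S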

Eviatar Procaccia:  Stabilization of Diffusion Limited Aggregation in a Wedge
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.

ThursdayJun 07, 201812:15
Vision and AIRoom 1
Speaker:Mark Sheinin Title:Leveraging Hidden Structure in Unstructured IlluminationAbstract:opens in new windowin html    pdfopens in new window

Artificial illumination plays a vital role in human civilization. In computer vision, artificial light is extensively used to recover 3D shape, reflectance, and further scene information. However, in most computer vision applications using artificial light, some additional structure is added to the illumination to facilitate the task at hand. In this work, we show that the ubiquitous alternating current (AC) lights already have a valuable inherent structure that stems from bulb flicker. By passively sensing scene flicker, we reveal new scene information which includes: the type of bulbs in the scene, the phases of the electric grid up to city scale, and the light transport matrix. This information yields unmixing of reflections and semi-reflections, nocturnal high dynamic range, and scene rendering with bulbs not observed during acquisition. Moreover, we provide methods that enable capturing scene flicker using almost any off-the-shelf camera, including smartphones.

In underwater imaging, similar structures are added to illumination sources to overcome the limited imaging range. In this setting, we show that by optimizing camera and light positions while taking the effect of light propagation in scattering media into account we achieve superior imaging of underwater scenes while using simple, unstructured illumination.

MondayJun 04, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Or SattathTitle:Quantum Tokens for Digital SignaturesAbstract:opens in new windowin html    pdfopens in new window

The fisherman caught a quantum fish. "Fisherman, please let me go", begged the fish, "and I will grant you three wishes". The fisherman agreed. The fish gave the fisherman a quantum computer, three quantum signing tokens and his classical public key. The fish explained: "to sign your three wishes, use the tokenized signature scheme on this quantum computer, then show your valid signature to the king, who owes me a favor".
The fisherman used one of the signing tokens to sign the document "give me a castle!" and rushed to the palace. The king executed the classical verification algorithm using the fish's public key, and since it was valid, the king complied.
The fisherman's wife wanted to sign ten wishes using their two remaining signing tokens. The fisherman did not want to cheat, and secretly sailed to meet the fish. "Fish, my wife wants to sign ten more wishes". But the fish was not worried: "I have learned quantum cryptography following the previous story (The Fisherman and His Wife by the brothers Grimm). The quantum tokens are consumed during the signing. Your polynomial wife cannot even sign four wishes using the three signing tokens I gave you".
"How does it work?" wondered the fisherman. "Have you heard of quantum money? These are quantum states which can be easily verified but are hard to copy. This tokenized quantum signature scheme extends Aaronson and Christiano's quantum money scheme, and a variant by Zhandry, which is why the signing tokens cannot be copied".
"Does your scheme have additional fancy properties?" the fisherman asked. "Yes, the scheme has other security guarantees: revocability, testability and everlasting security. Furthermore, if you're at sea and your quantum phone has only classical reception, you can use this scheme to transfer the value of the quantum money to shore", said the fish, and swam away.

Joint work with Shalev Ben-David. https://arxiv.org/abs/1609.09047

ThursdayMay 31, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Bhaswar Bhattacharya (UPenn) and Elliot Paquette (Ohio) Title:Double SeminarAbstract:opens in new windowin html    pdfopens in new window

Bhaswar Bhattacharya: Large Deviation Variational Problems in Random Combinatorial Structures

The upper tail problem in the Erdos-Renyi random graph $G\sim\mathcal{G}_{n,p}$, where every edge is included independently with probability $p$, is to estimate the probability that the number of copies of a graph $H$ in $G$ exceeds its expectation by a factor of $1+\delta$. The arithmetic analog of this problem counts the number of $k$-term arithmetic progressions in a random subset of $\{1, 2, \ldots, N\}$, where every element is included independently with probability $p$. The recently developed framework of non-linear large deviations (Chatterjee and Dembo (2016) and Eldan (2017)) shows that the logarithm of these tail probabilities can be reduced to a natural variational problem on the space of weighted graphs/functions. In this talk we will discuss methods for solving these variational problems in the sparse regime ($p \rightarrow 0$), and show how the solutions are often related to extremal problems in combinatorics. (This is based on joint work with Shirshendu Ganguly, Eyal Lubetzky, Xuancheng Shao, and Yufei Zhao.)

Elliot Paquette: Random matrix point processes via stochastic processes

In 2007, Virag and Valko introduced the Brownian carousel, a dynamical system that describes the eigenvalues of a canonical class of random matrices. This dynamical system can be reduced to a diffusion, the stochastic sine equation, a beautiful probabilistic object requiring no random matrix theory to understand. Many features of the limiting eigenvalue point process, the Sine--beta process, can then be studied via this stochastic process. We will sketch how this stochastic process is connected to eigenvalues of a random matrix and sketch an approach to two questions about the stochastic sine equation: deviations for the Sine--beta counting function and a functional central limit theorem.

TuesdayMay 29, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Max Gurevich Title:Branching laws for non-generic representationsAbstract:opens in new windowin html    pdfopens in new window

The celebrated Gan-Gross-Prasad conjectures aim to describe the branching behavior of representations of classical groups, i.e., the decomposition of irreducible representations when restricted to a lower rank subgroup.

These conjectures, whose global/automorphic version bear significance in number theory, have thus far been formulated and resolved for the generic case.

In this talk, I will present a newly formulated rule in the p-adic setting (again conjectured by G-G-P) for restriction of representations in non-generic Arthur packets of GL_n.

Progress towards the proof of the new rule takes the problem into the rapidly developing subject of quantum affine algebras. These techniques use a version of the Schur-Weyl duality for affine Hecke algebras, combined with new combinatorial information on parabolic induction extracted by Lapid-Minguez.

MondayMay 28, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Kevin Leyton-Brown Title:Learning as a Tool for Algorithm Design and Beyond-Worst-Case AnalysisAbstract:opens in new windowin html    pdfopens in new window

All known algorithms for solving NP-complete problems require exponential time in the worst case; however, these algorithms nevertheless solve many problems of practical importance astoundingly quickly, and are hence relied upon in a broad range of applications. This talk is built around the observation that "Empirical Hardness Models" - statistical models that predict algorithm runtime on novel instances from a given distribution - work surprisingly well. These models can serve as powerful tools for algorithm design, specifically by facilitating automated methods for algorithm design and for constructing algorithm portfolios. They also offer a statistical alternative to beyond-worst-case analysis and a starting point for theoretical investigations.

bio at http://www.cs.ubc.ca/~kevinlb/bio.html

 

TuesdayMay 22, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 155
Speaker:Itay Glazer Title:On singularity properties of convolution of algebraic morphisms Abstract:opens in new windowin html    pdfopens in new window

In analysis, a convolution of two functions usually results in a smoother, better behaved function. Given two morphisms f,g from algebraic varieties X,Y to an algebraic group G, one can define a notion of convolution of these morphisms. Analogously to the analytic situation, this operation yields a morphism (from X x Y to G) with improved smoothness properties.
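
For concreteness, the convolution in question can be written as follows (a sketch of the definition as I understand it; the talk's precise conventions may differ):

$$ f * g \colon X \times Y \longrightarrow G, \qquad (f * g)(x, y) \;=\; f(x) \cdot g(y), $$

and self convolution means iterating this with $f$ itself, e.g. $f * f \colon X \times X \to G$.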

In this talk, I will define a convolution operation and discuss some of its properties. I will then present a recent result; if G is an algebraic group, X is smooth and absolutely irreducible, and f:X-->G is a dominant map, then after finitely many self convolutions of f, we obtain a morphism with the property of being flat with fibers of rational singularities (a property which we call (FRS)).

Moreover, Aizenbud and Avni showed that the (FRS) property has an equivalent analytic characterization, which leads to various applications such as counting points of schemes over finite rings, representation growth of certain compact p-adic groups and arithmetic groups of higher rank, and random walks on (algebraic families of) finite groups. We will discuss some of these applications, and maybe some of the main ideas of the proof of the above result.

Joint with Yotam Hendel.

ThursdayMay 17, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Ron Peled Title:The fluctuations of random surfacesAbstract:opens in new windowin html    pdfopens in new window

Random surfaces in statistical physics are commonly modeled by a real-valued function phi on a lattice, whose probability density penalizes nearest-neighbor fluctuations. Precisely, given an even function V, termed the potential, the energy H(phi) is computed as the sum of V over the nearest-neighbor gradients of phi, and the probability density of phi is set proportional to exp(-H(phi)). The most-studied case is when V is quadratic, resulting in the so-called Gaussian free field. Brascamp, Lieb and Lebowitz initiated in 1975 the study of the global fluctuations of random surfaces for other potential functions and noted that understanding is lacking even in the case of the quartic potential, V(x)=x^4. We will review the state of the art for this problem and present recent work with Alexander Magazinov which finally settles the question of obtaining upper bounds for the fluctuations for the quartic and many other potential functions.
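
In formulas, for a finite box $\Lambda \subseteq \mathbb{Z}^d$ (boundary conditions and normalizations suppressed in this reminder of the setup):

$$ H(\varphi) \;=\; \sum_{x \sim y} V\bigl(\varphi(x) - \varphi(y)\bigr), \qquad d\mu(\varphi) \;\propto\; \exp\bigl(-H(\varphi)\bigr)\, \prod_{x \in \Lambda} d\varphi(x), $$

with $V(t) = t^2$ (up to normalization) giving the Gaussian free field and $V(t) = t^4$ the quartic case settled in the work described above.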

ThursdayMay 17, 201812:15
Vision and AIRoom 1
Speaker:Matan Sela Title:Deep Semi-supervised 3D Face ReconstructionAbstract:opens in new windowin html    pdfopens in new window

Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications in computer vision and graphics. We propose to leverage the power of convolutional neural networks (CNNs) to produce highly detailed face reconstruction directly from a single image. For this purpose, we introduce an end-to-end CNN framework which constructs the shape in a coarse-to-fine fashion. The proposed architecture is composed of two main blocks, a network which recovers the coarse facial geometry (CoarseNet), followed by a CNN which refines the facial features of that geometry (FineNet). To alleviate the lack of training data for face reconstruction, we train our model using synthetic data as well as unlabeled facial images collected from the internet. The proposed model successfully recovers facial geometries from real images, even for faces with extreme expressions and under varying lighting conditions. In this talk, I will summarize three papers that were published at 3DV 2016, CVPR 2017 (as an oral presentation), and ICCV 2017.

Bio: Matan Sela holds a Ph.D. in Computer Science from the Technion - Israel Institute of Technology. He received his B.Sc. and M.Sc. degrees (both with honors) in electrical engineering from the Technion - Israel Institute of Technology, in 2012 and 2015, respectively. During summer 2017, he was a research intern at Google, Mountain View, California. His interests are Machine Learning, Computer Vision, Computer Graphics, Geometry Processing and any combination thereof.

TuesdayMay 15, 201816:00
Seminar in Geometry and TopologyRoom 1
Speaker:Midory Komatsudanu QuispeTitle:On the Generic Rank of the Baum-Bott MapAbstract:opens in new windowin html    pdfopens in new window

The Baum-Bott indexes are important local invariants for singular holomorphic foliations by curves with isolated singularities. On a compact complex manifold, we consider foliations with fixed cotangent bundle. Then, the Baum-Bott map associates to a foliation its Baum-Bott indexes at its singularities. We focus on foliations on the projective space and we are interested in the generic rank of that map. The generic rank for foliations on the projective plane is known. For higher-dimensional projective spaces, we give an upper bound and in some cases we determine the generic rank.

TuesdayMay 15, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Leonard Schulman Title:Explicit Binary Tree Codes with Polylogarithmic Size AlphabetAbstract:opens in new windowin html    pdfopens in new windowNOTE SPECIAL DAY (TUESDAY)

This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size.

For every constant delta < 1 we give an explicit binary tree code with distance delta and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n).

As part of the analysis, we prove a bound on the number of positive integer roots a real polynomial can have in terms of its sparsity with respect to the Newton basis---a result of independent interest.

Joint work with G. Cohen and B. Haeupler

ThursdayMay 10, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:David Jerison (MIT) and Ron Rosenthal (Technion)Title:DOUBLE TALKAbstract:opens in new windowin html    pdfopens in new window

David Jerison: Localization of eigenfunctions via an effective potential
We discuss joint work with Douglas Arnold, Guy David, Marcel Filoche and Svitlana Mayboroda. Consider the Neumann boundary value problem for the operator $L u = - \mbox{div} (A \nabla u) + Vu$ on a Lipschitz domain $\Omega$ and, more generally, on a manifold with or without boundary. The eigenfunctions of $L$ are often localized, as a result of disorder of the potential $V$, the matrix of coefficients $A$, irregularities of the boundary, or all of the above. In earlier work, Filoche and Mayboroda introduced the function $u$ solving $Lu = 1$, and showed numerically that it strongly reflects this localization. Here, we deepen the connection between the eigenfunctions and this {\em landscape} function $u$ by proving that its reciprocal $1/u$ acts as an {\em effective potential}. The effective potential governs the exponential decay of the eigenfunctions of the system and delivers information on the distribution of eigenvalues near the bottom of the spectrum.
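
A sketch of the basic landscape inequality underlying this picture, stated loosely (as in the earlier Filoche-Mayboroda work): if $L\psi = \lambda \psi$ and $Lu = 1$ with the same boundary conditions, the maximum principle gives

$$ |\psi(x)| \;\le\; \lambda\, \|\psi\|_{\infty}\, u(x) \qquad \text{for all } x \in \Omega, $$

so eigenfunctions are forced to be small wherever the landscape $u$ is small, that is, wherever the effective potential $1/u$ is large.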

Ron Rosenthal: Eigenvector correlation in the complex Ginibre ensemble
The complex Ginibre ensemble is a non-Hermitian random matrix on C^N with i.i.d. complex Gaussian entries normalized to have mean zero and variance $1/N$. Unlike the Gaussian unitary ensemble, for which the eigenvectors are orthogonal, the geometry of the eigenbases of the Ginibre ensemble is not particularly well understood. We will discuss some results regarding the analytic and algebraic structure of eigenvector correlations in this matrix ensemble. In particular, we uncover an extended algebraic structure which describes the asymptotic behavior (as N goes to infinity) of these correlations. Our work extends previous results of Chalker and Mehlig [CM98], in which the correlation for pairs of eigenvectors was computed. Based on a joint work with Nick Crawford.

ThursdayMay 10, 201812:15
Vision and AIRoom 1
Speaker:Lihi Zelnik-ManorTitle:Maintaining Internal Image Statistics in Synthesized ImagesAbstract:opens in new windowin html    pdfopens in new window

Recent work has shown impressive success in automatically creating new images with desired properties such as transferring painterly style, modifying facial expressions or manipulating the center of attention of the image. In this talk I will discuss two of the standing challenges in image synthesis and how we tackle them:
- I will describe our efforts in making the synthesized images more photo-realistic.
- I will further show how we can broaden the scope of data that can be used for training synthesis networks, and with that provide a solution to new applications.

WednesdayMay 09, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Richard Olshen Title:V(D)J Diversity of Subsets of T and B Cells, Statistics and Probability Abstract:opens in new windowin html    pdfopens in new window

This talk will include an introduction to the topic of V(D)J rearrangements of particular subsets of T cells and B cells of the adaptive human immune system, in particular of IgG heavy chains. There are many statistical problems that arise in trying to understand these cells. They involve estimating aspects of functionals of discrete probabilities on (random) finite sets. Topics include but are not limited to exchangeability, estimating non-centrality parameters, and estimating covariance matrices from what are called "replicates" that have been amplified by the PCR process and (partially) sequenced.

I have received considerable assistance from Lu Tian, and also Yi Liu; as well, I have been helped considerably by Andrew Fire and Scott Boyd, and also Jorg Goronzy

TuesdayMay 08, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Leah Edelstein-KeshetTitle:Coupling mechanics and chemical signaling in cells: from single cell to the collectiveAbstract:opens in new windowin html    pdfopens in new window

Single cells and cells in a tissue respond to stimuli by deforming, changing their shape, and/or moving. Some of these responses can be understood from the underlying biochemical signaling, a topic that has been of interest to both biologists and modelers. In our recent work, my group has studied the link between mechanical tension on cells and their internal chemical signaling. (Our primary focus has been, and remains, that of Rho proteins.) Here I will describe a simple "toy model" that captures key aspects of what is known biologically. The model is simple enough to understand mathematically, and yet capable of displaying several regimes of behavior consistent with experimental observations. I describe how we investigated the model in a single cell, and how this was then used to capture multiple cells that interact with one another mechanically. We find waves of expansion and contraction that sweep through the model "tissue" in certain parameter regimes. This work is joint with Cole Zmurchok and Dhanajay Bhaskar.

MondayMay 07, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Yuval Dagan Title:Detecting Correlations with Little Memory and CommunicationAbstract:opens in new windowin html    pdfopens in new window

We study the problem of identifying correlations in multivariate data, under information constraints:
Either on the amount of memory that can be used by the algorithm, or the amount of communication when the data is distributed across several machines. We prove a tight trade-off between the memory/communication complexity and the sample complexity, implying (for example) that to detect pairwise correlations with optimal sample complexity, the number of required memory/communication bits is at least quadratic in the dimension. Our results substantially improve those of Shamir (2014), which studied a similar question in a much more restricted setting. To the best of our knowledge, these are the first provable sample/memory/communication trade-offs for a practical estimation problem, using standard distributions, and in the natural regime where the memory/communication budget is larger than the size of a single data point. To derive our theorems, we prove a new information-theoretic result, which may be relevant for studying other information-constrained learning problems.
Joint work with Ohad Shamir

TuesdayMay 01, 201814:30
Foundations of Computer Science ColloquiumRoom Pekeris
Speaker:Igor Shinkar Title:Probabilistic Checking against Non-Signaling Strategies from Linearity TestingAbstract:opens in new windowin html    pdfopens in new windowNOTE UNUSUAL DATE AND ROOM

Non-signaling strategies are collections of distributions with certain non-local correlations that have been studied recently in the context of delegation of computation.
In this talk I will discuss a recent result showing that the Hadamard based PCP of [ALMSS'92] is sound against non-signaling strategies. As part of the proof, we study the classical problem of linearity testing [BLR'93] in the setting of non-signaling strategies, and prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions.
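
For reference, here is a sketch of the classical BLR test over $\{0,1\}^n$ in the ordinary (single-function) setting, which is the object being re-examined against non-signaling strategies; the oracle interface and the toy examples are illustrative choices.

    import random

    def blr_linearity_test(f, n, trials=1000, seed=0):
        """Classical BLR test: f maps n-bit strings (encoded as ints) to {0,1}.
        Pick x, y uniformly and check f(x) XOR f(y) == f(x XOR y).  Passing
        with high probability means f is close to a linear function (a parity).
        """
        rng = random.Random(seed)
        passed = 0
        for _ in range(trials):
            x = rng.getrandbits(n)
            y = rng.getrandbits(n)
            if f(x) ^ f(y) == f(x ^ y):
                passed += 1
        return passed / trials

    # A parity of a fixed subset of coordinates always passes; a random
    # function passes on roughly half of the trials.
    parity = lambda z: bin(z & 0b1011).count("1") % 2
    rnd = random.Random(1)
    table = [rnd.getrandbits(1) for _ in range(2 ** 8)]
    print(blr_linearity_test(parity, n=8))
    print(blr_linearity_test(lambda z: table[z], n=8))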

Joint work with Alessandro Chiesa and Peter Manohar (UC Berkeley).

ThursdayApr 26, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 155
Speaker:Anirban Basak Title:Local weak limits of Ising and Potts measures on locally tree-like graphsAbstract:opens in new windowin html    pdfopens in new window

Consider a sequence of growing graphs converging locally weakly to an infinite (possibly random) tree. As there are uncountably many Ising and Potts Gibbs measures on the limiting tree in the low-temperature regime, it is not a priori clear whether the local weak limit of such measures exists and, if so, identifying the limit remains a challenge. In this talk, I will describe these limits. The talk is based on joint works with Amir Dembo and Allan Sly.
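
To fix notation (standard definitions, recalled for orientation): on a finite graph $G_n = (V_n, E_n)$, the Ising measure at inverse temperature $\beta$, with free boundary and no external field, is

$$ \mu_n(\sigma) \;=\; \frac{1}{Z_n(\beta)} \exp\Bigl( \beta \sum_{\{x,y\} \in E_n} \sigma_x \sigma_y \Bigr), \qquad \sigma \in \{-1,+1\}^{V_n}, $$

and the Potts analogue replaces $\sigma_x \sigma_y$ by $\mathbf{1}\{\sigma_x = \sigma_y\}$ with spins in $\{1, \dots, q\}$; the question above concerns the local weak limits of these measures along the graph sequence.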

ThursdayApr 26, 201812:15
Vision and AIRoom 1
Speaker:Sharon Fogel (Tel-Aviv University) and Hadar Averbuch-Elor (Tel Aviv University and Amazon AI) Title:Clustering-driven Deep Embedding with Pairwise ConstraintsAbstract:opens in new windowin html    pdfopens in new window

Recently, there has been increasing interest to leverage the competence of neural networks to analyze data. In particular, new clustering methods that employ deep embeddings have been presented.

In this talk, we depart from centroid-based models and suggest a new framework, called Clustering-driven deep embedding with PAirwise Constraints (CPAC), for non-parametric clustering using a neural network. We present a clustering-driven embedding based on a Siamese network that encourages pairs of data points to output similar representations in the latent space. Our pair-based model allows augmenting the information with labeled pairs to constitute a semi-supervised framework. Our approach is based on analyzing the losses associated with each pair to refine the set of constraints.
We show that clustering performance increases when using this scheme, even with a limited amount of user queries.
We present state-of-the-art results on different types of datasets and compare our performance to parametric and non-parametric techniques.

MondayApr 23, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Lior Gishboliner Title:A Generalized Turan Problem and Its ApplicationsAbstract:opens in new windowin html    pdfopens in new window

Our first theorem is a hierarchy theorem for the query complexity of testing graph properties with one-sided error; more precisely, we show that for every sufficiently fast-growing function f from (0,1) to the natural numbers, there is a graph property whose one-sided-error query complexity is precisely f(\Theta(\epsilon)). No result of this type was previously known for any f which is super-polynomial. Goldreich [ECCC 2005] asked to exhibit a graph property whose query complexity is exponential in 1/\epsilon. Our hierarchy theorem partially resolves this problem by exhibiting a property whose one-sided-error query complexity is exponential in 1/\epsilon. We also use our hierarchy theorem in order to resolve a problem raised by Alon and Shapira [STOC 2005] regarding testing relaxed versions of bipartiteness.

Our second theorem states that for any function f there is a graph property whose one-sided-error query complexity is at least f(\epsilon) while its two-sided-error query complexity is only polynomial in 1/\epsilon. This is the first indication of the surprising power that two-sided-error testing algorithms have over one-sided-error ones, even when restricted to properties that are testable with one-sided error. Again, no result of this type was previously known for any f that is super-polynomial.

The above theorems are derived from a graph theoretic result which we think is of independent interest, and might have further applications. Alon and Shikhelman [JCTB 2016] introduced the following generalized Turan problem: for fixed graphs H and T, and an integer n, what is the maximum number of copies of T, denoted by ex(n,T,H), that can appear in an n-vertex H-free graph? This problem received a lot of attention recently, with an emphasis on T = C_3, H=C_{2m+1}. Our third theorem gives tight bounds for ex(n,C_k,C_m) for all the remaining values of k and m.
Joint work with Asaf Shapira.

TuesdayApr 17, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Cy Maor Title:Elasticity and curvature: the elastic energy of non-Euclidean thin bodiesAbstract:opens in new windowin html    pdfopens in new window

Non-Euclidean, or incompatible elasticity is an elastic theory for bodies that do not have a reference (stress-free) configuration. It applies to many systems, in which the elastic body undergoes inhomogeneous growth (e.g. plants, self-assembled molecules). Mathematically, it is a question of finding the "most isometric" immersion of a Riemannian manifold (M,g) into Euclidean space of the same dimension, by minimizing an appropriate energy functional.
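
One standard form of such an energy functional, sketched here for orientation (the exact functional in the talk may differ in normalization or in the admissible class of maps), measures how far the differential is from being an orientation-preserving isometry at each point:

$$ E(f) \;=\; \int_M \operatorname{dist}^2\bigl( df_x,\, \mathrm{SO}(g_x, \mathfrak{e}) \bigr)\, d\mathrm{vol}_g(x), \qquad f \colon (M, g) \to (\mathbb{R}^d, \mathfrak{e}), $$

where $\mathrm{SO}(g_x, \mathfrak{e})$ denotes the orientation-preserving linear isometries from $(T_x M, g_x)$ to Euclidean space; roughly speaking, $E$ can be made to vanish only in the compatible (flat) case, and otherwise one studies minimizers and their scaling in the thin limit.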

Much of the research in non-Euclidean elasticity is concerned with elastic bodies that have one or more slender dimensions (such as leaves), and finding appropriate dimensionally-reduced models for them.

In this talk I will give an introduction to non-Euclidean elasticity, and then focus on thin bodies and present some recent results on the relations between their elastic behavior and their curvature.

Based on a joint work with Asaf Shachar.

MondayApr 16, 201814:30
Foundations of Computer Science ColloquiumRoom 155
Speaker:Iftach HaitnerTitle:Computational Two-Party Correlation Abstract:opens in new windowin html    pdfopens in new window

We prove a dichotomy theorem for two-party protocols, and show that for every poly-time two-party protocol with single-bit output, at least one of the following holds:

  • The protocol can be used to construct a key-agreement protocol.
  • For every constant ρ > 0 the parties' output is ρ-uncorrelated: let (X, Y, T) denote the parties' outputs and the protocol's transcript, respectively. A protocol is ρ-uncorrelated if there exists an efficient "decorrelator" algorithm Decor that, given a random transcript T, produces two numbers P_A and P_B, such that no efficient algorithm can distinguish (U_{P_A}, U_{P_B}, T) from (X, Y, T) with distinguishing advantage larger than ρ, where U_p denotes a biased coin with bias p.

Namely, if the protocol cannot be used to construct key-agreement, then its output distribution (X, Y, T) is trivial: it can be simulated non-interactively by the parties given public randomness (used to sample T). (The precise statement also has qualifiers of the form: "on infinitely many choices of the security parameter".)
We use the above characterization to prove that (α= 24ε2)-correct differentially private symmetric protocol for computing XOR, implies the existence of key-agreement protocol. The above dependency between α and &epsilon is tight since an θ( ε2)-correct "-differentially private protocol for computing XOR is known to exists unconditionally. It also improves, in the ( ε,α)dependency aspect, upon Goyal et al. [ICALP '16] who showed that, for some constant c > 0, a c-correct "-differentially private protocol for computing XOR implies oblivious transfer. Our result extends to a weaker notion of di erential privacy in which the privacy only requires to hold against external observer. Interestingly, the reductions used for proving the above results are non black box.

Joint work with Eran Omri, Kobbi Nissim, Ronen Shaltiel, and Jad Silbak.

ThursdayApr 12, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 290C
Speaker:David Ellis (Queen Mary U) Benjamin Fehrman (Max Planck Institute)Title:** Double Seminar **Abstract:opens in new windowin html    pdfopens in new window

Talk 1: David Ellis (Queen Mary U)
Title: The edge-isoperimetric problem for antipodally symmetric subsets of the discrete cube.
Abstract: A major open problem in geometry is to solve the isoperimetric problem for n-dimensional real projective space, i.e. to determine, for each real number V, the minimum possible size of the boundary of a (well-behaved) set of volume V, in n-dimensional real projective space. We study a discrete analogue of this question: namely, among all antipodally symmetric subsets of {0,1}^n of fixed size, which sets have minimal edge-boundary? We obtain a complete answer to this discrete question. This is joint work with Imre Leader (Cambridge).
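
To fix notation (standard definitions, added here for convenience): the edge-boundary of $S\subseteq\{0,1\}^n$ is the set of hypercube edges with exactly one endpoint in $S$, i.e. $\partial_e S=\{\{x,y\}\in E(Q_n): x\in S,\ y\notin S\}$, and $S$ is antipodally symmetric if $x\in S$ exactly when its coordinatewise complement $\bar x$ is in $S$.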

Talk 2:  Benjamin Fehrman (Max Planck Institute)
Title: Well-posedness of stochastic porous media equations with nonlinear, conservative noise.
Abstract: In this talk, which is based on joint work with Benjamin Gess, I will describe a pathwise well-posedness theory for stochastic porous media equations driven by nonlinear, conservative noise. Such equations arise in the theory of mean field games, as an approximation to the Dean-Kawasaki equation in fluctuating hydrodynamics, to describe the fluctuating hydrodynamics of a zero range process, and as a model for the evolution of a thin film in the regime of negligible surface tension. Our methods are loosely based on the theory of stochastic viscosity solutions, where the noise is removed by considering a class of test functions transported along underlying stochastic characteristics. We apply these ideas after passing to the equation's kinetic formulation, for which the noise enters linearly and can be inverted using the theory of rough paths.

ThursdayApr 12, 201812:15
Vision and AIRoom 1
Speaker:Yoav LevineTitle:Bridging Many-Body Quantum Physics and Deep Learning via Tensor NetworksAbstract:opens in new windowin html    pdfopens in new window

The harnessing of modern computational abilities for many-body wave-function representations is naturally placed as a prominent avenue in contemporary condensed matter physics. Specifically, highly expressive computational schemes that are able to efficiently represent the entanglement properties which characterize many-particle quantum systems are of interest. In the seemingly unrelated field of machine learning, deep network architectures have exhibited an unprecedented ability to tractably encompass the convoluted dependencies which characterize hard learning tasks such as image classification or speech recognition. However, theory is still lagging behind these rapid empirical advancements, and key questions regarding deep learning architecture design have no adequate answers. In the presented work, we establish a Tensor Network (TN) based common language between the two disciplines, which allows us to offer bidirectional contributions. By showing that many-body wave-functions are structurally equivalent to mappings of convolutional and recurrent arithmetic circuits, we construct their TN descriptions in the form of Tree and Matrix Product State TNs, and bring forth quantum entanglement measures as natural quantifiers of dependencies modeled by such networks. Accordingly, we propose a novel entanglement based deep learning design scheme that sheds light on the success of popular architectural choices made by deep learning practitioners, and suggests new practical prescriptions. Specifically, our analysis provides prescriptions regarding connectivity (pooling geometry) and parameter allocation (layer widths) in deep convolutional networks, and allows us to establish a first of its kind theoretical assertion for the exponential enhancement in long term memory brought forth by depth in recurrent networks. In the other direction, we identify that an inherent re-use of information in state-of-the-art deep learning architectures is a key trait that distinguishes them from TN based representations. Therefore, we suggest a new TN manifestation of information re-use, which enables TN constructs of powerful architectures such as deep recurrent networks and overlapping convolutional networks. This allows us to theoretically demonstrate that the entanglement scaling supported by state-of-the-art deep learning architectures can surpass that of commonly used expressive TNs in one dimension, and can support volume law entanglement scaling in two dimensions with an amount of parameters that is a square root of that required by Restricted Boltzmann Machines. We thus provide theoretical motivation to shift trending neural-network based wave-function representations closer to state-of-the-art deep learning architectures.
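
For orientation (the textbook form of a Matrix Product State, not specific to this work): an $N$-body wave-function is represented through local tensors as $\psi_{d_1\dots d_N}=\mathrm{Tr}\big(A^{[1]d_1}A^{[2]d_2}\cdots A^{[N]d_N}\big)$, where each $A^{[i]d}$ is a matrix whose size (the bond dimension) controls how much entanglement the representation can capture.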

WednesdayApr 11, 201811:15
Machine Learning and Statistics SeminarRoom 1
Speaker:Ethan Fetaya Title:Neural Relational Inference for Interacting Systems Abstract:opens in new windowin html    pdfopens in new window

Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. In this talk I will introduce our neural relational inference model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks.
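
For context, the generic variational auto-encoder objective (sketched here, with $z$ standing for the latent interaction graph) is the evidence lower bound $\mathcal{L}(\theta,\phi)=\mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big]-\mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p(z)\big)$, where the encoder $q_\phi$ and the decoder $p_\theta$ are implemented with graph neural networks.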

WednesdayApr 11, 201811:00
Mathematical Analysis and Applications SeminarRoom Pekeris
Speaker:Jean-Pierre Eckmann Title:Breathers as Metastable States for the Discrete NLS equationAbstract:opens in new windowin html    pdfopens in new windowNOTE SPECIAL TIME AND PLACE

I will describe work with C. E. Wayne in which we study how dissipation leads to a very slow drift in secular parameters for the non-linear Schroedinger equation. The nice thing about this model is that one can describe a relatively complex phenomenon in almost explicit terms, as a perturbation of an integrable Hamiltonian system.
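
For concreteness, one common normalization of the discrete nonlinear Schroedinger equation (possibly differing in signs and constants from the one used in this work) is $i\,\dot u_n=-(u_{n+1}+u_{n-1}-2u_n)-|u_n|^2u_n$, $n\in\mathbb{Z}$; the setting of the talk adds a small dissipative perturbation to such a system and tracks the resulting slow drift of the breather parameters.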

MondayApr 09, 201814:30
Foundations of Computer Science ColloquiumRoom 290C
Speaker:Shlomi Dolev Title:Overlay Security and Quantum Safe Public Key InfrastructureAbstract:opens in new windowin html    pdfopens in new window

The rapid advances in quantum computing, with Google, IBM, Microsoft, and Intel competing on the (exponentially growing) number of qubits in their quantum computers (some already commercial), require a reexamination of Internet security and of the public-key infrastructure. The talk will describe the concept of overlay security, together with blockchain-based directories for establishing symmetric keys. When combined with nested Lamport signatures and Merkle trees for digital signatures, the result is a complete, easily implementable architecture with information-theoretically secure communication and hash-based signatures.
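
As background for the hash-based signatures mentioned above, here is a minimal sketch of a Lamport one-time signature in Python (an illustration of the general primitive only, not the architecture of the talk; the use of SHA-256 and 256-bit message digests is an assumption):

import hashlib, os

def H(b):
    # hash function used for both key compression and message digests (SHA-256 assumed)
    return hashlib.sha256(b).digest()

def keygen(bits=256):
    # secret key: two random 32-byte strings per message-digest bit
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    # public key: the hashes of all secrets
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg, sk):
    d = H(msg)
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(len(sk))]
    # reveal one secret of each pair, selected by the corresponding digest bit
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(msg, sig, pk):
    d = H(msg)
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)  # each key pair must be used for a single message only

Merkle trees then let many such one-time public keys be authenticated under a single root, which is the role they play in the architecture described above.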

ThursdayMar 29, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 290C
Speaker:Mark Rudelson (UMich), Yinon Spinka (TAU)Title:*** Double Seminar ***Abstract:opens in new windowin html    pdfopens in new window

Mark Rudelson (UMich)
Title: Invertibility of the adjacency matrices of random graphs.
Abstract: Consider an adjacency matrix of a bipartite, directed, or undirected Erdos-Renyi random graph. If the average degree of a vertex is large enough, then such a matrix is invertible with high probability. As the average degree decreases, the probability of the matrix being singular increases, and for a sufficiently small average degree, it becomes singular with probability close to 1. We will discuss when this transition occurs, and what the main reason for the singularity of the adjacency matrix is.
This is a joint work with Anirban Basak.

Yinon Spinka (TAU)
Title: Finitary codings of Markov random fields
Abstract: Let X be a stationary Z^d-process. We say that X can be coded by an i.i.d. process if there is a (deterministic and translation-invariant) way to construct a realization of X from i.i.d. variables associated to the sites of Z^d. That is, if there is an i.i.d. process Y and a measurable map F from the underlying space of Y to that of X, which commutes with translations of Z^d and satisfies that F(Y)=X in distribution. Such a coding is called finitary if, in order to determine the value of X at a given site, one only needs to look at a finite (but random) region of Y.
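
In symbols (restating the definition just given): $Y=(Y_v)_{v\in\mathbb{Z}^d}$ is i.i.d., $F$ is measurable, $F\circ\theta_v=\theta_v\circ F$ for every translation $\theta_v$ of $\mathbb{Z}^d$, and $F(Y)$ has the same distribution as $X$; the coding is finitary if there is an almost surely finite random radius $R$ such that the value $F(Y)_0$ is determined by $(Y_v)_{|v|\le R}$.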
It is known that a phase transition (existence of multiple Gibbs states) is an obstruction for the existence of such a finitary coding. On the other hand, we show that when X is a Markov random field satisfying certain spatial mixing conditions, then X can be coded by an i.i.d. process in a finitary manner. Moreover, the coding radius has exponential tails, so that typically the value of X at a given site is determined by a small region of Y.
We give applications to models such as the Potts model, proper colorings and the hard-core model.

ThursdayMar 29, 201812:15
Vision and AIRoom 1
Speaker:Tomer MichaeliTitle:The Perception-Distortion TradeoffAbstract:opens in new windowin html    pdfopens in new window

Image restoration algorithms are typically evaluated by some distortion measure (e.g. PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify perceptual quality. In this work, we prove mathematically that distortion and perceptual quality are at odds with each other. Specifically, we study the optimal probability for correctly discriminating the outputs of an image restoration algorithm from real images. We show that as the mean distortion decreases, this probability must increase (indicating worse perceptual quality). As opposed to the common belief, this result holds true for any distortion measure, and is not only a problem of the PSNR or SSIM criteria. However, as we show experimentally, for some measures it is less severe (e.g. distance between VGG features). We also show that generative-adversarial-nets (GANs) provide a principled way to approach the perception-distortion bound. This constitutes theoretical support to their observed success in low-level vision tasks. Based on our analysis, we propose a new methodology for evaluating image restoration methods, and use it to perform an extensive comparison between recent super-resolution algorithms. Our study reveals which methods are currently closest to the theoretical perception-distortion bound.
* Joint work with Yochai Blau.
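
The tradeoff can be summarized by the perception-distortion function (notation recalled here for convenience): $P(D)=\min_{p_{\hat X\mid Y}} d\big(p_X,p_{\hat X}\big)$ subject to $\mathbb{E}\big[\Delta(X,\hat X)\big]\le D$, where $\Delta$ is the chosen distortion measure and $d$ a divergence between distributions; the result referred to above states that $P(D)$ is non-increasing and convex, so pushing the distortion $D$ down eventually forces the perceptual-quality term up.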

MondayMar 26, 201814:30
Foundations of Computer Science ColloquiumRoom 290C
Speaker:Gal Shahaf Title:Oblivious routing via random walksAbstract:opens in new windowin html    pdfopens in new window

We present novel oblivious routing algorithms for the splittable and the unsplittable multicommodity flow settings. Our algorithms for both models improve upon the state-of-the-art, in terms of running time and performance with respect to graphs that exhibit good expansion guarantees. As an intermediate step towards the unsplittable setting, we present a novel generalization of Valiant's classical load balancing scheme for packet-switched networks to arbitrary graphs, which is of independent interest. Our approach relies on diffusing traffic throughout the network and then regathering it to its destination, via iterative applications of the random walk operator. Consequently, the performance guarantees of our algorithms are derived from the convergence of the random walk operator to the stationary distribution and are expressed in terms of the spectral gap of the graph (which dominates the mixing time).
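
As a reminder of the quantitative link invoked here (standard spectral facts about reversible chains, not specific to this paper): if $W$ is the lazy random-walk operator of a connected graph with stationary distribution $\pi$ and spectral gap $\gamma$, then $\|W^t\mu-\pi\|_{2,\pi}\le(1-\gamma)^t\|\mu-\pi\|_{2,\pi}$ for every initial distribution $\mu$, so the walk mixes within $O\big(\gamma^{-1}\log(1/\pi_{\min})\big)$ steps.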

ThursdayMar 22, 201812:30
Special Guest SeminarRoom 108
Speaker:Mark BorodovskyTitle:Genome Decoding: Learning and Following Sequence PatternsAbstract:opens in new windowin html    pdfopens in new window

Genome-wide forces of mutation and selection create local and global sequence patterns that carry information on the functional role of genomic DNA. I will describe algorithmic approaches to genome analysis that decode sequence patterns and identify features of structural gene organization and gene expression (e.g. leaderless transcription). The algorithms make unsupervised inference of the structure of the underlying graphical model and of its parameters.
The subsequently developed software tools were, among other applications, used at NCBI (Bethesda, MD) to annotate and re-annotate 2,500+ fungal genomes and 130,000+ prokaryotic genomes. Yet another algorithm was employed by DOE JGI (Walnut Creek, CA) to annotate the largest collection of metagenomes. 

Speaker's short bio:
Mark Borodovsky, PhD, is a Regents' Professor at the joint Georgia Tech and Emory University Department of Biomedical Engineering and at the School of Computational Science and Engineering. Since 1990, when he arrived in Atlanta from Moscow, Dr. Borodovsky has also held joint faculty appointments at the School of Biology and the School of Mathematics. Dr. Borodovsky is a founder of the Georgia Tech Bioinformatics graduate program.
 

MondayMar 19, 201814:30
Foundations of Computer Science ColloquiumRoom 290C
Speaker:William Hoza Title:Typically-Correct Derandomization for Small Time and SpaceAbstract:opens in new windowin html    pdfopens in new window

Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S), but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. An immediate corollary is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also discuss additional complexity-theoretic applications of our technique.

SundayMar 18, 201812:15
Vision and AIRoom 1
Speaker:Amir RosenfeldTitle:Striving for Adaptive Representations in Neural NetworksAbstract:opens in new windowin html    pdfopens in new windowPlease note the Special Date!!
Some sub-problems in visual recognition already enjoy very impressive performance. However, the deep-learning solutions that underlie them require large training data, are brittle to domain shift, and incur a large cost in parameters for adapting to new domains - all in stark contrast to what is observed in human beings. I will talk about my recent work in this area, including: (1) a new dataset on which the strongest learned representations perform very poorly in mimicking human perceptual similarity; (2) recent results hinting that the parameters in neural networks are under-utilized, together with an alternative method for transfer learning without forgetting at a small parameter cost; and (3) recent work on conditional computation, inspired by the psychophysical phenomenon of visual priming in humans.

ThursdayMar 15, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 290C
Speaker:Elie Aidekon Title:Points of infinite multiplicity of a planar Brownian motionAbstract:opens in new windowin html    pdfopens in new window

Points of infinite multiplicity are particular points which the Brownian motion visits infinitely often. Following a work of Bass, Burdzy and Khoshnevisan, we construct and study a measure carried by these points. Joint work with Yueyun Hu and Zhan Shi.

ThursdayMar 15, 201812:15
Vision and AIRoom 1
Speaker:Aviad Levis Title:Three-Dimensional Scatterer TomographyAbstract:opens in new windowin html    pdfopens in new window

Scattering effects in images, including those related to haze, fog, and appearance of clouds, are fundamentally dictated by microphysical characteristics of the scatterers. We define and derive recovery of these characteristics, in a three-dimensional heterogeneous medium. Recovery is based on a novel tomography approach. Multiview (multi-angular) and multi-spectral data are linked to the underlying microphysics using 3D radiative transfer, accounting for multiple-scattering. Despite the nonlinearity of the tomography model, inversion is enabled using a few approximations that we describe. As a case study, we focus on passive remote sensing of the atmosphere, where scatterer retrieval can benefit modeling and forecasting of weather, climate, and pollution.

TuesdayMar 13, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 290C
Speaker:Giuliano GagliardiTitle:Smoothness of spherical varieties via toric degenerationsAbstract:opens in new windowin html    pdfopens in new window
Spherical varieties are a natural generalization of toric, symmetric, and flag varieties and form a rich class of algebraic varieties with an action of a reductive group. We combine the theory of toric degenerations of spherical varieties using representation theory with a recent result by Brown-McKernan-Svaldi-Zong, which characterises toric varieties using log pairs, in order to study the geometry of (horo-)spherical varieties. This is joint work in progress with Johannes Hofscheier.
ThursdayMar 08, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 290C
Speaker:Erwin Bolthausen Title:On the high temperature phase in mean-field spin glassesAbstract:opens in new windowin html    pdfopens in new window

We present a new way to derive the replica symmetric solution for the free energy in mean-field spin glasses. Only the Sherrington-Kirkpatrick case has been worked out in detail, but the method also works in other cases, for instance for the perceptron (work in progress), and probably also for the Hopfield net. The method is closely related to the TAP equations (for Thouless-Anderson-Palmer). It does not give any new results, presently, but it gives a new viewpoint, and it looks to be quite promising. As the TAP equations are widely discussed in the physics literature, also at low temperature, it is hoped that the method could be extended to this case, too. But this is open, and probably very difficult.

TuesdayFeb 27, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 290C
Speaker:Professor Amnon NeemanTitle:Approximability in derived categoriesAbstract:opens in new windowin html    pdfopens in new window

We will introduce the (new) notion of approximability in triangulated categories and show its power.

The brief summary is that the derived category of quasicoherent sheaves on a separated, quasicompact scheme is an approximable triangulated category.
As relatively easy corollaries one can: (1) prove an old conjecture of Bondal and Van den Bergh about strong generation in D^{perf}(X); (2) generalize an old theorem of Rouquier about strong generation in D^b_{coh}(X) (Rouquier proved the result only in equal characteristic; we can extend it to mixed characteristic); and (3) generalize a representability theorem of Bondal and Van den Bergh, from proper schemes of finite type over fields to proper schemes of finite type over arbitrary noetherian rings.

After stating these results and explaining what they mean, we will (time permitting) also mention structural theorems. It turns out that approximable triangulated categories have a fair bit of intrinsic, internal structure that comes for free.

TuesdayFeb 20, 201810:00
Special Guest SeminarRoom 1
Speaker:Prof. David SontagTitle:AI for Health Needs CausalityAbstract:opens in new windowin html    pdfopens in new window

Recent success stories of using machine learning for diagnosing skin cancer, diabetic retinopathy, pneumonia, and breast cancer may give the impression that artificial intelligence (AI) is on the cusp of radically changing all aspects of health care. However, many of the most important problems, such as predicting disease progression, personalizing treatment to the individual, drug discovery, and finding optimal treatment policies, all require a fundamentally different way of thinking. Specifically, these problems require a focus on *causality* rather than simply prediction. Motivated by these challenges, my lab has been developing several new approaches for causal inference from observational data. In this talk, I describe our recent work on the deep Markov model (Krishnan, Shalit, Sontag AAAI '17) and TARNet (Shalit, Johansson, Sontag, ICML '17).

TuesdayFeb 13, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Dmitry TuraevTitle:On Herman's positive metric entropy conjectureAbstract:opens in new windowin html    pdfopens in new window

We show that any area-preserving C^r-diffeomorphism of a two-dimensional surface displaying an elliptic fixed point can be C^r-perturbed to one exhibiting a chaotic island whose metric entropy is positive, for every 1 ≤ r ≤ ∞. This proves a conjecture of Herman stating that the identity map of the disk can be C^∞-perturbed to a conservative diffeomorphism with positive metric entropy. It also implies that the Chirikov standard map, for large and small parameter values, can be C^∞-approximated by conservative diffeomorphisms displaying positive metric entropy (a weak version of Sinai's positive metric entropy conjecture). Finally, this sheds light on a question of Herman about the density of C^r conservative diffeomorphisms displaying positive metric entropy: we show the existence of a dense set formed by conservative diffeomorphisms which either are weakly stable (so, conjecturally, uniformly hyperbolic) or display a chaotic island of positive metric entropy.
This is joint work with Pierre Berger.
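
For reference, the Chirikov standard map mentioned above is, in one common normalization, the area-preserving map of the torus $(x,y)\mapsto\big(x+y+K\sin x,\ y+K\sin x\big)\ (\mathrm{mod}\ 2\pi)$, where $K$ is the parameter.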

TuesdayFeb 13, 201811:15
Algebraic Geometry and Representation Theory SeminarRoom 290C
Speaker:Yasmine Fittouhi Title:The intricate relationship between the Mumford system and the Jacobians of singular hyperelliptic curvesAbstract:opens in new windowin html    pdfopens in new window

The generalized Jacobian Jac_m(C') of a smooth hyperelliptic curve C' associated with a modulus m is an algebraic group that can be described using line bundles on the curve C', or using a symmetric product of the curve C' equipped with a composition law. This second description of Jac_m(C') is directly related to the fibres of a Mumford system; to be precise, it is a subset of the compactified Jac_m(C') that is related to the fibres. This presentation will help demystify the relationship between these two mathematical objects.

MondayFeb 12, 201811:15
Mathematical Analysis and Applications SeminarRoom 1
Speaker:Alisha ZachariahTitle:Sub-Linear Channel Estimation and the Heisenberg group Abstract:opens in new windowin html    pdfopens in new window

Multiple wireless sensing tasks, e.g. radar detection for driver safety, involve estimating the "channel" or relationship between signal transmitted and received.

In this talk I will describe the standard mathematical model for the radar channel. In the case where the channel is sparse, I will demonstrate a channel estimation algorithm that is sub-linear in sampling and arithmetic complexity (and convince you of the need for such an algorithm).
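
For context, a common idealized model of a sparse radar channel (a standard delay-Doppler form, stated as an assumption rather than the exact model of the talk) writes the received signal as $r(t)=\sum_{j=1}^{k}\alpha_j\,e^{2\pi i\nu_j t}\,s(t-\tau_j)+\text{noise}$, where $s$ is the transmitted signal and each of the $k$ paths contributes an attenuation $\alpha_j$, a delay $\tau_j$ and a Doppler shift $\nu_j$; sparsity means $k$ is small.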

The main ingredients in the algorithm will be the use of an intrinsic algebraic structure known as the Heisenberg group, and recent developments in the theory of the sparse Fast Fourier Transform (sFFT, due to Indyk et al.).

The talk will assume minimal background knowledge.

 

WednesdayFeb 07, 201811:00
The Chaim Leib Pekeris Memorial Lecture
Speaker:Professor Christos PapadimitriouTitle:A Computer Scientist Thinks about the BrainAbstract:opens in new windowin html    pdfopens in new windowDolfi and Lola Ebner Auditorium

Understanding the ways in which the Brain gives rise to the Mind (memory, behavior, cognition, intelligence, language) is the most challenging problem facing science today. As the answer seems likely to be at least partly computational, computer scientists should work on this problem --- except there is no obvious place to start. I shall recount recent work (with W. Maass and S. Vempala) on a model for the formation and association of memories in humans, and reflect on why it may be a clue about language.

ThursdayFeb 01, 201812:15
Vision and AIRoom 1
Speaker:Justin Solomon Title:Geometric Optimization Algorithms for Variational ProblemsAbstract:opens in new windowin html    pdfopens in new window

Variational problems in geometry, fabrication, learning, and related applications lead to structured numerical optimization problems for which generic algorithms exhibit inefficient or unstable performance. Rather than viewing the particularities of these problems as barriers to scalability, in this talk we embrace them as added structure that can be leveraged to design large-scale and efficient techniques specialized to applications with geometrically structured variables. We explore this theme through the lens of case studies in surface parameterization, optimal transport, and multi-objective design.

MondayJan 29, 201814:30
Foundations of Computer Science ColloquiumRoom 290C
Speaker:Robert KrauthgamerTitle:Streaming Symmetric Norms via Measure ConcentrationAbstract:opens in new windowin html    pdfopens in new window

A long line of research studies the space complexity of estimating a norm l(x) in the data-stream model, i.e., when x is the frequency vector of an input stream consisting of insertions and deletions of items of n types. I will focus on norms l (in R^n) that are *symmetric*, meaning that l is invariant under sign-flips and coordinate-permutations, and show that the streaming space complexity is essentially determined by the measure-concentration characteristics of l. These characteristics are known to govern other phenomena in high-dimensional spaces, such as the critical dimension in Dvoretzky's Theorem. 
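
Concretely (the standard definition, spelled out for convenience): a norm $l$ on $\mathbb{R}^n$ is symmetric if $l(x)=l(\varepsilon_1 x_{\pi(1)},\dots,\varepsilon_n x_{\pi(n)})$ for every permutation $\pi$ and every choice of signs $\varepsilon_i\in\{\pm1\}$; besides the $l_p$ norms, examples include the top-$k$ norms (the sum of the $k$ largest absolute values of the coordinates).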

The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p<=2 and p>2. We also obtain bounds for other norms that are useful in applications. 

Joint work with Jaroslaw Blasiok, Vladimir Braverman, Stephen R. Chestnut, and Lin F. Yang.

ThursdayJan 25, 201812:15
Vision and AIRoom 1
Speaker:Hallel Bunis Title:Caging Polygonal Objects Using Minimalistic Three-Finger HandsAbstract:opens in new windowin html    pdfopens in new window

Multi-finger caging offers a robust approach to object grasping. To securely grasp an object, the fingers are first placed in caging regions surrounding a desired immobilizing grasp. This prevents the object from escaping the hand, and allows for great position uncertainty of the fingers relative to the object. The hand is then closed until the desired immobilizing grasp is reached.

While efficient computation of two-finger caging grasps for polygonal objects is well developed, the computation of three-finger caging grasps has remained a challenging open problem. We will discuss the caging of polygonal objects using three-finger hands that maintain similar triangle finger formations during the grasping process. While the configuration space of such hands is four-dimensional, their contact space, which represents all two- and three-finger contacts along the grasped object's boundary, forms a two-dimensional stratified manifold.

We will present a caging graph that can be constructed in the hand's relatively simple contact space. Starting from a desired immobilizing grasp of the object by a specific triangular finger formation, the caging graph is searched for the largest formation scale value that ensures a three-finger cage about the object. This value determines the caging regions, and if the formation scale is kept below this value, any finger placement within the caging regions will guarantee a robust object grasping.

TuesdayJan 23, 201816:15
Seminar in Geometry and TopologyRoom 290C
Speaker:Carl TiplerTitle:Quantization of extremal metrics and applicationsAbstract:opens in new windowin html    pdfopens in new window

An extremal metric, as defined by Calabi, is a canonical Kahler metric: it minimizes the curvature within a given Kahler class. According to the Yau-Tian-Donaldson conjecture, polarized Kahler manifolds admitting an extremal metric should correspond to stable manifolds in a Geometric Invariant Theory sense.
In this talk, we will explain that a projective extremal Kahler manifold is asymptotically relatively Chow stable. This fact was conjectured by Apostolov and Huang, and its proof relies on quantization techniques. We will explain various implications, such as uniqueness and splitting results for extremal metrics.
Joint work with Yuji Sano (Fukuoka University).

MondayJan 22, 201814:30
Foundations of Computer Science ColloquiumRoom 290C
Speaker:Ofer Grossman Title:Reproducibility in Randomized Log-spaceAbstract:opens in new windowin html    pdfopens in new window

A curious property of randomized log-space search algorithms is that their outputs are often longer than their workspace. One consequence is that there is no clear way to reproduce the same output when running the algorithm twice on the same input. It is not feasible to store the random bits (or the output) of the previous run in log-space, and using new random bits in another execution can result in a different output. This leads to the question: how can we reproduce the results of a randomized log space computation of a search problem?

We will give a precise definition of this notion of "reproducibility". Then we will show that every problem in search-RL has a randomized log-space algorithm where the output can be reproduced. Reproducibility can be thought of as an extension of pseudo-determinism. Indeed, for some problems in search-RL we show pseudo-deterministic algorithms whose running time significantly improve on known deterministic algorithms.
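
To make the comparison concrete (the usual definition of pseudo-determinism, included for convenience): a probabilistic algorithm $A$ for a search problem is pseudo-deterministic if there is a canonical solution function $f$ such that $\Pr_r[A(x;r)=f(x)]\ge 2/3$ for every input $x$; reproducibility, as described above, extends this requirement so that the same output can be re-obtained across independent executions.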

Joint work with Yang Liu.

ThursdayJan 18, 201813:30
Geometric Functional Analysis and Probability SeminarRoom 290C
Speaker:Oren Louidor (Technion) and Alexander Glazman (Tel Aviv).Title:Double SeminarAbstract:opens in new windowin html    pdfopens in new window

Oren Louidor (Technion)
Title: Dynamical freezing in a spin-glass with logarithmic correlations.
Abstract: We consider a continuous time random walk on the 2D torus, governed by the exponential of the discrete Gaussian free field acting as potential. This process can be viewed as Glauber dynamics for a spin-glass system with logarithmic correlations. Taking temperature to be below the freezing point, we then study this process both at pre-equilibrium and in-equilibrium time scales. In the former case, we show that the system exhibits aging and recover the arcsine law as asymptotics for a natural two point temporal correlation function. In the latter case, we show that the dynamics admits a functional scaling limit, with the limit given by a variant of Kolmogorov's K-process, driven by the limiting extremal process of the field, or alternatively, by a super-critical Liouville Brownian motion. Joint work with A. Cortines, J. Gold and A. Svejda.

Alexander Glazman (Tel Aviv)
Title: Level lines of a random Lipschitz function
Abstract: We consider the uniform distribution on Lipschitz functions on the triangular lattice, i.e. all integer-valued functions which differ by 0 or 1 on any two adjacent vertices. We show that with a positive probability such a function exhibits macroscopic level lines. Instead of working directly with Lipschitz functions we map this model to the loop $O(2)$ model with parameter $x=1$. The loop $O(n)$ model is a model for a random collection of non-intersecting loops on the hexagonal lattice, which is believed to be in the same universality class as the spin $O(n)$ model.
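
For orientation (the standard definition of the model, added for convenience): a configuration $\omega$ of disjoint loops on the hexagonal lattice is sampled with probability proportional to $x^{e(\omega)}\,n^{\ell(\omega)}$, where $e(\omega)$ is the number of edges covered by loops and $\ell(\omega)$ the number of loops; the case relevant here is $n=2$, $x=1$.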
A main tool in the proof is a positive association (FKG) property that was recently shown to hold when $n \ge 1$ and $0<x\le\frac{1}{\sqrt{n}}$. Though the case $n=2$, $x=1$ is not in the FKG regime, it turns out that when loops are assigned one of two colours independently, the marginal on loops of either of the colours does satisfy the FKG property. The colouring of loops allows us to view the loop $O(2)$ model with $x=1$ as a coupling of two percolation configurations. Studying each of them independently and using the XOR operation, we establish the existence of macroscopic loops, i.e. level lines of the Lipschitz function.