# Faculty Seminar

Randomness is an essential resource in computer science. Most applications need perfect, and sometimes private, randomness, yet it is not even clear that such a resource exists. It is well known that the tools of classical computer science do not allow us to create perfect randomness from a single weak source of randomness - a fact that initiated the study of special functions called randomness extractors.

In the first part of the talk, I will focus on the performance of randomness extractors in the presence of a quantum adversary and present a new model for two-source extractors, termed the quantum Markov model.

In the second part of the talk, I will describe a task called "randomness amplification and privatization", in which a single, public Santha-Vazirani source of randomness is transformed into uniform and private randomness - a cryptographic task that cannot be accomplished with the tools of classical computer science. I will then present a protocol that builds on the extractors discussed in the first part of the talk and explain the main ingredients of its security proof.

Zoom meeting: https://weizmann.zoom.us/j/99368552173
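As classical background for two-source extraction (my illustration, not the talk's quantum Markov construction), the inner-product extractor of Chor and Goldreich outputs a single nearly uniform bit from two independent weak n-bit sources:

```python
import random

def inner_product_extractor(x, y):
    """Chor-Goldreich two-source extractor: inner product mod 2 of two
    independent weak n-bit sources; the output bit is close to uniform
    whenever each source has enough min-entropy."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2

# Toy demonstration with two independent i.i.d.-biased sources
# (bias 0.8 and 0.2): the extracted bits come out nearly unbiased.
random.seed(0)
n = 64

def biased_source(p):
    return [1 if random.random() < p else 0 for _ in range(n)]

bits = [inner_product_extractor(biased_source(0.8), biased_source(0.2))
        for _ in range(2000)]
print(sum(bits) / len(bits))  # close to 0.5
```

Note that each call draws from two *independent* sources; feeding the extractor two samples of the same single source would not help, which is exactly the impossibility the abstract alludes to.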

Parameterized Analysis leads both to deeper understanding of intractability results and to practical solutions for many NP-hard problems. Informally speaking, Parameterized Analysis is a mathematical paradigm for answering the following fundamental question: What makes an NP-hard problem hard? Specifically, how do different parameters (formal quantifications of structure) of an NP-hard problem relate to its inherent difficulty? Can we exploit these relations algorithmically, and to what extent? Over the past three decades, Parameterized Analysis has grown into a mature field of outstandingly broad scope, with significant impact on computation from both theoretical and practical perspectives.

In this talk, I will first give a brief introduction to the field of Parameterized Analysis. Then, I will discuss some recent work in this field, where I mainly address (i) problems at the core of this field, rooted in Graph Minors and Graph Modification Problems, and (ii) applications of tools developed in (i) in particular, and of parameterized analysis in general, to Computational Geometry and Computational Social Choice. Additionally, I will zoom into a specific result, namely, the first single-exponential time parameterized algorithm for the Disjoint Paths problem on planar graphs. An efficient algorithm for the Disjoint Paths problem in general, and on "almost planar" graphs in particular, is a critical part in the quest for the establishment of an Efficient Graph Minors Theory. As the Graph Minors Theory is the origin of Parameterized Analysis and ubiquitous in the design of parameterized algorithms, making this theory efficient is a holy grail in the field.

Formal verification of infinite-state systems, and distributed systems in particular, is a long-standing research goal. I will describe a series of works that develop a methodology for verifying distributed algorithms and systems using decidable logics, employing decomposition, abstraction, and user interaction. This methodology is implemented in an open-source tool, and has resulted in the first mechanized proofs of several important distributed protocols. I will also describe a novel approach to the problem of invariant inference based on a newly formalized duality between reachability and mathematical induction. The duality leads to a primal-dual search algorithm, and a prototype implementation already handles challenging examples that other state-of-the-art techniques cannot handle. I will briefly describe several other works, including a new optimization technique for deep learning computations that achieves significant speedups relative to existing deep learning frameworks.

The Higher Criticism (HC) test is a useful tool for detecting the presence of a signal spread across a vast number of features, especially in the sparse setting, when only a few features are useful while the rest are pure noise. We adapt the HC test to the two-sample setting of detecting changes between two frequency tables. We apply this adaptation to authorship attribution challenges, where the goal is to identify the author of a document using other documents whose authorship is known. The method is simple yet performs well without handcrafting and tuning. Furthermore, as an inherent side effect, the HC calculation identifies a subset of discriminating words, which allow additional interpretation of the results. Our examples include authorship in the Federalist Papers, machine-generated texts, and the identity of the creator of Bitcoin.

We take two approaches to analyze the success of our method. First, we show that, in practice, the discriminating words identified by the test have low variance across documents belonging to a corpus of homogeneous authorship. We conclude that in testing a new document against the corpus of an author, HC is mostly affected by words characteristic of that author and is relatively unaffected by topic structure. Finally, we analyze the power of the test in discriminating two multinomial distributions under a sparse and weak perturbation model. We show that our test has maximal power in a wide range of the model parameters, even though these parameters are unknown to the user.
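The HC statistic itself is simple to compute. The sketch below is my own illustration of the standard (Donoho-Jin) form, not code from the talk: sort the per-feature p-values and return the maximal standardized deviation from uniformity over the smallest fraction of them.

```python
import numpy as np

def higher_criticism(pvals, gamma=0.5):
    """Donoho-Jin Higher Criticism statistic: the maximal standardized
    deviation of the ordered p-values from the uniform distribution,
    scanned over the smallest gamma-fraction. Large values indicate a
    sparse signal hidden among mostly-null features."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: max(1, int(gamma * n))].max()

# Toy: 1000 null p-values vs. the same list with 20 planted small ones.
# (In the two-sample setting of the talk, each p-value would come from
# a per-word comparison of the two frequency tables.)
rng = np.random.default_rng(0)
null = rng.uniform(size=1000)
signal = null.copy()
signal[:20] = rng.uniform(0, 1e-3, size=20)
print(higher_criticism(null), higher_criticism(signal))
```

The indices at which the scan attains its maximum mark the p-values, and hence the words, that drive the decision; this is the "discriminating words" side effect mentioned above.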

Modern machine learning algorithms have achieved remarkable performance in a myriad of applications, and are increasingly used to make impactful decisions in the hiring process, criminal sentencing, healthcare diagnostics and even to make new scientific discoveries. The use of data-driven algorithms in high-stakes applications is exciting yet alarming: these methods are extremely complex, often brittle, and notoriously hard to analyze and interpret. Naturally, concerns have been raised about the reliability, fairness, and reproducibility of the output of such algorithms. This talk introduces statistical tools that can be wrapped around any "black-box" algorithm to provide valid inferential results while taking advantage of their impressive performance. We present novel developments in conformal prediction and quantile regression, which rigorously guarantee the reliability of complex predictive models, and show how these methodologies can be used to treat individuals equitably. Next, we focus on reproducibility and introduce an operational selective inference tool that builds upon the knockoff framework and leverages recent progress in deep generative models. This methodology allows for reliable identification of a subset of important features that is likely to explain a phenomenon under study in a challenging setting where the data distribution is unknown, e.g., mutations that are truly linked to changes in drug resistance.
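As background, the basic split-conformal recipe underlying such wrappers can be sketched in a few lines. This is a generic illustration around an arbitrary black-box regressor (not the conformalized quantile regression of the talk): calibrate absolute residuals on held-out data, then pad every test prediction by their finite-sample-corrected quantile.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction around any black-box regressor:
    marginal coverage >= 1 - alpha holds under exchangeability,
    no matter how good or bad the model is."""
    residuals = np.abs(y_cal - model(X_cal))
    n = len(residuals)
    # finite-sample corrected quantile level
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level)
    preds = model(X_test)
    return preds - q, preds + q

# Toy check with a fixed "model" standing in for a trained black box.
rng = np.random.default_rng(1)
X = rng.normal(size=2000)
y = 2 * X + rng.normal(size=2000)
model = lambda x: 2 * x
lo, hi = split_conformal_interval(model, X[:1000], y[:1000], X[1000:])
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))
print(round(coverage, 3))  # close to 0.9
```

The guarantee is distribution-free but marginal; the quantile-regression variants discussed in the talk additionally adapt the interval width to the input.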

Numerous researchers recently applied empirical spectral analysis to the study of modern deep learning classifiers, observing spectral outliers and small but distinct bumps often seen beyond the edge of a "main bulk". This talk presents an important formal class/cross-class structure and shows how it lies at the origin of these visually striking patterns. The structure is shown to permeate the spectra of deepnet features, backpropagated errors, gradients, weights, Fisher Information matrix and Hessian, whether these are considered in the context of an individual layer or the concatenation of them all. The significance of the structure is illustrated by (i) demonstrating empirically that the feature class means separate gradually from the bulk as a function of depth and become increasingly more orthogonal, (ii) proposing a correction to KFAC, a well-known second-order optimization algorithm for training deepnets, and (iii) proving in the context of multinomial logistic regression that the ratio of outliers to bulk in the spectrum of the Fisher information matrix is predictive of misclassification.
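The bulk-plus-outliers phenomenon can be previewed in a toy model (my own illustration, not the talk's formal setup): feature vectors scattered around a few well-separated class means yield a second-moment spectrum with roughly one eigenvalue outlier per class standing off a random-matrix bulk.

```python
import numpy as np

# Toy model: n feature vectors around C well-separated class means.
# The empirical second-moment matrix then shows ~C eigenvalue
# outliers clearly separated from the noise bulk.
rng = np.random.default_rng(0)
n, d, C = 3000, 300, 5
means = 5.0 * rng.normal(size=(C, d)) / np.sqrt(d)   # class means
labels = rng.integers(0, C, size=n)
features = means[labels] + rng.normal(size=(n, d)) / np.sqrt(d)

# eigvalsh returns ascending eigenvalues; reverse to descending.
eigs = np.linalg.eigvalsh(features.T @ features / n)[::-1]
print(eigs[:7].round(3))  # top ~C values stand off the bulk
```

In this caricature, the gap between the C-th and (C+1)-st eigenvalue is the "class" part of the structure; the cross-class refinements and their consequences for KFAC are what the talk develops.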

The rediscovery of the Archimedes Palimpsest in 1998 was a unique event: one of the most important authors of antiquity was read afresh, with many transformations made in the established text. Twenty years later, what have we learned from the Archimedes Palimpsest? What changed in our picture of the ancient history of mathematics?

Reviel Netz is Pat Suppes Professor of Greek Mathematics and Astronomy at Stanford University. Besides editing the Archimedes Palimpsest, he has published many books and articles on the history of Greek mathematics and on other subjects. His most recent book, "Scale, Space and Canon in Ancient Literary Culture", is now forthcoming from Cambridge University Press.

Refreshments will be served at 11:00am.

Network line rates are constantly rising to support exploding amounts of data. This means that we have less time to process individual packets, despite a growing demand for better network telemetry. Moreover, CPU speeds are not rising at the same rate as we near the end of Moore's law, making it harder to rely on software computations. Programmable hardware switches are an emerging technology that enables flexible packet processing while optimizing for throughput and latency.

In this talk, I will introduce algorithms that leverage programmable switches for accelerating database operations and for improving network telemetry. Switches are orders of magnitude more efficient than traditional hardware accelerators, exist in the datapath, and are ideal for computation offloading. For telemetry, we will discuss how switches can probabilistically encode information across multiple packets to provide fine-grained network visibility with minimal overheads.
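The talk's specific encodings are not reproduced here; as generic background, a count-min sketch is a canonical example of the kind of compact, per-packet-updatable structure a switch data plane can maintain for approximate per-flow telemetry:

```python
import hashlib

class CountMinSketch:
    """Count-min sketch: a small fixed array of counters updated once
    per packet, supporting approximate per-flow counts with one-sided
    (overestimation-only) error - a good fit for switch pipelines,
    which have tiny memory but see every packet."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(key, row)] += count

    def query(self, key):
        # True count <= min over rows; collisions only inflate it.
        return min(self.table[row][self._index(key, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for _ in range(1000):
    cms.add("10.0.0.1->10.0.0.2")
cms.add("10.0.0.3->10.0.0.4", 7)
print(cms.query("10.0.0.1->10.0.0.2"))  # >= 1000
```

A hardware implementation would replace the cryptographic hash with cheap pairwise-independent hashes, but the memory/accuracy trade-off is the same.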

The past decade has witnessed the emergence of single-cell technologies that measure the expression level of genes at a single-cell resolution. These developments have revolutionized our understanding of the rich heterogeneity, structure, and dynamics of cellular populations, by probing the states of millions of cells, and their change under different conditions or over time. However, in standard experiments, information about the spatial context of cells, along with additional layers of information they encode about their location along dynamic processes (e.g. cell cycle or differentiation trajectories), is either lost or not explicitly accessible. This poses a fundamental problem for elucidating collective tissue function and mechanisms of cell-to-cell communication.

In this talk I will present computational approaches for addressing these challenges, by learning interpretable representations of structure, context and design principles for multicellular systems from single-cell information. I will first describe how the locations of cells in their tissue of origin and the resulting spatial gene expression can be probabilistically inferred from single-cell information by a generalized optimal-transport optimization framework that can flexibly incorporate prior biological assumptions or knowledge derived from experiments. Inference in this case is based on an organization principle for spatial gene expression, namely a structural correspondence between distances of cells in expression and physical space, which we hypothesized and supported for different tissues. We used this framework to spatially reconstruct diverse tissues and organisms, including the fly embryo, mammalian intestinal epithelium and cerebellum, and further inferred spatially informative genes. Since cells encode multiple layers of information, in addition to their spatial context, I will also discuss several approaches for the disentanglement of single-cell gene expression into distinct biological processes, based on ideas rooted in random matrix theory and manifold learning. I will finally discuss how these results can be generalized to reveal principles underlying self-organization of cells into multicellular structures, setting the foundation for the computationally directed design of cell-to-cell interactions optimized for specific tissue structure or function.
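As a minimal ingredient of such transport-based inference (an illustration only, not the generalized framework of the talk), an entropic optimal-transport coupling between "cells" and candidate "locations" can be computed with Sinkhorn iterations:

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, iters=500):
    """Entropic optimal transport via Sinkhorn iterations: returns a
    coupling P with marginals a (cells) and b (locations) that
    approximately minimizes <P, cost> - eps * entropy(P). P[i, j] is
    then read as the probability that cell i sits at location j."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]

# Toy: 4 "cells" vs 4 "locations" with a random cost matrix.
rng = np.random.default_rng(0)
cost = rng.uniform(size=(4, 4))
a = np.full(4, 0.25)
b = np.full(4, 0.25)
P = sinkhorn(cost, a, b)
print(P.sum(axis=1).round(3), P.sum(axis=0).round(3))
```

In the structural-correspondence setting described above, the cost would couple expression-space and physical-space distances rather than being given directly, which leads to a Gromov-Wasserstein-style generalization of this basic routine.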

The ideal boundary of a negatively curved manifold naturally carries two types of measures.

On the one hand, we have conditionals for equilibrium (Gibbs) states associated to Hölder potentials; these include the Patterson-Sullivan measure and the Liouville measure. On the other hand, we have stationary measures coming from random walks on the fundamental group.

We compare and contrast these two classes. First, we show that both of these measures can be associated to geodesic flow invariant measures on the unit tangent bundle, with respect to which closed geodesics satisfy different equidistribution properties. Second, we show that the absolute continuity between a harmonic measure and a Gibbs measure is equivalent to a relation between entropy, (generalized) drift and critical exponent, generalizing previous formulas of Guivarc'h, Ledrappier, and Blachere-Haissinsky-Mathieu. This shows that if the manifold (or more generally, a CAT(-1) quotient) is geometrically finite but not convex cocompact, stationary measures are always singular with respect to Gibbs measures.

A major technical tool is a generalization of a deviation inequality due to Ancona, which says that the so-called Green distance associated with the random walk is nearly additive along geodesics in the universal cover.

Part of this is based on joint work with Gerasimov-Potyagailo-Yang and part on joint work with Tiozzo.

Benjamini-Schramm convergence is a probabilistic notion useful in studying the asymptotic behavior of sequences of metric spaces. The goal of this talk is to discuss this notion and some of its applications from various perspectives, e.g. for groups, graphs, hyperbolic manifolds and locally symmetric spaces, emphasizing the distinction between the hyperbolic rank-one case and the rigid high-rank case. Understanding the "sofic" part of the Benjamini-Schramm space, i.e. all limit points of "finitary" objects, will play an important role. From the group-theoretic perspective, I will talk about sofic groups, i.e. groups which admit a probabilistic finitary approximation, as well as a companion notion of permutation stability. Several results and open problems will also be discussed.

The broad range of demands which next generation communications systems are required to meet, spanning from increased rates to efficient power consumption, cannot be satisfied by conventional architectures. These requirements give rise to a multitude of new challenges in modern communications and signal processing systems. In this talk we focus on two fundamental aspects of digital communications receivers - signal acquisition and symbol detection - discussing methods to improve their performance and reliability. For signal acquisition, we show how analog-to-digital converters can be designed in light of the overall system task, achieving performance that approaches the optimum while utilizing efficient bit-constrained samplers. Then, we discuss how recent advances in machine learning can be exploited for symbol detection, a task which is typically addressed using channel-model-based methods, such as the Viterbi algorithm and interference cancellation methods. We present a new framework for combining artificial intelligence and model-based algorithms, allowing the latter to be implemented in a data-driven manner. The resulting architectures are capable of approaching the established performance of model-based algorithms, while being invariant to the underlying statistical model, learning it from a small number of training samples.
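For concreteness, here is a minimal model-based Viterbi detector for a 2-tap intersymbol-interference channel (my illustration; the data-driven receivers discussed in the talk keep this trellis structure but learn the branch metrics from data instead of assuming the channel model):

```python
import numpy as np

def viterbi_detect(y, h0, h1):
    """Maximum-likelihood (Viterbi) detection of BPSK symbols over a
    known 2-tap ISI channel y[t] = h0*s[t] + h1*s[t-1] + noise.
    Trellis states track the previous symbol; branch metrics are
    squared prediction errors. Assumes a known start symbol s[-1]=+1."""
    symbols = (-1.0, 1.0)
    INF = float("inf")
    cost = {s: (0.0 if s == 1.0 else INF) for s in symbols}
    paths = {s: [] for s in symbols}
    for yt in y:
        new_cost = {s: INF for s in symbols}
        new_paths = {s: [] for s in symbols}
        for prev in symbols:
            for cur in symbols:
                c = cost[prev] + (yt - (h0 * cur + h1 * prev)) ** 2
                if c < new_cost[cur]:
                    new_cost[cur] = c
                    new_paths[cur] = paths[prev] + [cur]
        cost, paths = new_cost, new_paths
    return paths[min(symbols, key=lambda s: cost[s])]

# Toy run: 50 random BPSK symbols through the channel h = [1.0, 0.5].
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=50)
full = np.concatenate(([1.0], s))            # prepend known s[-1]
y = 1.0 * full[1:] + 0.5 * full[:-1] + 0.1 * rng.normal(size=50)
est = np.array(viterbi_detect(y, 1.0, 0.5))
print(np.mean(est == s))                     # fraction detected correctly
```

Replacing the squared-error branch metric with a small learned network is one way to make this detector data-driven while preserving the dynamic-programming structure.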

Automated agents and humans increasingly interact and collaborate: at home, at the workplace, on the road, and in many other everyday settings. In all these settings, effectively recognizing what users try to achieve, providing relevant assistance (or, depending on the application, taking relevant preventive measures), and supporting an effective collaboration in the system are essential tasks. All these tasks can be enhanced via efficient system redesign, and often even subtle changes can yield great benefits. However, since these systems are typically complex, hand-crafting good design solutions is hard.

Utility Maximizing Design (UMD) addresses this challenge by automating the design process. It does so by offering informed search strategies to automatically and efficiently find optimal design solutions for maximizing a variety of targeted objectives. One example is Goal Recognition Design (GRD), which seeks a redesign of an environment that minimizes the maximal progress an agent can make before its goal is revealed. A second is Equi-Reward Utility Maximizing Design (ER-UMD), which seeks to maximize the performance of a planning agent in a stochastic environment. The third, Reinforcement Learning Design (RLD), parallels ER-UMD, but considers an environment with a reinforcement learning agent.

Among the different ways that may be available to change a setting, the talk will focus on information shaping, which consists of selectively revealing information to a partially informed agent in order to maximize its performance. As an example application, I will demonstrate how information shaping can be applied to enhance the performance of a robot that is navigating in an unfamiliar environment. The talk will conclude with a discussion of future extensions and various applications related to the agenda of Utility Maximizing Design, including human-robot collaboration, intrusion detection, and assisted cognition.