Monday, Sep 18, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Igor Shinkar
Title: Worst-Case to Average-Case Reductions via Additive Combinatorics
Abstract:
In this talk I will present a framework for designing worst-case to average-case reductions. Focusing on the problem of Matrix Multiplication, I will describe a transformation that takes any weak algorithm that is only correct on a small fraction of the inputs, and converts it into an algorithm that is correct on all inputs, while paying only a small overhead in running time. The talk is based on joint work with Vahid Asadi, Sasha Golovnev, Tom Gur, and Sathyawageeswar Subramanian.
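To make the reduction template concrete, here is a minimal sketch (in Python, over F_p) of the classical random self-reduction for matrix products, a simpler relative of the framework above: four calls to a weak multiplier on randomized inputs, plus a majority vote, correct an algorithm that errs on a small fraction of inputs. `weak_mul` and all parameter names are illustrative, not from the paper.

```python
import numpy as np
from collections import Counter

def self_correct_matmul(weak_mul, A, B, p, trials=7):
    """Classical random self-reduction for matrix products over F_p.

    Uses the identity AB = (A+R)(B+S) - (A+R)S - R(B+S) + RS for uniformly
    random R, S: each of the four queries to weak_mul is (marginally)
    uniformly distributed, so if weak_mul errs only on a small fraction of
    inputs, a majority vote over independent trials recovers AB w.h.p.
    """
    votes = Counter()
    for _ in range(trials):
        R = np.random.randint(0, p, size=A.shape)
        S = np.random.randint(0, p, size=B.shape)
        guess = (weak_mul((A + R) % p, (B + S) % p)
                 - weak_mul((A + R) % p, S)
                 - weak_mul(R, (B + S) % p)
                 + weak_mul(R, S)) % p
        votes[guess.tobytes()] += 1
    best, _ = votes.most_common(1)[0]
    return np.frombuffer(best, dtype=guess.dtype).reshape(A.shape[0], B.shape[1])
```

Here the overhead is four weak-multiplier calls per trial, matching the "small overhead in running time" phenomenon the talk describes (the actual framework handles far weaker algorithms via additive combinatorics).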
Tuesday, Sep 05, 2023, 11:15
Foundations of Computer Science Seminar, Room 1
Speaker: Avi Wigderson
Title: Imitation Games
Abstract:
One of Alan Turing's most influential papers is his 1950 Computing machinery and intelligence, in which he introduces the famous "Turing test" for probing the nature of intelligence by evaluating the abilities of machines to behave as humans. In this test, which he calls the "Imitation Game," a (human) referee has to distinguish between two (remote and separate) entities, a human and a computer, only by observing answers to a sequence of arbitrary questions to each entity.

This lecture will exposit, through examples from a surprisingly diverse array of settings, the remarkable power of this basic idea to understand many other concepts. I will discuss variations of the Imitation Game in which we change the nature of the referee, and of the objects to be distinguished, to yield different analogs of the Turing test. These new Imitation Games lead to novel, precise, and operative definitions of classical notions, including secret, knowledge, privacy, randomness, proof, fairness, and others.

These definitions have in turn led to numerous results, applications, and understanding. Some, among many consequences of this fundamental paradigm, are the foundations of cryptography, the surprising discoveries on the power and limits of randomness, the recent influential notion of differential privacy, and breakthrough results on patterns in the prime numbers and navigation in networks. Central to each of these settings are computational and information theoretic limitations placed on the referee in the relevant Imitation Game.

This lecture will survey some of these developments. It assumes no specific background knowledge.
Monday, Jul 03, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Alex Lombardi
Title: The latest on SNARGs
Abstract:
Succinct non-interactive arguments (SNARGs) are a powerful cryptographic primitive whose feasibility is still poorly understood. However, over the last few years, a successful paradigm for building SNARGs from standard cryptographic assumptions has emerged:
- First, build a non-interactive *batch* argument system (BARG) for NP.
- Then, use BARGs for NP to build SNARGs for various NP languages of interest.
In this talk, we will discuss recent progress on constructing SNARGs within this paradigm. Specifically, we study:
1) Under what computational assumptions can we build BARGs for NP?
2) For which NP languages can we build SNARGs within this paradigm?
This talk is based on joint works with Zvika Brakerski, Maya Brodsky, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan, and Daniel Wichs.
Monday, Jun 26, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Eylon Yogev
Title: IOPs with Inverse Polynomial Soundness Error
Abstract:
We show that every language in NP has an Interactive Oracle Proof (IOP) with inverse polynomial soundness error and small query complexity. This achieves parameters that surpass all previously known PCPs and IOPs. Specifically, we construct an IOP with perfect completeness, soundness error 1/n, round complexity O(loglog n), proof length poly(n) over an alphabet of size O(n), and query complexity O(loglog n). This is a step forward in the quest to establish the sliding-scale conjecture for IOPs (which would additionally require query complexity O(1)).

Our main technical contribution is a high-soundness, small-query proximity test for the Reed--Solomon code. We construct an IOP of proximity for Reed--Solomon codes, over a field F with evaluation domain L and degree d, with perfect completeness, soundness error (roughly) max{1-\delta, O(\rho^{1/4})} for \delta-far functions, round complexity O(loglog d), proof length O(|L|/\rho) over F, and query complexity O(loglog d); here \rho = (d+1)/|L| is the code rate. En route, we obtain a new high-soundness proximity test for bivariate Reed--Muller codes. The IOP for NP is then obtained via a high-soundness reduction from NP to Reed--Solomon proximity testing with rate \rho = 1/poly(n) and distance \delta = 1-1/poly(n) (and applying our proximity test).

Our constructions are direct and efficient, and hold the potential for practical realizations that would improve the state of the art in real-world applications of IOPs.

Joint work with Gal Arnon and Alessandro Chiesa.
Monday, Jun 19, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Juan Perdomo
Title: The Value of Individual Risk Prediction in Wisconsin Public Schools
Abstract:
Early warning systems are a class of risk prediction tools that have recently become part of the de facto approach towards improving high school graduation rates in the United States. These systems aim to help schools efficiently target resources to students by predicting which individuals are least likely to graduate, and hence need the most help. In this talk, I will present the results of a collaboration with the Wisconsin Department of Public Instruction in which we conducted the first large-scale evaluation of the long-term impacts of early warning systems on high school graduation rates. Using a decade's worth of data and models, we find that risk assessments made by the system have been highly accurate at predicting student dropout, yet ineffective in improving outcomes. We will see how both of these findings can be simultaneously explained by the influence of structural, social factors. We will close with a discussion of the broader policy implications of our work.
Monday, Jun 12, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Shay Moran
Title: A Combinatorial Characterization of Minimax in 0/1 Games
Abstract:
We will discuss a generalization of the celebrated Minimax Theorem (von Neumann, 1928) for binary zero-sum games. A simple game which fails to satisfy Minimax is Ephraim Kishon's "Jewish Poker" (see [1,2] below). In this game, each player picks a number and the larger number wins. The payoff matrix of this game is *infinite triangular*. We show this is the only obstruction: if a game does not contain triangular submatrices of unbounded sizes, then the Minimax Theorem holds. This generalizes von Neumann's Minimax Theorem by removing requirements of finiteness or compactness. [1] (English) [2] (Hebrew, third story)
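A two-line sketch makes the obstruction concrete: in the pick-the-larger-number game, whichever player commits to a finitely supported mixed strategy first can be beaten outright, so the sup-inf and inf-sup values differ and no minimax value exists. The function name and strategy encoding here are illustrative.

```python
def best_response(strategy):
    # strategy: dict mapping a number to the probability of playing it.
    # In "the larger number wins", naming anything above the whole support
    # beats the mixed strategy with probability 1; hence
    # sup_x inf_y (win prob) = 0 while inf_y sup_x = 1, so no value exists.
    return max(n for n, pr in strategy.items() if pr > 0) + 1
```

For instance, against the mixed strategy {1: 0.5, 10: 0.3, 1000: 0.2} the best response is 1001, which wins against every supported number.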
Monday, May 29, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Yuval Rabani
Title: The Randomized k-Server Conjecture is False!
Abstract:

We prove a few new lower bounds on the randomized competitive ratio for the k-server problem and other related problems, resolving some long-standing conjectures. In particular, for metrical task systems (MTS) we asymptotically settle the competitive ratio and obtain the first improvement to an existential lower bound since the introduction of the model 35 years ago (in 1987).
More concretely, we show:
1. There exist (k+1)-point metric spaces in which the randomized competitive ratio for the k-server problem is Omega(log^2 k). This refutes the folklore conjecture (which is known to hold in some families of metrics) that in all metric spaces with at least k+1 points, the competitive ratio is Theta(log k).
2. Consequently, there exist n-point metric spaces in which the randomized competitive ratio for MTS is Omega(log^2 n). This matches the upper bound that holds for all metrics. The previously best existential lower bound was Omega(log n) (which was known to be tight for some families of metrics).
3. For all k < n, for *all* n-point metric spaces the randomized k-server competitive ratio is at least Omega(log k), and consequently the randomized MTS competitive ratio is at least Omega(log n). These universal lower bounds are asymptotically tight. The previous bounds were Omega(log k / loglog k) and Omega(log n / loglog n), respectively.
4. The randomized competitive ratio for the w-set metrical service systems problem, and its equivalent width-w layered graph traversal problem, is Omega(w^2). This slightly improves the previous lower bound and matches the recently discovered upper bound.
5. Our results imply improved lower bounds for other problems like k-taxi, distributed paging and metric allocation.
These lower bounds share a common thread, and other than the third bound, also a common construction.
Joint work with Sebastien Bubeck and Christian Coester.


Monday, May 15, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Tomasz Kociumaka
Title: Bounded Weighted Edit Distance
Abstract:

The edit distance, also known as Levenshtein distance, is the minimum number of character insertions, deletions, and substitutions needed to transform one string into another. The textbook dynamic programming algorithm computes the edit distance of two length-n strings in O(n²) time, which is optimal up to sub-polynomial factors and conditioned on the Strong Exponential Time Hypothesis (SETH). In a bounded setting, where the running time is parameterized by the edit distance k, the celebrated algorithm by Landau and Vishkin (JCSS'88) achieves a running time of O(n+k²), which is optimal as a function of n and k (again, up to sub-polynomial factors and conditioned on SETH).

While the theory community thoroughly studied the Levenshtein distance, most practical applications rely on a more general weighted edit distance, where each edit has a weight depending on its type and the involved characters from the alphabet Σ. This is formalized through a weight function w : (Σ ∪ {ε}) × (Σ ∪ {ε}) → [0, ∞), normalized so that w(a,a) = 0 for a ∈ Σ ∪ {ε} and w(a,b) ≥ 1 for a,b ∈ Σ ∪ {ε} with a ≠ b. The classic O(n²)-time algorithm supports this setting seamlessly, but for many decades only a straightforward O(nk)-time solution was known for the bounded version of the weighted edit distance problem.

In this talk, I will present an O(n+k⁵)-time algorithm (joint work with Das, Gilbert, Hajiaghayi, and Saha; STOC'23; arXiv:2302.04229) and a very recent Õ(n+√{nk³})-time algorithm (joint work with Cassis and Wellnitz). I will also sketch a lower bound that proves the optimality of the latter algorithm for √n ≤ k ≤ n (up to sub-polynomial factors and conditioned on the All-Pairs Shortest Paths Hypothesis).
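As a baseline, the classic quadratic-time dynamic program mentioned above extends verbatim to arbitrary weights; a sketch, with an illustrative encoding of w in which None plays the role of ε:

```python
def weighted_edit_distance(x, y, w):
    # w(a, b): substitution weight; w(a, None): deletion; w(None, b): insertion
    n, m = len(x), len(y)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + w(x[i - 1], None)
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + w(None, y[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + w(x[i - 1], None),        # delete x[i-1]
                D[i][j - 1] + w(None, y[j - 1]),        # insert y[j-1]
                D[i - 1][j - 1]
                + (0 if x[i - 1] == y[j - 1] else w(x[i - 1], y[j - 1])),
            )
    return D[n][m]
```

With unit weights this is exactly the Levenshtein distance; the bounded algorithms in the talk compute the same quantity when it is at most k, only much faster.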

Monday, May 08, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Chinmay Nirkhe
Title: A distribution testing oracle separation between QMA and QCMA
Abstract:
It is a long-standing open question in quantum complexity theory whether the definition of non-deterministic quantum computation requires quantum witnesses (QMA) or if classical witnesses suffice (QCMA). We make progress on this question by constructing a randomized classical oracle separating the respective computational complexity classes. Previous separations [Aaronson-Kuperberg (CCC'07), Fefferman-Kimmel (MFCS'18)] required a quantum unitary oracle. The separating problem is deciding whether a distribution supported on regular undirected graphs either consists of multiple connected components (yes instances) or consists of one expanding connected component (no instances), where the graph is given in an adjacency-list format by the oracle. Therefore, the oracle is a distribution over n-bit boolean functions.
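To make the promise problem concrete: deciding it amounts to distinguishing "several components" from "one expander component" given only adjacency-list access. A full traversal, as sketched below, of course reads far more of the oracle than a query-efficient verifier could; the point of the separation is what QCMA verifiers with classical witnesses seemingly cannot certify with few queries. The encoding here (an `adj` callback standing in for the adjacency-list oracle) is illustrative.

```python
def count_components(n, adj):
    # adj(v) -> list of neighbours of v (the "adjacency-list oracle").
    # Exhaustive DFS: reads the whole oracle, unlike a query-bounded verifier.
    seen, comps = set(), 0
    for s in range(n):
        if s in seen:
            continue
        comps += 1
        stack, _ = [s], seen.add(s)
        while stack:
            u = stack.pop()
            for v in adj(u):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return comps
```

A yes instance would make `count_components` return at least 2, a no instance exactly 1 (with expansion on top, which a traversal does not even need to check).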
Monday, Mar 13, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Greg Bodwin
Title: Recent Progress on Fault Tolerant Spanners
Abstract:
Given a large input graph, a k-spanner is a sparse subgraph that preserves the shortest path distances of the original within an approximation factor of k. When this distance approximation is robust to f failing nodes or edges, the spanner is f-fault tolerant. Fault tolerant spanners and their relatives arise commonly in networking and distributed computing. There has been a recent flurry of progress on fault tolerant spanners and their relatives, including faster construction algorithms and better tradeoffs between spanner size, error, and level of fault tolerance. We will survey this progress, spanning a sequence of 7 papers over the last 5 years. We will explain the new techniques that have enabled progress, the problems that have been solved, and the problems that remain open.
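For orientation, the classical greedy construction of (non-fault-tolerant) k-spanners, which many fault-tolerant constructions build on, fits in a few lines; this is a sketch of the textbook greedy algorithm (Althöfer et al.), not of any specific paper in the survey:

```python
import heapq

def dijkstra(adj, s):
    # standard Dijkstra over a dict-of-lists weighted graph
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def greedy_spanner(vertices, edges, k):
    # Scan edges in nondecreasing weight order; keep (u, v, w) only if the
    # current spanner distance from u to v exceeds k * w. The kept edges
    # form a k-spanner: every skipped edge is already k-approximated.
    adj = {v: [] for v in vertices}
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dijkstra(adj, u).get(v, float('inf')) > k * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((u, v, w))
    return spanner
```

Fault-tolerant variants follow the same template but keep an edge unless it is k-approximated even after every choice of f faults, which is where the size/error/fault-tolerance tradeoffs in the talk come in.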
Tuesday, Feb 21, 2023, 14:15
Foundations of Computer Science Seminar, Room 1
Speaker: Mikkel Thorup
Title: Reconstructing the Tree of Life (Fitting Distances by Tree Metrics)
Abstract:

We consider the numerical taxonomy problem of fitting an SxS distance matrix D with a tree metric T. Here T is a weighted tree spanning S where the path lengths in T induce a metric on S. If there is a tree metric matching D exactly, then it is easily found. If there is no exact match, then for some k, we want to minimize the L_k norm of the errors, that is, pick T so as to minimize ||D-T||_k = (Sum_{i,j in S} |D(i,j)-T(i,j)|^k)^{1/k}.
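A small sketch of this objective, assuming the candidate tree is given as a weighted adjacency map (an illustrative encoding): compute T by shortest paths in the tree (paths in a tree are unique, so Dijkstra just reads them off) and evaluate the L_k error directly.

```python
import heapq

def tree_distances(adj, leaves):
    # Dijkstra from each leaf; in a tree this recovers the unique path lengths.
    out = {}
    for s in leaves:
        dist, pq = {s: 0.0}, [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        out[s] = dist
    return out

def lk_error(D, adj, leaves, k):
    # ||D - T||_k = ( sum over ordered pairs i != j of |D(i,j)-T(i,j)|^k )^(1/k)
    T = tree_distances(adj, leaves)
    return sum(abs(D[i][j] - T[i][j]) ** k
               for i in leaves for j in leaves if i != j) ** (1.0 / k)
```

The algorithmic question surveyed in the talk is of course the inverse one: given D, choose the tree (or ultrametric) minimizing this quantity.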

This problem was raised in biology in the 1960s for k = 1,2. The biological interpretation is that T represents a possible evolution behind the species in S matching some measured distances in D. Sometimes, it is required that T is an ultrametric, meaning that all species are at the same distance from the root.

An evolutionary tree induces a hierarchical classification of species and this is not just tied to biology. Medicine, ecology and linguistics are just some of the fields where this concept appears, and it is even an integral part of machine learning and data science. Fundamentally, if we can approximate distances with a tree, then they are much easier to reason about: many questions that are NP-hard for general metrics can be answered in linear time on tree metrics. In fact, humans have appreciated hierarchical classifications at least since Plato and Aristotle (350 BC).

The numerical taxonomy problem is important in practice and many heuristics have been proposed. In this talk we will review the basic algorithmic theory, results, and techniques for the problem, including the most recent result from FOCS'21 [Vincent Cohen-Addad et al., 2021]. The results paint a varied landscape with big differences between different moments, and with some very nice open problems remaining.

- At STOC'93, Farach, Kannan, and Warnow [Martin Farach et al., 1995] proved that under L_\infty, we can find the optimal ultrametric. Almost all other variants of the problem are APX-hard.

- At SODA'96, Agarwala, Bafna, Farach, Paterson, and Thorup [Richa Agarwala et al., 1999] showed that for any norm L_k, k >= 1, if the best ultrametric can be \alpha-approximated, then the best tree metric can be 3\alpha-approximated. In particular, this implied a 3-approximation for tree metrics under L_\infty.

- At FOCS'05, Ailon and Charikar [Nir Ailon and Moses Charikar, 2011] showed that for any L_k, k >= 1, we can get an approximation factor of O(((log n)(log log n))^{1/k}) for both tree and ultrametrics. Their paper was focused on the L_1 norm, and they wrote "Determining whether an O(1) approximation can be obtained is a fascinating question".

- At FOCS'21, Cohen-Addad, Das, Kipouridis, Parotsidis, and Thorup [Vincent Cohen-Addad et al., 2021] showed that indeed a constant factor is possible for L_1 for both tree and ultrametrics. This uses the special structure of L_1 in relation to hierarchies.

- The status of L_k is wide open for 1 < k < \infty. All we know is that the approximation factor is between Omega(1) and O((log n)(log log n)).

Monday, Jan 30, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Seth Pettie
Title: Algorithms Should Have Bullshit Detectors! (or Polynomial Time Byzantine Agreement with Optimal Resilience)
Abstract:

One thing that distinguishes (theoretical) computer science from other scientific disciplines is its full-throated support of a fundamentally adversarial view of the universe.  Malicious adversaries, with unbounded computational advantages, attempt to foil our algorithms at every turn and destroy their quantitative guarantees.  However, there is one strange exception to this world view and it is this: the algorithm must accept its input as sacrosanct, and may never simply reject its input as illegitimate.  But what if some inputs really are illegitimate?  Is building a "bullshit detector" for algorithms a good idea?

To illustrate the power of the Bullshit Detection worldview, we give the first polynomial-time protocol for Byzantine Agreement that is resilient to f<n/3 corruptions against an omniscient, computationally unbounded adversary in an asynchronous message-passing model. (This is the first improvement to Ben-Or and Bracha's exponential time protocols from the 1980s that are resilient to f<n/3 corruptions.) The key problem is to design a coin-flipping protocol in which corrupted parties (who choose the outcomes of their coins maliciously) are eventually detected via statistical tests.
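The "statistical test" flavor of the detection step can be illustrated with a toy Chernoff-style tail test on a party's published ±1 coin flips; this sketch illustrates detection by deviation bounds only, and is not the protocol's actual test:

```python
import math

def suspicious(flips, alpha=1e-6):
    # Flag a party whose +/-1 coin stream drifts further than a
    # Chernoff-Hoeffding bound allows: |sum| > sqrt(2 n ln(2/alpha)).
    # An honest fair coin is flagged with probability at most ~alpha,
    # while a party steering its coins accumulates linear drift.
    n = len(flips)
    threshold = math.sqrt(2 * n * math.log(2 / alpha))
    return abs(sum(flips)) > threshold
```

The hard part in the actual protocol is that the adversary sees everything and adapts, so the real tests must catch subtler steering than raw bias; this sketch only conveys the shape of the idea.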
We will also discuss other algorithmic contexts in which bullshit detection might be useful.

Monday, Jan 16, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Omri Ben-Eliezer
Title: Robust Streaming: Where are we headed?
Abstract:
The classical literature on streaming algorithms has mainly studied two types of algorithms: randomized and deterministic. However, almost all classical analyses of randomized streaming algorithms assume that the stream is "fixed in advance", making them unfit for use in adaptive settings where future stream updates depend on previous outputs of the algorithm. Meanwhile, deterministic algorithms are guaranteed to work in adaptive settings, but many important problems in the streaming literature do not admit efficient deterministic algorithms.

This raises the question of whether one can enjoy both worlds: do there exist robust randomized streaming algorithms, which are space-efficient and provably work in adaptive settings? The past couple of years have seen a surge of work on this topic, starting from a generic robustification framework we developed, which turns "standard" randomized algorithms into robust ones. As it turns out, the answer to the above question is largely positive for insertion-only streams, but still unknown in general turnstile (insertion-deletion) streams.

I will present our framework and mention several lines of follow-up work on this topic, including improved frameworks, results for specific algorithms, and connections to a wide range of topics within computer science, including differential privacy, cryptography, learning theory and others. Focusing on classical problems such as distinct elements counting and norm estimation, I will highlight what we know in the turnstile setting and present several directions for future work.

Based in part on joint works with Rajesh Jayaram, David Woodruff, and Eylon Yogev, and with Talya Eden and Krzysztof Onak. (I will also briefly mention related joint works with Noga Alon, Yuval Dagan, Shay Moran, Moni Naor, and Eylon Yogev.)
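The generic robustification idea ("sketch switching") can be sketched in a few lines: run several independent copies of a standard sketch, expose only a frozen estimate, and move on to a fresh copy whenever the tracked value grows by a (1+eps) factor, so the adversary only ever sees outputs of copies that are then retired. The class names and the exact switching rule below are illustrative simplifications of the framework.

```python
class RobustSketch:
    # Sketch switching: k independent copies of a randomized sketch for a
    # monotone quantity (e.g. number of distinct elements). The exposed
    # estimate is frozen; when the tracked copy's value exceeds the frozen
    # value by a (1+eps) factor, reveal it and advance to a fresh copy.
    def __init__(self, make_sketch, k, eps):
        self.copies = [make_sketch() for _ in range(k)]
        self.i = 0
        self.eps = eps
        self.frozen = 0

    def update(self, x):
        for s in self.copies:
            s.update(x)
        cur = self.copies[self.i].estimate()
        if cur > (1 + self.eps) * max(self.frozen, 1):
            self.frozen = cur
            self.i = min(self.i + 1, len(self.copies) - 1)

    def estimate(self):
        return self.frozen
```

Since the output changes only O(log_{1+eps}(final value)) times, a copy budget of that size suffices, which is where the space overhead of robustification comes from.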
Sunday, Jan 15, 2023, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Fernando Granha Jeronimo
Title: Expansion, Codes and Optimization at the Frontiers of TCS
***PLEASE NOTE THE UNUSUAL DAY AND HOUR***
Abstract:
Expansion is a manifestation of (pseudo)randomness, with expander graphs and their higher-dimensional analogues being concrete examples. Coding theory is the study of properties and algorithms of codes, which are, in particular, important for protecting data and communication against errors. Optimization, especially via convex programs, underlies much of our understanding of efficient computation. Not only are these fundamental areas in their own right, but they also enjoy great synergy, with mutually nourishing interactions.

This synergy has been instrumental in advancing our understanding at the frontiers of almost optimal codes and expansion. Expansion is the key ingredient in the breakthrough construction of explicit almost optimal binary codes of Ta-Shma'17 (approximately matching the random parameters known since the 1950s!). Using optimization and exploiting expansion properties, we give efficient decoding algorithms for these codes. Our first decoder is based on the Sum-of-Squares SDP hierarchy and builds on our earlier approximation algorithm for constraint satisfaction problems (CSPs) on high-dimensional expanders (HDXs). It gives the first polynomial-time decoder for explicit almost optimal binary codes. Our second decoding algorithm is based on generalizations of weak regularity to expanding hypergraphs, and it runs in near-linear time.

Regarding almost optimal expansion, we show how to efficiently transform an arbitrary bounded-degree expander into an almost optimal (i.e., almost Ramanujan) one in a way that preserves its structural properties, thereby enabling many applications. In particular, we obtain almost optimal Cayley expanders from any expanding group. This is done by generalizing the breakthrough techniques of Ta-Shma'17 from scalars to operators, and it is an example of coding theory nourishing the study of expansion in return.

Surprisingly, the precise parameters defining optimal binary codes are not known! This longstanding question is intimately connected to optimization, from which the best bounds were derived in the 1970s. We now have a provably complete hierarchy of linear programs for the important class of linear codes. This gives a provably sufficient avenue for making progress on this problem.

In this talk, I will give an account of these developments and these synergetic interactions, and point to some future research directions. Expansion, codes and optimization are vibrant areas of TCS with several intriguing questions of their own, many interactions, and great potential for even further interactions with other areas. (This presentation is based on joint work with many collaborators. Please see my webpage for details.)
Monday, Dec 19, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Or Zamir
Title: Algorithmic Applications of Hypergraph and Partition Containers
Abstract:
We present a general method to convert algorithms into faster algorithms for almost-regular input instances. Informally, an almost-regular input is an input in which the maximum degree is larger than the average degree by at most a constant factor. This family of inputs vastly generalizes several families of inputs for which we commonly have improved algorithms, including bounded-degree inputs and random inputs. It also generalizes families of inputs for which we don't usually have faster algorithms, including regular inputs of arbitrarily high degree and all very dense inputs. We apply our method to achieve breakthroughs in exact algorithms for several central NP-Complete problems including k-SAT, Graph Coloring, and Maximum Independent Set. Our main tool is the first algorithmic application of the relatively new Hypergraph Container Method (Saxton and Thomason 2015, Balogh, Morris and Samotij 2015). This recent breakthrough, which generalizes an earlier version for graphs (Kleitman and Winston 1982, Sapozhenko 2001), has been used extensively in recent years in extremal combinatorics. An important component of our work is a new generalization of (hyper-)graph containers to Partition Containers.
Monday, Dec 05, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Amos Korman
Title: An Algorithmic Perspective to Collective Behavior
Abstract:
In this talk, I will present a new interdisciplinary approach that I have been developing in recent years, aiming to build a bridge between the fields of algorithm theory and collective (animal) behavior. Ideally, an algorithmic perspective on biological phenomena can provide a level of fundamental understanding that is difficult to achieve using typical computational tools employed in this area of research (e.g., differential equations or computer simulations). In turn, this fundamental understanding can provide both qualitative and quantitative predictions that can guide biological research in unconventional directions. I will demonstrate this novel approach by presenting a sequence of works on collective ant navigation, whose experimental part was done in collaboration with the Feinerman ant lab at the Weizmann Institute. In the second part of the talk, I will present a recent result (published in Science Advances 2021) regarding the search efficiency of common animal movement patterns, addressing a long-standing open problem in the area of foraging. I will conclude the talk by discussing potential avenues to employ an algorithmic perspective in biological contexts.
Monday, Nov 14, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Shachar Lovett
Title: The sunflower conjecture and its connections to computational complexity
Abstract:
The sunflower conjecture is a famous open problem in combinatorics, asked by Erdos and Rado in 1960. A sunflower is a collection of sets with the same pairwise intersection pattern. The conjecture asks if given a large enough family of sets, it must contain a sunflower. Erdos and Rado proved that this is indeed true, but they conjectured that the quantitative bounds that they showed can be improved. In the last 60 years, many applications of sunflowers were found (both in math and in CS), but the original bounds were not significantly improved. Two years ago, we were able to significantly improve the quantitative bounds, getting much closer to the conjectured bound. Perhaps surprisingly, our new techniques draw inspiration from computational complexity, concretely from the Hastad switching lemma. The switching lemma states that any DNF formula simplifies if most of its inputs are fixed to random values. We show that if the DNF is pseudo-random, then in fact it suffices to restrict only a small fraction of the inputs. I will explain this result and how it is connected to the study of the sunflower conjecture. No background knowledge is assumed. Based on joint work with Ryan Alweiss, Kewen Wu and Jiapeng Zhang.
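For small families the definition can be checked directly; a brute-force sketch (illustrative only, and exponentially far from the clever counting that the Erdos-Rado bound and its recent improvements rely on):

```python
from itertools import combinations

def find_sunflower(sets, p=3):
    # A p-sunflower: p sets whose pairwise intersections all equal the
    # common "core" (the intersection of all p). Pairwise-disjoint sets
    # form a sunflower with an empty core. Erdos-Rado: any family of more
    # than w!(p-1)^w distinct sets of size w contains a p-sunflower.
    for group in combinations(sets, p):
        core = frozenset.intersection(*group)
        if all(a & b == core for a, b in combinations(group, 2)):
            return group
    return None
```

For example, {1,2}, {1,3}, {1,4} form a 3-sunflower with core {1}, while {1,2}, {2,3}, {1,3} contain no 3-sunflower.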
Thursday, Nov 10, 2022, 14:30
Foundations of Computer Science Seminar, Room 1
Speaker: Kasper Green Larsen
Title: Optimal Weak to Strong Learning
Abstract:
The classic algorithm AdaBoost allows one to convert a weak learner, that is, an algorithm that produces a hypothesis which is slightly better than chance, into a strong learner, achieving arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost and all other weak to strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner. This work was accepted at NeurIPS'22 and is joint work with Martin Ritzert from Aarhus University.
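For reference, the classic AdaBoost loop that the new algorithm improves upon (in sample complexity, not in the loop itself) fits in a few lines; this is the textbook algorithm with an illustrative threshold-stump weak learner, not the paper's new learner:

```python
import math

def adaboost(X, y, weak_learner, T):
    # X: list of feature tuples; y: labels in {-1, +1}.
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(T):
        h = weak_learner(X, y, w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        err = min(max(err, 1e-12), 1 - 1e-12)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # vote weight
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        Z = sum(w)
        w = [wi / Z for wi in w]                    # re-normalize
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

def stump_learner(X, y, w):
    # Weak learner: best single-coordinate threshold stump under weights w.
    best = None
    for d in range(len(X[0])):
        for t in sorted({x[d] for x in X}):
            for s in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if (s if xi[d] >= t else -s) != yi)
                if best is None or err < best[0]:
                    best = (err, d, t, s)
    _, d, t, s = best
    return lambda x, d=d, t=t, s=s: s if x[d] >= t else -s
```

The talk's result concerns how many training samples such a strong learner needs for a given generalization error, not the per-round mechanics shown here.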
Monday, Oct 31, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: James Orlin
Title: All pairs shortest paths in O(nm) time
Abstract:
We present an O(nm) time algorithm for all-pairs shortest paths computations in a directed graph with n nodes, m arcs, and nonnegative integer arc costs. This matches the complexity bound attained by Thorup for the all-pairs problems in undirected graphs. Our main insight is that shortest paths problems with approximately balanced directed cost functions can be solved similarly to the undirected case. Our algorithm starts with a preprocessing step that transforms the cost vector into a reduced cost vector that is approximately balanced. Using these reduced costs, every shortest path query can be solved in O(m) time using an adaptation of Thorup's component hierarchy method. The balancing result is of independent interest, and gives the best currently known approximate balancing algorithm for the problem. This research is joint work with Laszlo Vegh.
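The role of reduced costs can be illustrated with the classical Johnson-style potential transformation, where Bellman-Ford potentials h make every reduced arc cost c(u,v) + h[u] - h[v] nonnegative and one Dijkstra per source then yields APSP. The paper's preprocessing is a different (approximate-balancing) transformation, so treat this only as a sketch of the reduced-cost idea:

```python
import heapq

def johnson_apsp(n, edges):
    # edges: list of (u, v, c). Bellman-Ford from a virtual source gives
    # potentials h with c(u,v) + h[u] - h[v] >= 0 on every arc; Dijkstra
    # on reduced costs then works, and d(s,t) = d'(s,t) - h[s] + h[t].
    INF = float('inf')
    h = [0.0] * n
    for _ in range(n - 1):
        for u, v, c in edges:
            if h[u] + c < h[v]:
                h[v] = h[u] + c
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((v, c + h[u] - h[v]))     # reduced, nonnegative cost
    D = [[INF] * n for _ in range(n)]
    for s in range(n):
        dist = D[s]
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, c in adj[u]:
                if d + c < dist[v]:
                    dist[v] = d + c
                    heapq.heappush(pq, (d + c, v))
    return [[D[s][t] - h[s] + h[t] if D[s][t] < INF else INF
             for t in range(n)] for s in range(n)]
```

The point of the abstract is that a stronger notion, approximate balance, lets each post-preprocessing query run in O(m) rather than Dijkstra's O(m + n log n).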
Monday, Oct 03, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Edward Hirsch
Title: (Semi)algebraic proof complexity
Abstract:
A propositional proof system is roughly a polynomial-time procedure that checks certificates (proofs) for propositional tautologies (or another co-NP-complete language); thus, the existence of a proof system possessing polynomial-size proofs is equivalent to NP = co-NP. "Steve Cook's program" suggests considering more and more powerful specific proof systems and proving exponential lower bounds for them. An algebraic proof system proves that a system of polynomials f_1,...,f_m has no common roots. For example, Hilbert's Nullstellensatz provides a certificate consisting of polynomials g_1,...,g_m such that \sum_i f_i g_i = 1. A semialgebraic proof system can provide proofs of infeasibility for a system of polynomial inequalities. Such proofs were brought to proof complexity in the late 90s by Pavel Pudlak, who suggested a proof system based on the Lovasz--Schrijver optimization procedure, and by Dima Grigoriev and Nicolai Vorobjev, who suggested using the Positivstellensatz theorem. After a short introduction to the field, I will show how close we are to converting semialgebraic proofs into algebraic proofs and prove an exponential lower bound conditioned on the Shub and Smale conjecture about computing n!. I will also speak on the latest developments. Main reference: Yaroslav Alekseev, Dima Grigoriev, Edward A. Hirsch, and Iddo Tzameret. Semi-algebraic proofs, IPS lower bounds, and the \tau-conjecture: can a natural number be negative? Proceedings of STOC'20.
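A tiny Nullstellensatz certificate makes the definition concrete: the system {x, x-1} has no common root, as witnessed by g_1 = 1, g_2 = -1, since x·1 + (x-1)·(-1) = 1. The sketch below (names illustrative) checks the identity at enough points: the left-hand side has degree at most 1, so agreeing with the constant 1 at two or more points certifies the polynomial identity.

```python
def check_certificate(fs, gs, points):
    # Verify sum_i f_i(x) * g_i(x) == 1 at the sample points. If the
    # degree of the sum is less than len(points), this certifies the
    # identity sum_i f_i g_i = 1, hence the f_i have no common root.
    return all(abs(sum(f(x) * g(x) for f, g in zip(fs, gs)) - 1) < 1e-9
               for x in points)

# Nullstellensatz certificate for the infeasible system {x, x - 1}:
fs = [lambda x: x, lambda x: x - 1]
gs = [lambda x: 1.0, lambda x: -1.0]
```

A common root of the f_i would make every term f_i(x)g_i(x) vanish, contradicting the identity; that is the entire soundness argument for this proof system.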
Monday, Aug 15, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Alexander Lubotzky
Title: Stability and testability of permutations' equations
Abstract:
We study a new direction of research in "property testing": permutation equations. Let A and B be two permutations in Sym(n) that "almost commute"; are they a small deformation of permutations that truly commute? More generally, if R is a system of word equations in variables X = x_1,...,x_d and A_1,...,A_d are permutations which are almost a solution, are they near true solutions? It turns out that the answer to this question depends only on the group presented by the generators X and relations R. This leads to the notions of "stable groups" and "testable groups" and calls for some group theoretic methods for answering these questions. We will present a few results and methods which were developed in recent years to check whether a group is stable/testable (e.g., using IRSs = invariant random subgroups). We will also describe the connection of this subject with locally testable codes as well as with the long-standing problem of whether every group is sofic.
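The "almost commute" metric is just normalized Hamming distance between the two compositions; a sketch, with an illustrative near-commuting pair obtained by slightly deforming two commuting cyclic shifts:

```python
def compose(a, b):
    # (a o b)(i) = a[b[i]], permutations as lists
    return [a[b[i]] for i in range(len(b))]

def hamming(a, b):
    # normalized Hamming distance between two permutations
    return sum(x != y for x, y in zip(a, b)) / len(a)

n = 1000
shift = lambda k: [(i + k) % n for i in range(n)]
A = shift(1)                 # cyclic shifts commute exactly
B = shift(2)
B[0], B[1] = B[1], B[0]      # deform B by one transposition
```

Here d(AB, BA) = 3/n, so A and B almost commute, and B is at distance 2/n from shift(2), which commutes with A exactly. Stability asks whether such a nearby exact solution always exists, for every almost-solution.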
Monday, Jul 18, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Omri Weinstein
Title: How Many Linear Equations are Required to Solve a Linear Program?
Abstract:

Linear Programming is the backbone of algorithm design and combinatorial optimization. The fastest known algorithms for solving linear programs, both in theory and practice, are based on Interior-Point Methods. The core idea behind IPMs is to iteratively use Newton approximation to reduce a linear program to a sequence of ~\sqrt{n} linear systems. Whether this number of iterations can be improved is one of the major open problems in convex optimization. A long line of work has shown that \sqrt{n} is indeed optimal for the special class of *self-concordant* barrier IPMs [Nesterov-Nemirovski '94], but for general IPMs very little is known.

We propose a purely information-theoretic query model for studying the rate of convergence of IPMs, via linear-system queries: each iteration of the algorithm can adaptively specify an arbitrary diagonal matrix H (an ellipsoid) and a vector v, and the oracle returns the least-squares minimizer \arg\min_x ||Ax - v||_H. We show this model captures all known (deterministic) IPMs. Our main result is an \Omega(\sqrt{n}) lower bound on the iterations of any deterministic algorithm in this model, albeit for an (exponentially) ill-conditioned LP. In this talk I will describe this progress, assuming no prior knowledge of IPMs.

Based on joint work with Shunhua Jiang, Rasmus Kyng and Ming Ding.
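A single query in this model can be sketched as follows: the oracle receives a diagonal weight matrix H and a target v, and returns the weighted least-squares minimizer, computed here via the normal equations (A^T H A)x = A^T H v. This is an illustrative pure-Python sketch for two variables, not the paper's formal model or code:

```python
# One "linear-system query": given diagonal weights H (the ellipsoid) and
# a vector v, return argmin_x ||Ax - v||_H via the weighted normal
# equations (A^T H A) x = A^T H v. Two-variable toy example.

def solve_query(A, H, v):
    m = len(A)                      # rows; each row of A has 2 columns
    M = [[0.0, 0.0], [0.0, 0.0]]    # normal matrix A^T H A
    b = [0.0, 0.0]                  # right-hand side A^T H v
    for i in range(m):
        for r in range(2):
            b[r] += A[i][r] * H[i] * v[i]
            for c in range(2):
                M[r][c] += A[i][r] * H[i] * A[i][c]
    # Solve the 2x2 system by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x0 = (b[0] * M[1][1] - M[0][1] * b[1]) / det
    x1 = (M[0][0] * b[1] - b[0] * M[1][0]) / det
    return [x0, x1]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
H = [1.0, 1.0, 4.0]                 # diagonal weights chosen by the solver
v = [1.0, 2.0, 2.0]
x = solve_query(A, H, v)
print(x)
```

The lower bound says that any deterministic algorithm must issue \Omega(\sqrt{n}) such queries on some (ill-conditioned) LP, no matter how cleverly it picks H and v.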


Monday, Jul 11, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: David Karger
Title: Fast and Simple Algorithms for All-Terminal Network Reliability
Abstract:
The all-terminal network reliability problem asks to compute, for a given graph $G$, the probability that the graph becomes disconnected when all edges fail independently with some probability. This fundamental problem in graph theory and reliability has been studied for decades, with the first fully polynomial randomized approximation scheme (FPRAS) given in 1995. In this work, I will describe a particularly simple polynomial-time algorithm for this problem, with an equally simple proof of correctness. It relies on generating a low-variance unbiased estimator using "partially sampled" graphs, then estimating their reliability recursively. The resulting algorithm is far simpler and much faster than the first one. The analysis introduces techniques that may be useful for a broader class of reliability problems. I will then sketch more sophisticated techniques that can be used to improve the running time of this approximation scheme. In particular, I will show that as the edge failure probability decreases there is a rapid "phase transition" from a regime where the graph is likely disconnected to one in which all cuts can be treated as failing independently, which dramatically simplifies the problem. The analysis relies on some new analysis of the Contraction Algorithm---a stochastic process of contracting randomly chosen graph edges until the graph has been fully collapsed.
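As a point of reference (not the algorithm from the talk), the naive Monte Carlo estimator samples edge failures and checks connectivity. It is simple but needs enormously many samples when the true disconnection probability is tiny, which is exactly the regime the low-variance recursive estimator handles:

```python
import random

def connected(n, edges):
    """DFS connectivity check on an undirected graph on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {0}
    stack = [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def unreliability_mc(n, edges, p, trials=20000, seed=0):
    """Naive Monte Carlo estimate of Pr[graph disconnects] when each
    edge fails independently with probability p."""
    rng = random.Random(seed)
    fails = sum(
        not connected(n, [e for e in edges if rng.random() >= p])
        for _ in range(trials)
    )
    return fails / trials

# A 4-cycle disconnects iff at least 2 of its 4 edges fail, so for
# p = 1/2 the exact answer is 1 - 5/16 = 11/16 = 0.6875.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
est = unreliability_mc(4, cycle, p=0.5)
print(est)
```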
Monday, Jun 27, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Uri Stemmer
Title: On the Robustness of CountSketch to Adaptive Inputs
Abstract:
CountSketch is a popular dimensionality reduction technique that maps vectors to a lower dimension using randomized linear measurements while supporting the recovery of the L2-heavy hitters of the input vectors. We study the robustness of CountSketch in adaptive settings where input vectors may depend on the output from prior inputs. Adaptive settings arise in processes with feedback or with adversarial attacks. We show that
(1) CountSketch itself is not robust, and can be attacked with a number of queries of the order of the sketch size.
(2) A slight variation of CountSketch is robust, allowing for a quadratic number of queries in the sketch size, which is an improvement factor of $\sqrt{k}$ over prior work (for $k$ heavy hitters).
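For concreteness, here is a minimal CountSketch in the non-adaptive setting described above (parameters and variable names are ours): each of d rows hashes coordinate i to a bucket with a random sign, and a coordinate is estimated by the median of its signed bucket values across rows.

```python
import random

rng = random.Random(42)
n, d, w = 1000, 7, 64          # dimension, sketch rows, buckets per row
h = [[rng.randrange(w) for _ in range(n)] for _ in range(d)]   # bucket hashes
s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(d)]  # sign hashes

def sketch(x):
    """Linear measurement: table[j][b] accumulates signed coordinates."""
    table = [[0.0] * w for _ in range(d)]
    for j in range(d):
        for i in range(n):
            table[j][h[j][i]] += s[j][i] * x[i]
    return table

def estimate(table, i):
    """Median-of-rows estimate of coordinate i."""
    vals = sorted(s[j][i] * table[j][h[j][i]] for j in range(d))
    return vals[d // 2]

x = [0.0] * n
x[17] = 100.0                  # one L2-heavy coordinate
table = sketch(x)
print(estimate(table, 17))     # prints 100.0
```

The attack in result (1) exploits exactly this structure: an adaptive adversary who sees the estimates can learn the hidden hashes h and s and craft inputs that break the guarantee.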
Based on joint work with Edith Cohen, Xin Lyu, Jelani Nelson, Tamas Sarlos, and Moshe Shechner.
Monday, Jun 13, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Tomer Koren
Title: Benign underfitting of Stochastic Gradient Descent, and some other peculiarities
Abstract:
Why do gradient methods, such as Stochastic Gradient Descent (SGD), generalize well even when used to train predictors with far more parameters than training samples, and without any explicit form of capacity control (regularization)? In this talk, I will revisit this elusive question in the context of (stochastic) convex optimization and discuss some surprising theoretical results and phenomena that challenge the current conventional wisdom in machine learning. Based on joint works with Idan Amir, Amit Attia, Roi Livni, Yishay Mansour, Uri Sherman and Matan Schliserman.
Monday, May 30, 2022, 16:00
Foundations of Computer Science Seminar, Room 155
Speaker: Aaron Bernstein
Title: Negative-Weight Single-Source Shortest Paths in Near-linear Time
Abstract: ** please note the unusual time **

We present a randomized algorithm that computes single-source shortest paths (SSSP) in O(m\log^8(n)\log W) time when edge weights are integral, can be negative, and are >= -W. This essentially resolves the classic negative-weight SSSP problem. The previous bounds are ~O((m+n^{1.5})\log W) and m^{4/3+o(1)}\log W. Near-linear time algorithms were known previously only for the special case of planar directed graphs. Also, independently of our work, the recent breakthrough on min-cost flow [Chen, Kyng, Liu, Peng, Probst-Gutenberg and Sachdeva] implies an algorithm for negative-weight SSSP with running time m^{1+o(1)}.

In contrast to all recent developments that rely on sophisticated continuous optimization methods and dynamic algorithms, our algorithm is simple: it requires only a simple graph decomposition and elementary combinatorial tools. In fact, ours is the first combinatorial algorithm for negative-weight SSSP to break through the classic O(m\sqrt{n}\log nW) bound from over three decades ago [Gabow and Tarjan SICOMP'89].
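For context, the textbook O(mn) combinatorial baseline for negative-weight SSSP (handling negative edges, assuming no negative cycle is reachable from the source) is Bellman-Ford, which the near-linear result dramatically improves on:

```python
# Classic Bellman-Ford: up to n-1 rounds of relaxing every edge.
def bellman_ford(n, edges, src):
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:           # edge u -> v with weight w
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit once distances settle
            break
    return dist

edges = [(0, 1, 4), (0, 2, 2), (1, 2, -3), (2, 3, 2), (1, 3, 5)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 3]
```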

Monday, May 23, 2022, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Ron Rothblum
Title: Proving as Fast as Computing: Succinct Arguments with Constant Prover Overhead
Abstract:
Succinct arguments are proof systems that allow a powerful, but untrusted, prover to convince a weak verifier that an input x belongs to a language L \in NP, with communication that is much shorter than the NP witness. Such arguments, which grew out of the theory literature, are now drawing immense interest also in practice, where a key bottleneck is the high computational cost of proving correctness. In this work we address this problem by constructing succinct arguments for general computations, expressed as Boolean circuits (of bounded fan-in), with a strictly linear-size prover. The soundness error of the protocol is an arbitrarily small constant. Prior to this work, succinct arguments were known with a quasi-linear-size prover for general Boolean circuits, or with linear size only for arithmetic circuits defined over large finite fields. Joint work with Noga Ron-Zewi.
Monday, Apr 25, 2022, 11:15
Foundations of Computer Science Seminar, Room 1
Speaker: Michal Feldman
Title: Prophet and Secretary Online Algorithms for Graph Matching
Abstract:
A common tension in market scenarios is choosing the right timing to commit to a decision. This tension is captured by the mathematical literature of optimal stopping theory, within two models that have come to be known as the secretary problem and the prophet inequality. In these models, a sequence of originally-unknown values arrive one by one. Upon arrival, the online algorithm observes the value and should decide whether or not to accept it. In secretary settings, the value sequence is arbitrary, but the values arrive in a uniformly random order. In prophet settings, every value is drawn from a known probability distribution, but the arrival order is arbitrary. In this talk I will briefly review the basic settings of secretary and prophet, and will present extensions to graph matching. These include matching in bipartite graphs with 1-sided vertex arrival and matching in general graphs (with general arrival). Based on joint work with Tomer Ezra, Nick Gravin and Zhihao Tang.
Monday, Jan 03, 2022, 11:15
Foundations of Computer Science Seminar
Speaker: Or Zamir
Title: Tight Space Complexity of the Coin Problem
Abstract:

In the coin problem (also known as bias amplification) we are asked to distinguish, with probability at least 2/3, n i.i.d. coins that are heads with probability (1/2+\beta) from ones that are heads with probability (1/2-\beta). We are interested in the space complexity of the coin problem, corresponding to the width of a read-once branching program solving the problem.

The coin problem becomes more difficult as \beta becomes smaller.
Statistically, it can be solved whenever \beta = \Omega(n^{-1/2}), using counting.
It has been previously shown that for \beta = O(n^{-1/2}), counting is essentially optimal (equivalently, width poly(n) is necessary [Braverman-Garg-Woodruff FOCS'20]).
On the other hand, the coin problem only requires O(\log n) width for \beta>n^{-c}, for any constant c<\log_2(\sqrt{5}-1)\approx 0.306 (following the low-width simulation of the AND-OR tree of [Valiant JAlg'84]).

In this work, we close the gap between the bounds, showing a tight threshold between the values of \beta=n^{-c} where O(\log n) width suffices and the regime where poly(n) width is needed, with a transition at c=1/3.
This gives a complete characterization (up to constant factors) of the memory complexity of solving the coin problem, for all values of bias \beta.
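The counting upper bound can be seen in a quick simulation (a sketch of the statistical fact only, not of the width-bounded branching-program model from the talk): flip all n coins and guess the bias sign from whether more than n/2 came up heads.

```python
import random

# Counting solves the coin problem when beta ~ n^(-1/2): the heads
# count deviates from n/2 by ~ n*beta, against noise of ~ sqrt(n)/2.
def guess_sign(n, beta, sign, rng):
    p = 0.5 + sign * beta
    heads = sum(rng.random() < p for _ in range(n))
    return 1 if heads > n / 2 else -1

rng = random.Random(1)
n = 2500
beta = 3 * n ** -0.5       # a few standard deviations above the threshold
trials = 400
correct = sum(guess_sign(n, beta, s, rng) == s
              for _ in range(trials) for s in (1, -1))
rate = correct / (2 * trials)
print(rate)                # close to 1, comfortably above 2/3
```

The subtlety the talk addresses is doing this with far less memory than the \log n bits an explicit counter needs.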

We introduce new techniques in both bounds. For the upper bound, we give a construction based on recursive majority that does not require a memory stack of size \log n bits.
For the lower bound, we introduce new combinatorial techniques for analyzing progression of the success probabilities in read-once branching programs.

The talk is based on joint work with Mark Braverman and Sumegha Garg.



Monday, Dec 27, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Dor Minzer
Title: On global hypercontractivity and optimal testing of Reed-Muller codes
Abstract:

Reed-Muller codes are an important family of error-correcting codes that plays a crucial role in a number of results in theoretical computer science. For field size $q$ and degree $d$, it was proved [Bhattacharyya et al., Haramaty et al.] that the corresponding Reed-Muller code is testable with $C_q \cdot q^{d}/\epsilon$ queries. We give a new, more natural proof of this result that achieves a better dependency of $C_q$ on the field size $q$ -- from a tower-type bound to a polynomial bound. Our proof applies more generally to the setting of affine lifted codes, and uses the notion of global hypercontractivity, a recent generalization of classical hypercontractive inequalities.

In this talk, we will give a gentle introduction to the concept of global hypercontractivity, and explain its relation to the problem of optimal testing of Reed-Muller codes.

Based on a joint work with Tali Kaufman.

Monday, Dec 20, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Thomas Vidick
Title: Testing quantum systems in the high-complexity regime
Abstract:
From carefully crafted quantum algorithms to information-theoretic security in cryptography, a quantum computer can achieve impressive feats with no classical analogue. Can their correct realization be verified? When the power of the device greatly surpasses that of the user, what means of control remain available to the user? In the talk I will focus on a concrete realization of this question, which is the problem of testing high-dimensional entanglement. I will describe how classical techniques, coming in particular from the proof of the PCP theorem, can be extended to the quantum setting. This extension leads to powerful quantum tests as well as a new perspective on classic constructions. Time permitting I will present applications to interactive proofs, to approximate representation testing, and to norm embeddings. The main technical result in the talk is (an extension of) joint work with Ji, Natarajan, Wright and Yuen, "Quantum soundness of testing tensor codes," arXiv:2111.08131, to appear in FOCS'21.
Monday, Dec 13, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Merav Parter
Title: New Diameter Reducing Shortcuts: Breaking the $O(\sqrt{n})$ Barrier
Abstract:

For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$, guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open.

In this talk, I will present the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.
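The flavor of the folklore $\sqrt{n}$ tradeoff can be illustrated on a directed path (a toy special case, not the talk's construction): adding roughly n transitive-closure edges of stride $\lceil\sqrt{n}\rceil$ cuts the diameter from n-1 to about $2\sqrt{n}$.

```python
from collections import deque
import math

def ecc_from(n, adj, src=0):
    """Largest BFS distance from src (all nodes here are reachable,
    so this lower-bounds the diameter)."""
    dist = [-1] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist)

n = 10000
step = math.isqrt(n)
# Directed path 0 -> 1 -> ... -> n-1: diameter n-1.
path = [[i + 1] if i + 1 < n else [] for i in range(n)]
d0 = ecc_from(n, path)

# Add < n shortcut edges i -> i+step, all from the transitive closure.
shortcut = [list(a) for a in path]
for i in range(n - step):
    shortcut[i].append(i + step)
d1 = ecc_from(n, shortcut)
print(d0, d1)  # n-1 versus roughly 2*sqrt(n)
```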

Joint work with Shimon Kogan.

Monday, Nov 22, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Irit Dinur
Title: Locally testable codes with constant rate, distance, and locality
Abstract:
A locally testable code (LTC) is an error correcting code that admits a very efficient membership test. The tester reads a constant number of (randomly - but not necessarily uniformly - chosen) bits from a given word, and rejects words with probability proportional to their distance from the code. LTCs were initially studied as important components of PCPs, and since then the topic has evolved on its own. High rate LTCs could be useful in practice: before attempting to decode a received word, one can save time by first quickly testing if it is close to the code. An outstanding open question has been whether there exist LTCs that are "good" in the coding theory sense of having both constant relative distance and rate, and at the same time testable with a constant number of queries. In this talk, I will describe a construction of such codes based on a new object called a left-right Cayley complex. Joint work with Shai Evra, Ron Livne, Alex Lubotzky, and Shahar Mozes.
Monday, Nov 15, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Shay Mozes
Title: On edit distance oracles and planar distance oracles
Abstract:

In this talk I will discuss two related data structure problems. The first is to preprocess two strings S and T, each of length n, into a data structure that can report the optimal alignment score (Longest common subsequence / edit distance) between any substring of S and any substring of T. I will describe a data structure with query time (log n)^(2+o(1)), and construction time and space n^(2+o(1)). This is optimal up to polylogarithmic/subpolynomial factors since computing just the alignment between S and T cannot be done in subquadratic time assuming the strong exponential time hypothesis.


The second problem is to preprocess a directed weighted planar graph into a data structure that reports the distance between any two vertices. Using the concept of Voronoi diagrams, dramatic progress was made on such distance oracles in the past few years. We now have oracles of almost-linear n^{1+o(1)} size that answer queries in (log n)^(2+o(1)) time. However, the construction time of these oracles is not nearly linear, but O(n^{3/2}), which is not known to be optimal.

I will describe how ideas developed for planar distance oracles lead to the data structure for the string alignment problem. The structure of the alignment graph allows for a simpler presentation of some of the techniques involved, and is further exploited to obtain a faster construction time. It is not uncommon that techniques originally developed for pattern matching problems are later extended and generalized to planar graphs. Here ideas propagate in the opposite direction; techniques originally developed for planar graphs are specialized to a pattern matching problem.
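For contrast, the naive per-query baseline is the classic O(n^2) LCS dynamic program, recomputed from scratch for every substring pair; the data structure above answers such queries in polylog time after n^(2+o(1)) preprocessing.

```python
# Textbook LCS dynamic program: dp[i][j] = LCS length of s[:i] and t[:j].
def lcs(s, t):
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(s)][len(t)]

S, T = "ABCBDAB", "BDCABA"
# An oracle query may ask about any substring pair, e.g. lcs(S[1:6], T[2:6]);
# here we just run the quadratic DP on the full strings.
print(lcs(S, T))  # 4 (e.g. "BCBA")
```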


Based on joint works with Panos Charalampopoulos, Pawel Gawrychowski and Oren Weimann (SODA 2018, STOC 2019, ICALP 2021).



Monday, Nov 01, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Rafael Pass
Title: Cryptography from the Hardness of Kolmogorov Complexity
Abstract:
Whether one-way functions (OWFs) exist is the most important outstanding problem in Cryptography. We will survey a recent thread of work (Liu-Pass, FOCS'20, Liu-Pass, STOC'21, Liu-Pass, Crypto'21) showing the equivalence of the existence of OWFs and (mild) average-case hardness of various problems related to time-bounded Kolmogorov complexity that date back to the 1960s. These results yield the first natural, and well-studied, computational problems characterizing the feasibility of the central private-key primitives and protocols in Cryptography. Based on joint works with Yanyi Liu.
Monday, Oct 04, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Nofar Carmeli
Title: The Fine-Grained Complexity of Answering Database Queries
Abstract:
We wish to identify the queries that can be solved with close to optimal time guarantees over relational databases. Computing all query answers requires at least linear time before the first answer (to read the input and determine the answer's existence), and then we must allow enough time to print all answers (which may be many). Thus, we aspire to achieve linear preprocessing time and constant or logarithmic time per answer. A known dichotomy classifies Conjunctive Queries into those that admit such enumeration and those that do not: the main difficulty of query answering is joining tables, which can be done efficiently if and only if the join query is acyclic. However, the join query usually does not appear in a vacuum; for example, it may be part of a larger query, or it may be applied to a database with dependencies. We show how to use this context for more efficient computation and study how the complexity changes in these settings. Next, we aspire for an even more powerful solution for query answering: a structure that simulates an array containing the query answers. Such a structure can be used for example to enumerate all answers in a statistically meaningful order or to efficiently compute a boxplot of query answers. We call this simulation random access and study for which queries random access can be achieved with near-optimal guarantees. Our results are accompanied by conditional lower bounds showing that our algorithms can be applied to all tractable queries in some cases. Among our results, we show that a union of tractable Conjunctive Queries may be intractable w.r.t. random access, but a union of intractable Conjunctive Queries may be tractable w.r.t. enumeration.
Monday, Jun 28, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Keren Censor-Hillel
Title: Distributed Subgraph Finding: Progress and Challenges
Abstract:
This talk surveys the exciting recent progress made in understanding the complexity of distributed subgraph finding problems. I will discuss techniques and highlight some intriguing open problems.
Monday, Jun 21, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Noga Ron-Zewi
Title: LDPC Codes Achieve List Decoding Capacity
Abstract:
We show that random Low-Density Parity Check (LDPC) codes achieve list decoding capacity with high probability. These are the first graph-based codes shown to have this property. This result follows from two other more general results that may be of independent interest: 1. A simple characterization of the threshold for when ‘local’ combinatorial properties are likely to be satisfied by a random subspace over a finite field. 2. Any ‘local’ property that is satisfied with high probability by a random linear code is also satisfied with high probability by a random LDPC code. Based on joint work with Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters.
Monday, Jun 14, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Shay Moran
Title: What Is The Sample Complexity of Differentially Private Learning?
Abstract:
The increase in machine learning applications which involve private and personal data highlights the need for algorithms that handle the data *responsibly*. While this need has been successfully addressed by the field of differentially private machine learning, the cost of privacy remains poorly understood: How much data is needed for differentially private learning? How much more data does private learning require compared to learning without privacy constraints? We will survey some of the recent progress towards answering these questions in the distribution-free PAC model, including the Littlestone-dimension-based *qualitative* characterization and the relationship with online learning. If time allows, we will also discuss this question in more general (distribution- and data-dependent) learning models. Based on joint works with Noga Alon, Mark Bun, Roi Livni, and Maryanthe Malliaris.
Monday, May 31, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Iftach Haitner
Title: A Tight Lower Bound on Adaptively Secure Full-Information Coin Flip
Abstract:
In a distributed coin-flipping protocol [Blum, ACM Transactions on Computer Systems '83], the parties try to output a common (close to) uniform bit, even when some adversarially chosen parties try to bias the common output. In an adaptively secure full-information coin flip [Ben-Or and Linial, FOCS '85], the parties communicate over a broadcast channel and a computationally unbounded adversary can choose which parties to corrupt during the protocol execution. Ben-Or and Linial proved that the n-party majority protocol is resilient to O(\sqrt{n}) corruptions (ignoring poly-logarithmic factors), and conjectured this is a tight upper bound for any n-party protocol (of any round complexity). Their conjecture was proved correct for the limited variant of single-turn (each party sends a single message), single-bit (a message is one bit) protocols [Lichtenstein, Linial, and Saks, Combinatorica '89], for symmetric protocols [Goldwasser, Kalai, and Park, ICALP '15], and recently for (arbitrary message length) single-turn protocols [Kalai, Komargodski, and Raz, DISC '18]. Yet, the question for many-turn (even single-bit) protocols was left completely open. In this work we close the above gap, proving that no n-party protocol (of any round complexity) is resilient to \omega(\sqrt{n}) (adaptive) corruptions. That is, up to polylog factors, the single-bit, single-message majority protocol is the optimal protocol against adaptive corruptions.
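The \sqrt{n} threshold for majority can be felt in a quick simulation. This is a deliberately simplified sketch: our toy adversary just fixes the bits of the last k speakers to 1, which only loosely models adaptive corruption, but it already biases the output noticeably once k is a few multiples of \sqrt{n}.

```python
import random

# Toy simulation of the n-party majority protocol under corruption:
# n - k parties broadcast fair bits; a crude adversary sets the last
# k bits to 1. The honest sum fluctuates by ~ sqrt(n)/2, so k ~ sqrt(n)
# corruptions shift the majority outcome with noticeable probability.
def majority_output(n, k, rng):
    honest = sum(rng.randrange(2) for _ in range(n - k))
    return 1 if honest + k > n / 2 else 0

rng = random.Random(7)
n = 2500
k = 2 * int(n ** 0.5)      # 2*sqrt(n) corrupted parties
trials = 1000
rate = sum(majority_output(n, k, rng) for _ in range(trials)) / trials
print(rate)                # well above 1/2
```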
Monday, May 24, 2021, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Oded Goldreich
Title: Non-adaptive vs Adaptive Queries in the Dense Graph Testing Model by Oded Goldreich and Avi Wigderson
Abstract:
We study the relation between the query complexity of adaptive and non-adaptive testers in the dense graph model. It has been known for a couple of decades that the query complexity of non-adaptive testers is at most quadratic in the query complexity of adaptive testers. We show that this general result is essentially tight; that is, there exist graph properties for which any non-adaptive tester must have query complexity that is almost quadratic in the query complexity of the best general (i.e., adaptive) tester. More generally, for every $q:\N\to\N$ such that $q(n)\leq{\sqrt n}$ and constant $c\in[1,2]$, we show a graph property that is testable in $\Theta(q(n))$ queries, but its non-adaptive query complexity is $\Theta(q(n)^c)$, omitting $\poly(\log n)$ factors and ignoring the effect of the proximity parameter $\epsilon$. Furthermore, the upper bounds hold for one-sided error testers, and are at most quadratic in $1/\epsilon$. These results are obtained through the use of general reductions that transport properties of ordered structures (like bit strings) to those of unordered structures (like unlabeled graphs). The main features of these reductions are query-efficiency and preservation of distance to the properties. This method was initiated in our prior work (ECCC, TR20-149), and we significantly extend it here.
Monday, Mar 01, 2021, 10:30
Foundations of Computer Science Seminar
Speaker: David Wajc
Title: Rounding Dynamic Matchings Against an Adaptive Adversary
Abstract:
Dynamic algorithms address dynamically-evolving datasets, and show how to maintain solutions to problems of interest subject to small updates to the input much faster than re-computing these solutions from scratch after each update. For example, the dynamic matching problem asks us to maintain an approximate maximum matching under edge updates (additions+deletions). For this well-studied problem we know numerous dynamic algorithms, the fastest of which are randomized. Unfortunately, until recently, all these fast randomized algorithms assumed that the updates to the input are generated in advance, rather than adaptively, based on the algorithm's random coin tosses. This assumption, referred to as the oblivious adversary assumption in the literature, rules out the black-box use of these algorithms for speeding up other (dynamic or static) algorithms. In this talk, I will present my recent framework for obtaining (essentially) the same randomized bounds previously known to be obtainable against an oblivious adversary, but this time against an adaptive adversary.
Monday, Jan 18, 2021, 16:00
Foundations of Computer Science Seminar
Speaker: Kira Goldner
Title: Mechanism Design for a Matching Platform: On Multi-Dimensional Gains from Trade Maximization
Abstract:

Consider the algorithm designer for a two-sided market with buyers and sellers, such as Airbnb or Uber. The platform's goal is to design a mechanism that optimizes the "gains from trade": the aggregate gain in the market due to the trades (e.g. in items, labor) that have occurred.

In this talk, I will present the first guarantees on gains from trade for a market that contains a multi-dimensional buyer, who is interested in multiple different kinds of items, and single-dimensional sellers, each of whom is only interested in selling the single item they own. We present a logarithmic approximation to the optimal gains from trade in a parameter of the market. We also provide a logarithmic approximation in the number of sellers to the constrained-optimal gains from trade: optimal subject to standard truthfulness constraints. Our techniques come from a variety of domains, including online algorithms and revenue maximization.

Joint work with Yang Cai, Steven Ma, and Mingfei Zhao.

Monday, Feb 03, 2020, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Ron Rothblum
Title: Local Proofs Approaching the Witness Length
Abstract:

Interactive oracle proofs (IOPs) are a hybrid between interactive proofs and PCPs. In an IOP the prover is allowed to interact with a verifier (like in an interactive proof) by sending relatively long messages to the verifier, who in turn is only allowed to query a few of the bits that were sent (like in a PCP).

For any NP relation for which membership can be decided in polynomial-time and bounded polynomial space (e.g., SAT, Hamiltonicity, Clique, Vertex-Cover, etc.) and for any constant \gamma>0, we construct an IOP with communication complexity (1+\gamma) \cdot n, where n is the original witness length. The number of rounds as well as the number of queries made by the IOP verifier are constant.

Joint work with Noga Ron-Zewi.

Monday, Jan 20, 2020, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Katrina Ligett
Title: A New Analysis of Differential Privacy’s Generalization Guarantees
Abstract:

Many data analysis pipelines are adaptive: the choice of which analysis to run next depends on the outcome of previous analyses. Common examples include variable selection for regression problems and hyper-parameter optimization in large-scale machine learning problems: in both cases, common practice involves repeatedly evaluating a series of models on the same dataset. Unfortunately, this kind of adaptive re-use of data invalidates many traditional methods of avoiding over-fitting and false discovery, and has been blamed in part for the recent flood of non-reproducible findings in the empirical sciences. An exciting line of work beginning with Dwork et al. 2015 establishes the first formal model and first algorithmic results providing a general approach to mitigating the harms of adaptivity, via a connection to the notion of differential privacy. Unfortunately, until now, those results were primarily of information theoretic interest, only beating out the simple approach of gathering fresh data for every computation ("sample-splitting") at the scale of many millions of datapoints.

In this work, we give a new proof of the transfer theorem that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the conditional distribution on datasets induced by the transcript of the interaction is close to its expectation on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to the expectation of the query on the conditional distribution. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the conditional distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the "monitor argument" used to derive high probability bounds in prior work.

An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive "sample-splitting" baseline at dramatically smaller dataset sizes compared to the previous state of the art, bringing techniques from this literature closer to practicality.

Joint work with: Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi (UPenn), and Moshe Shenfeld (HUJI).

This work appeared at ITCS 2020.

Monday, Jan 06, 2020, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Dr. Amir Abboud
Title: Some Connections Between the Conjectures in Fine-Grained Complexity
Abstract:

Fine-grained complexity utilizes a small set of conjectures to derive conditional lower bounds for a large collection of problems. These conjectures concern the time complexity of a few core problems such as k-SAT, Orthogonal Vectors, 3SUM, k-Clique, and Set Cover.  The relationships between these conjectures are poorly understood.

This talk will discuss some connections between the conjectures, including a tight reduction from Weighted-k Clique to Orthogonal Vectors and new (quantum-inspired) findings about the Set Cover Conjecture.

Monday, Dec 23, 2019, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Yael Tauman Kalai
Title: Non-Interactive Publicly Verifiable Delegation with Applications to PPAD Hardness
Abstract:

We construct a delegation scheme for delegating polynomial time computations. Our scheme is publicly verifiable and non-interactive in the common reference string (CRS) model. The soundness of our scheme is based on an efficiently falsifiable decisional assumption on groups with bilinear maps. Prior to this work, publicly verifiable non-interactive delegation schemes were only known under knowledge assumptions (or in the Random Oracle model) or under non-standard assumptions related to obfuscation or multilinear maps.

In addition, our scheme has two desirable features: The proofs are unambiguous, in the sense that it is hard to find two distinct proofs for the same statement, and are updatable in the sense that given a proof for the statement that a Turing machine M transitions from configuration C_0 to C_T in T steps, one can efficiently generate a proof for the statement that M transitions from configuration C_0 to C_{T+1} in T+1 steps.

We show that such a delegation scheme implies PPAD hardness, by following an approach similar to that of Choudhuri et al. (STOC 2019), who obtained PPAD hardness based on an assumption related to the soundness of the Fiat-Shamir paradigm.


This is based on two joint works, both with Omer Paneth and Lisa Yang.

Monday, Dec 16, 2019, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Gil Cohen
Title: Pseudo-random pseudo-distributions
Abstract:

In this talk, we will discuss a new type of pseudo-random object called a "pseudo-random pseudo-distribution". This object was introduced in the context of the BPL vs. L problem, and I will sketch a space-efficient construction of the latter for read-once branching programs that has near-optimal dependence on the error parameter. The talk is a distillation of joint work with Mark Braverman and Sumegha Garg (the paper is available online).

Monday, Dec 02, 2019, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Seffi Naor
Title: Online Algorithms via Projections
Abstract:

We present a new/old approach to the design of online algorithms via Bregman projections. This approach is applicable to a wide range of online problems and we discuss connections to previous work on online primal-dual algorithms. In particular, the k-server problem on trees and HSTs is considered. The projection-based algorithm for this problem turns out to have a competitive ratio that matches some of the recent results given by Bubeck et al. (STOC 2018), whose algorithm uses mirror-descent-based continuous dynamics prescribed via a differential inclusion.

Joint work with Niv Buchbinder, Anupam Gupta, and Marco Molinaro.

Monday, Nov 18, 2019, 11:15
Foundations of Computer Science Seminar, Room 155
Speaker: Uri Stemmer
Title: Privately Learning Thresholds: Closing the Exponential Gap
Abstract:

We study the sample complexity of learning threshold functions under the constraint of differential privacy. Unlike the non-private case, where the sample complexity is independent of the domain size, it turns out that for private learning the sample complexity must depend on the size |X| of the domain X. Our current understanding of this task places its sample complexity somewhere between $\log^*|X|$ and $2^{\log^*|X|}$, where at least three different algorithms are known with sample complexity exponential in $\log^*|X|$. In this work we reduce this gap significantly, and show that the sample complexity is at most polynomial in $\log^*|X|$.
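
The iterated logarithm $\log^*$ that governs these bounds grows extraordinarily slowly, which is what makes the gap between $\log^*|X|$ and $2^{\log^*|X|}$ so significant. A minimal sketch of the function (the name `log_star` is ours, for illustration; it uses integer floor(log2) to stay exact over arbitrarily large domains):

```python
def log_star(x: int) -> int:
    """Iterated (base-2) logarithm: how many times log2 must be applied
    to x before the value drops to at most 1. Uses bit_length, i.e.
    floor(log2), which is exact for arbitrarily large integers."""
    count = 0
    while x > 1:
        x = x.bit_length() - 1  # floor(log2(x))
        count += 1
    return count

# log* grows extremely slowly: it is 4 already at 65536 = 2^16, and only 5
# for the astronomically large 2^65536.
print(log_star(65536), log_star(2 ** 65536))  # 4 5
```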

Joint work with Haim Kaplan, Katrina Ligett, Yishay Mansour, and Moni Naor.

Monday, Jul 08, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Shikha Singh
Title: The Mechanism Design Approach to Interactive Proofs
Abstract:

The model of interactive proofs, introduced nearly two and a half decades ago, is now widely used to design computation-outsourcing protocols. In an interactive proof, an honest party interacts with powerful but strategic provers, to elicit from them the correct answer to a computational question. Classical interactive proofs assume that the provers are adversarial (i.e., they want to mislead the verifier) and cooperative (they work together as a team).

In this talk, I will present my work on a new payment-based interactive-proof system, called rational proofs. In rational proofs, the provers are not adversarial but rational, that is, they want to maximize the payment received from the verifier. Using principles from mechanism design, I will show how these payments can be used to leverage correctness from multiple provers who are either cooperative or non-cooperative in nature. I will also present how the guarantees of rational proofs are related to the soundness and completeness guarantees of classical interactive proofs.

Bio: Shikha Singh is currently an Assistant Professor of Computer Science at Wellesley College and will be joining Williams College as an Assistant Professor in Fall 2019. She obtained her PhD in Computer Science from Stony Brook University and her Integrated MSc. in Mathematics and Computing from IIT Kharagpur. Her broad research interests include algorithmic game theory, algorithms and data structures for big data, and complexity theory.

Monday, Jun 17, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Dana Moshkovitz
Title: Nearly Optimal Derandomization Under Assumptions
Abstract:

Existing proofs that deduce BPP=P from circuit lower bounds convert randomized algorithms to deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms to deterministic algorithms with nearly minimal slowdown. Specifically, assuming exponential lower bounds against non-deterministic circuits we convert randomized algorithms that err rarely to deterministic algorithms with a similar run-time, and general randomized algorithms to deterministic algorithms whose run-time is slower by only a nearly linear factor.
Our results follow from a new connection between pseudo-entropy generators and locally list recoverable codes.

This is joint work with Dean Doron, Justin Oh, and David Zuckerman.

Monday, Jun 10, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Yuval Filmus
Title: Twenty (simple) questions
Abstract:

20 questions is one of the simplest examples of a combinatorial search game: Alice thinks of an English word, and Bob's job is to figure it out by asking Yes/No questions. The worst-case complexity of the game is clearly log n, so to spice things up, we assume that Alice chooses her input according to some distribution known to Bob, and Bob now aims to minimize the expected number of questions.

An optimal solution to this problem was found already in 1952. However, the optimal strategy could ask arbitrarily complex Yes/No questions. We ask what happens when we constrain Bob to asking only "simple" questions, and what happens if Alice is allowed to lie a few times.
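
The 1952 solution is Huffman coding: with arbitrary Yes/No questions allowed, the optimal expected number of questions equals the expected codeword length of a Huffman code for Alice's distribution, computed by the classical merge procedure. A sketch (the function name `expected_questions` is ours, for illustration):

```python
import heapq

def expected_questions(probs):
    """Expected number of Yes/No questions under an optimal strategy with
    unrestricted questions: the expected Huffman codeword length."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        # Merging pushes every outcome under this node one level deeper,
        # so its probability mass is counted once per level of depth.
        total += a + b
        heapq.heappush(heap, a + b)
    return total

# Uniform over 4 words: exactly 2 questions on average.
print(expected_questions([0.25, 0.25, 0.25, 0.25]))  # 2.0
```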

Joint work with Yuval Dagan (MIT), Ariel Gabizon, Daniel Kane (UCSD) and Shay Moran (IAS).


Monday, Jun 03, 2019, 14:30
Foundations of Computer Science Seminar, Room 1
Speaker: Ido Shahaf
Title: Tight Tradeoffs in Searchable Symmetric Encryption
Abstract:

A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). In this talk, we survey recent constructions and matching lower bounds, and discuss their underlying techniques.

Based on joint works with Gilad Asharov, Moni Naor, and Gil Segev.

Monday, May 27, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Rotem Oshman
Title: Distributed Property Testing -- Progress and Challenges
Abstract:

In this talk I will describe some recent work on distributed property testing in networks with bounded bandwidth (the CONGEST model): we have a network of computing nodes communicating over some initially-unknown network graph, where every communication link can carry a bounded number of bits per round. Some simple-looking problems, such as checking whether the network contains a 4-cycle, are known to be very hard in this model, and this motivates us to consider property testing instead of exact solutions.

I will describe distributed property testing algorithms for two problems: subgraph-freeness, where we wish to determine whether the network graph contains some fixed constant-sized subgraph H; and uniformity testing, where every node of the network draws samples from an unknown distribution, and our goal is to determine whether the distribution is uniform or far from uniform. I will also discuss lower bounds.

Monday, May 20, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Guy Rothblum
Title: Gentle Measurement of Quantum States and Differential Privacy
Abstract:

We prove a new connection between gentle measurement (where one wants to measure n quantum states, in a way that damages the states only by a little) and differential privacy (where one wants to query a database about n users, in a way that reveals only a little about any individual user). The connection is bidirectional, though with loss of parameters in going from DP to gentle measurement. Exploiting this connection, we present a new algorithm for approximating the outcomes of many measurements on a collection of quantum states, a task called "shadow tomography". The new algorithm has the advantages of being gentle and online (the measurements can be chosen adaptively).

Joint work with Scott Aaronson.

No prior knowledge about quantum mechanics or computing will be assumed.

Monday, May 13, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Nati Linial
Title: On the large-scale geometry of graphs
Abstract:

A graph is automatically also a metric space, but is there anything interesting to say about such metric spaces? Many fascinating and concrete questions are encapsulated in the more general (and vague) question "to what extent can a finite graph emulate the properties of an infinite regular tree?". We will see how this leads us to investigate expansion in graphs and questions about the large-scale aspects of graph metrics, including girth and diameter and the relations between the two. If time permits (probably not) I may also say a little about the local geometry of graphs.

This talk is based on many collaborations which I have had over the years; among my more recent collaborators are Michael Chapman, Yuval Peled and Yonatan Bilu. The talk requires no particular previous background and should be accessible to a general mathematical audience.

Monday, May 06, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Nathan Keller
Title: On linear threshold functions and local Chernoff inequalities
Abstract:

A linear threshold function (LTF) is a Boolean function f:{-1,1}^n -> {0,1} of the form f(x) = 1_{ \sum a_i x_i > t}, for some fixed coefficients a_i and some threshold t. LTFs play an important role in complexity theory, machine learning, and other fields.
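
As a concrete illustration of the definition, the sketch below evaluates a small LTF on {-1,1}^n and computes its expectation and total influence by brute-force enumeration (the helper names are ours; this is feasible only for tiny n):

```python
import itertools

def ltf(a, t):
    """Linear threshold function x -> 1[sum_i a_i x_i > t] on {-1,1}^n."""
    return lambda x: int(sum(ai * xi for ai, xi in zip(a, x)) > t)

def bias_and_total_influence(a, t):
    """Exact expectation and total influence of the LTF, by enumerating
    all of {-1,1}^n (exponential; an illustrative sketch for small n)."""
    n = len(a)
    f = ltf(a, t)
    points = list(itertools.product((-1, 1), repeat=n))
    mean = sum(f(x) for x in points) / len(points)
    total_inf = 0.0
    for i in range(n):
        # Influence of coordinate i: probability that flipping x_i changes f.
        flips = sum(f(x) != f(x[:i] + (-x[i],) + x[i + 1:]) for x in points)
        total_inf += flips / len(points)
    return mean, total_inf

# Majority on 3 bits (a = (1,1,1), t = 0): expectation 1/2, total influence 3/2.
print(bias_and_total_influence((1, 1, 1), 0))  # (0.5, 1.5)
```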

In this talk we present a new approach that allows us to obtain sharp results on Fourier-theoretic properties of biased LTFs. In particular, we determine the exact asymptotic order of the total influence and of the degree-1 Fourier weight of any biased LTF, in terms of its maximal (normalized) coefficient and its expectation. This provides a sharp generalization of theorems proved by Matulef, O'Donnell, Rubinfeld, and Servedio (in the context of property testing), and settles a conjecture posed by Kalai et al.

Our main tools are 'local' forms of the classical Chernoff inequality, like the following one proved by Devroye and Lugosi (2008): Let {x_i} be independent random variables uniformly distributed in {-1, 1}, and let a_i be nonnegative numbers such that \sum a_i^2 =1. If for some t > 0, we have Pr [\sum a_i x_i > t] = b, then Pr[\sum a_i x_i > t + delta] < b/2 holds for delta < c/ \sqrt {log(1/b)}, where c is a universal constant. Such inequalities seem to be little-known and probably can be useful in other contexts as well.
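
The flavor of such a local inequality can be checked empirically. The sketch below estimates the tail Pr[sum a_i x_i > t] by Monte Carlo and then re-estimates it at t + delta with delta on the order of 1/sqrt(log(1/b)); the constant 0.3 and the particular weights are arbitrary illustrative assumptions, not taken from the result:

```python
import math
import random

rng = random.Random(0)

def tail(a, t, trials=50_000):
    """Monte Carlo estimate of Pr[sum_i a_i x_i > t], x_i uniform in {-1,1}."""
    hits = 0
    for _ in range(trials):
        s = sum(ai if rng.random() < 0.5 else -ai for ai in a)
        if s > t:
            hits += 1
    return hits / trials

# Weights with many distinct partial sums, normalized so sum a_i^2 = 1.
norm = math.sqrt(sum(i * i for i in range(1, 21)))
a = [i / norm for i in range(1, 21)]

b = tail(a, 1.0)                          # b = Pr[sum > t] at t = 1
delta = 0.3 / math.sqrt(math.log(1 / b))  # delta ~ 1/sqrt(log(1/b))
print(b, tail(a, 1.0 + delta))            # the tail shrinks noticeably
```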


Joint work with Ohad Klein.

Monday, Apr 29, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Carmit Hazay
Title: Zero-Knowledge from MPC-in-the-Head: Constructions and Applications
Abstract:

MPC-in-the-head is a novel paradigm introduced in the work of Ishai et al. [IshaiKOS09] that, roughly speaking, allows the design of a zero-knowledge proof system for any NP-relation by relying on any multiparty computation (MPC) protocol in a modular way. On a high level, in this transformation the prover emulates "in its head" an execution of an MPC protocol that securely evaluates the NP-relation on the witness, and commits to the views of the parties induced by this run. The verifier then tests the veracity of the computation by challenging the prover to open (decommit) the views of a subset of the parties. The key insight in the compilation is that the soundness and zero-knowledge properties directly reduce to the robustness and simulation guarantees of the underlying MPC protocol.

Monday, Apr 15, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Noga Ron-Zewi
Title: Improved List Decoding of Algebraic Codes
Abstract:
We show that Folded Reed-Solomon codes achieve list decoding capacity with constant list sizes, independent of the block length. Prior work yielded list sizes that are polynomial in the block length, and relied on elaborate subspace evasive machinery to reduce the list sizes to constant. We further show that multiplicity codes exhibit similar behavior, and use this to obtain capacity achieving locally list decodable codes with query complexity significantly lower than was known before. Based on joint work with Swastik Kopparty, Shubhangi Saraf, and Mary Wootters.

Monday, Apr 08, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Alex Grilo
Title: Stoquastic PCP vs. Randomness
Abstract:

The derandomization of MA, the probabilistic version of NP, is a long-standing open question. In this talk, we connect this problem to a variant of another major problem: the quantum PCP conjecture. Our connection goes through the surprising quantum characterization of MA by Bravyi and Terhal. They proved the MA-completeness of the problem of deciding whether the ground energy of a uniform stoquastic local Hamiltonian is zero or inverse polynomial. We show that the gapped version of this problem, i.e. deciding if a given uniform stoquastic local Hamiltonian is frustration-free or has energy at least some constant, is in NP. Thus, if there exists a gap-amplification procedure for uniform stoquastic local Hamiltonians (in analogy to the gap amplification procedure for constraint satisfaction problems in the original PCP theorem), then MA = NP (and vice versa). Furthermore, if this gap amplification procedure exhibits some additional (natural) properties, then P = RP. This is joint work with Dorit Aharonov.

Monday, Apr 01, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Zihan Tan
Title: Improved Bounds for the Excluded Grid Theorem
Abstract:

We study the Excluded Grid Theorem, a fundamental structural result in graph theory that was proved by Robertson and Seymour in their seminal work on graph minors. The theorem states that there is a function f such that, for every integer g > 0, every graph of treewidth at least f(g) contains the (g x g)-grid as a minor. For every integer g > 0, let f(g) be the smallest value for which the theorem holds. Establishing tight bounds on f(g) is an important graph-theoretic question. Robertson and Seymour showed that f(g) is at least of order g^2 log g. For a long time, the best known upper bounds on f(g) were super-exponential in g. The first polynomial upper bound of f(g) = O(g^98 poly log g) was proved by Chekuri and Chuzhoy. It was later improved to f(g) = O(g^36 poly log g), and then to f(g) = O(g^19 poly log g). In this talk we present our recent work that further improves this bound to f(g) = O(g^9 poly log g) via a simpler proof. Moreover, while there are natural barriers that seem to prevent the previous methods from yielding tight bounds for the theorem, it seems conceivable that the techniques proposed in this work can lead to even tighter bounds on f(g).

Monday, Mar 25, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Yuval Ishai
Title: Homomorphic Secret Sharing
Abstract:

A homomorphic secret-sharing scheme is a secret-sharing scheme that allows locally mapping shares of a secret to compact shares of a function of the secret. The talk will survey the current state of the art on homomorphic secret sharing, covering efficient constructions, applications in cryptography and complexity theory, and open questions.

Monday, Feb 25, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Thatchaphol Saranurak
Title: Expander decomposition: fast static and dynamic algorithms
Abstract:

An (\epsilon,\phi)-expander decomposition of a graph G=(V,E) with m edges is a partition of the vertices into clusters such that (1) each cluster induces a subgraph with conductance at least \phi, and (2) the number of inter-cluster edges is at most \epsilon m. This decomposition has a wide range of applications, including approximation algorithms for Unique Games, fast algorithms for flow and cut problems, and dynamic graph algorithms.
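
To make condition (1) concrete, the following sketch computes the conductance of an induced cluster by brute force over all cuts (exponential time, so only for tiny clusters; the function name is ours, for illustration):

```python
from itertools import combinations

def conductance(adj, cluster):
    """Conductance of the subgraph induced by `cluster`: the minimum, over
    all cuts (S, cluster minus S), of the number of cut edges divided by
    the smaller side's volume. Brute force over all cuts -- tiny clusters
    only. adj: dict mapping each vertex to a set of neighbors."""
    cluster = set(cluster)
    ind = {v: adj[v] & cluster for v in cluster}  # induced adjacency
    vol = {v: len(ind[v]) for v in cluster}       # degrees inside the cluster
    total = sum(vol.values())
    verts = sorted(cluster)
    best = float("inf")
    for r in range(1, len(verts)):
        for S in map(set, combinations(verts, r)):
            cut = sum(len(ind[v] - S) for v in S)  # edges leaving S, each once
            side = sum(vol[v] for v in S)
            small = min(side, total - side)
            if small > 0:
                best = min(best, cut / small)
    return best

# A 4-cycle has conductance 1/2: a balanced cut crosses 2 edges,
# and each side has volume 4.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(conductance(adj, {0, 1, 2, 3}))  # 0.5
```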

I will describe a new algorithm for constructing (~O(\phi),\phi)-expander decomposition in ~O(m/\phi) time. This is the first nearly linear time algorithm when \phi is at least 1/polylog(m), which is the case in most practical settings and theoretical applications. Previous results either take \Omega(m^{1+o(1)}) time, or attain nearly linear time but with a weaker expansion guarantee where each output cluster is guaranteed to be contained inside some unknown expander.

Our technique can be easily extended to the dynamic setting, where the graph undergoes updates. This talk is based on joint work with Di Wang [Saranurak Wang SODA'19].

Monday, Feb 04, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Eylon Yogev
Title: The Power of Distributed Verifiers in Interactive Proofs
Abstract:

We explore the power of interactive proofs with a distributed verifier. In this setting, the verifier consists of n nodes and a graph G that defines their communication pattern. The prover is a single entity that communicates with all nodes by short messages. The goal is to verify that the graph G belongs to some language in a small number of rounds, and with a small communication bound, i.e., proof size.

This interactive model was introduced by Kol, Oshman and Saxena (PODC 2018) as a generalization of non-interactive distributed proofs. They demonstrated the power of interaction in this setting by constructing protocols for problems such as Graph Symmetry and Graph Non-Isomorphism, both of which require proofs of Omega(n^2) bits without interaction.

In this work, we provide a new general framework for distributed interactive proofs that allows one to translate standard interactive protocols (i.e., with a centralized verifier) to ones where the verifier is distributed with a proof size that depends on the computational complexity of the verification algorithm run by the centralized verifier. We show the following:

* Every (centralized) computation performed in time O(n) on a RAM can be translated into a three-round distributed interactive protocol with O(log n) proof size. This implies that many graph problems for sparse graphs have succinct proofs (e.g., testing planarity).

* Every (centralized) computation implemented in small space, or by a uniform NC circuit, can be translated into a distributed protocol: with O(1) rounds and O(log n)-bit proof size in the small-space case, and with polylog(n) rounds and proof size for NC.

* We show that for Graph Non-Isomorphism, one of the striking demonstrations of the power of interaction, there is a 4-round protocol with O(log n) proof size, improving upon the O(n log n) proof size of Kol et al.

* For many problems, we show how to reduce proof size below the seemingly natural barrier of log n. By employing our RAM compiler, we get a 5-round protocol with proof size O(loglog n) for a family of problems including Fixed Automorphism, Clique and Leader Election (for the latter two problems we actually get O(1) proof size).

* Finally, we discuss how to make these proofs non-interactive arguments via random oracles.

Our compilers capture many natural problems and demonstrate the difficulty in showing lower bounds in these regimes.
Joint work with Moni Naor and Merav Parter.

Sunday, Feb 03, 2019, 13:30
Foundations of Computer Science Seminar, Room 155
Speaker: Fermi Ma
Title: New Techniques for Obfuscating Conjunctions
Special Cryptography Seminar
Abstract:

A conjunction over a binary alphabet is a boolean function specified by a length-n pattern of 0's, 1's and wildcards. On input bit strings of length n, the function outputs 1 if the input matches the pattern at all non-wildcard positions. At CRYPTO 2018, Bishop et al. proposed a simple and elegant construction to obfuscate this class of functions by embedding the pattern in the error positions of a noisy Reed-Solomon codeword, and placing the codeword in a group exponent. They prove their scheme achieves a notion of security called "distributional virtual black box" in the generic group model for random conjunctions with at most 0.774n wildcards.
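
The class of functions being obfuscated is easy to state in code. A minimal sketch of the (unobfuscated) conjunction itself, with '*' marking wildcard positions (our own illustrative encoding):

```python
def conjunction(pattern):
    """Conjunction over {0,1}^n, given as a string of '0', '1' and '*'
    (wildcard): f(x) = 1 iff x matches the pattern at every
    non-wildcard position."""
    def f(x):
        assert len(x) == len(pattern)
        return int(all(p == '*' or p == b for p, b in zip(pattern, x)))
    return f

f = conjunction("1*0*")          # fixes positions 1 and 3; the rest are free
print(f("1000"), f("1100"), f("0000"))  # 1 1 0
```
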
In this talk, I'll show how to abstract the Bishop et al. scheme to obtain a significantly more efficient "dual" scheme. In the generic group model, our scheme admits an intuitive proof of security and does not place any restrictions on the number of wildcards.
Next, I'll describe a simple modification to the construction that avoids encoding in a group exponent and is secure under the Learning Parity with Noise (LPN) assumption. At the heart of our security proof is a reduction from standard LPN to LPN with "structured error."

No prior knowledge of the Bishop et al. paper will be assumed.

Monday, Jan 28, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Shiri Chechik
Title: Near-Optimal Approximate Decremental All Pairs Shortest Paths
Abstract:

Computing shortest paths is one of the fundamental problems of graph algorithms. The goal of *dynamic* all pairs shortest paths (APSP) is to maintain shortest path trees from all vertices as the edges of the graph change over time. The algorithm is said to be decremental if it handles only deletions, incremental if it handles only insertions and fully dynamic if it handles both deletions and insertions.

In this talk I will present a near optimal decremental algorithm that maintains approximate all pairs shortest paths.

Monday, Jan 21, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Dana Drachsler-Cohen
Title: Practical Reliability of Systems
Abstract:

Recent years have seen the emergence of new kinds of software including deep learning, programmable computer networks, and blockchains. Unfortunately, these systems have been shown to suffer from critical safety and security errors, hindering their wider adoption. The goal of my research is to develop new automated program verification and synthesis techniques which ensure safety and reliability of these systems.

In this talk, I will start by introducing AI2, the first automated verifier for neural networks able to certify large convolutional models. The key idea behind AI2 is to bridge abstract interpretation and neural networks, enabling a sound over-approximation of a network’s behavior in a scalable manner. I will then briefly discuss DL2, a system which enables clean interaction with deep learning models by allowing users to pose queries in a declarative manner. Finally, I will demonstrate how automated program analysis and synthesis can address key security and reliability challenges in domains such as computer networks and blockchains, preventing severe outages and financial losses.

Bio: Dana Drachsler-Cohen is an ETH Postdoctoral Fellow at the department of Computer Science, ETH Zurich. Her research interests span automated reasoning, program synthesis and machine learning. She obtained her PhD from the Computer Science Department at the Technion in 2017.

Monday, Jan 14, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Chaya Keller
Title: Improved lower and upper bounds on the Hadwiger-Debrunner numbers
Abstract:

A family of sets F is said to satisfy the (p,q)-property if among any p sets in F, some q have a non-empty intersection. Hadwiger and Debrunner (1957) conjectured that for any p ≥ q > d there exists a constant c = c_d(p,q), such that any family of compact convex sets in R^d that satisfies the (p,q)-property can be pierced by at most c points. The classical Helly's Theorem is equivalent to the fact that c_d(p,p)=1 (p > d).
In a celebrated result from 1992, Alon and Kleitman proved the conjecture. However, obtaining sharp bounds on the minimal such c_d(p,q), called `the Hadwiger-Debrunner numbers', is still a major open problem in combinatorial geometry.
In this talk we present improved upper and lower bounds on the Hadwiger-Debrunner numbers, the latter using the hypergraph container method.

Based on joint works with Shakhar Smorodinsky and Gabor Tardos.

Monday, Jan 07, 2019, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Julia Chuzhoy
Title: Almost Polynomial Hardness of Node-Disjoint Paths in Grids
Abstract:

In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G, and a collection of pairs of its vertices, called demand pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices.
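
For intuition, the NDP objective can be computed exactly on tiny instances by brute force over all simple paths. This sketch only illustrates the problem definition; it is exponential and unrelated to the approximation algorithms discussed in the talk (the helper names are ours):

```python
def simple_paths(adj, s, t, path=None):
    """Yield all simple s-t paths (as vertex tuples) in an undirected graph
    given as adjacency lists. Exponential; tiny graphs only."""
    path = path or [s]
    if s == t:
        yield tuple(path)
        return
    for v in adj[s]:
        if v not in path:
            yield from simple_paths(adj, v, t, path + [v])

def max_routable(adj, pairs):
    """Maximum number of demand pairs routable on vertex-disjoint paths,
    by brute force over all path choices (illustrative only)."""
    options = [list(simple_paths(adj, s, t)) for s, t in pairs]
    best = 0

    def rec(i, used, count):
        nonlocal best
        best = max(best, count)
        if i == len(options):
            return
        rec(i + 1, used, count)  # leave pair i unrouted
        for p in options[i]:
            if used.isdisjoint(p):
                rec(i + 1, used | set(p), count + 1)

    rec(0, frozenset(), 0)
    return best

# On the path 0-1-2, the pairs (0,1) and (1,2) both need vertex 1,
# so only one of them can be routed.
print(max_routable({0: [1], 1: [0, 2], 2: [1]}, [(0, 1), (1, 2)]))  # 1
```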

The best current algorithm for NDP achieves an $O(\sqrt{n})$-approximation, while, until recently, the best negative result was a roughly $\Omega(\sqrt{\log n})$-hardness of approximation. Recently, an improved $2^{\Omega(\sqrt{\log n})}$-hardness of approximation for NDP was shown, even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs.

The approximability of NDP in grids has remained a tantalizing open question, with the best upper bound of $\tilde{O}(n^{1/4})$, and the best lower bound of APX-hardness. In this talk we come close to resolving this question, by showing an almost polynomial hardness of approximation for NDP in grid graphs.

Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm.


Joint work with David H.K. Kim and Rachit Nimavat.

Monday, Dec 31, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Omri Weinstein
Title: Static Data Structure Lower Bounds Imply Rigidity
Abstract:

We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t >> log^2(n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+eps)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (t > n^eps) data structure lower bounds against near-maximal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average case reduction for rigidity, which is of independent interest.
Joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard).

Monday, Dec 24, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Liron Cohen
Title: Towards the Next Generation of Proof Assistants: Enhancing the Proofs–as–Programs Paradigm
Abstract:

As software has grown increasingly critical to our society's infrastructure, mechanically-verified software has grown increasingly important, feasible, and prevalent. Proof assistants have seen tremendous growth in recent years because of their success in the mechanical verification of high-value applications in many areas, including cyber security, cyber-physical systems, operating systems, compilers, and microkernels. These proof assistants are built on top of constructive type theory whose computational interpretation is given by the proofs-as-programs paradigm, which establishes a correspondence between formal proofs and computer programs. However, while both proof theory and programming languages have evolved significantly over the past years, the cross-fertilization of the independent new developments in each of these fields has yet to be explored in the context of this paradigm. This naturally gives rise to the following questions: how can modern notions of computation influence and contribute to formal foundations, and how can modern reasoning techniques improve the way we design and reason about programs?

In this talk I first demonstrate how using programming principles that go beyond the standard lambda-calculus, namely state and non-determinism, promotes the specification and verification of modern systems, e.g. distributed systems. I then illustrate the surprising fragility of proof assistants in the presence of such new computational capabilities, and outline my ongoing efforts to develop a more robust foundation. For the converse direction, I show how incorporating modern proof-theoretic techniques offers a more congenial framework for reasoning about hard programming problems and hence facilitates the verification effort.

Monday, Dec 10, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Nir Bitansky. Title: Weak Zero Knowledge Beyond the Black Box Barrier. Abstract:

Zero knowledge protocols are spectacular, allowing one to prove NP statements without revealing anything but their validity. An essential element that enables this wonder is interaction. But how much interaction exactly is needed? This question has long challenged cryptographers and is yet to be settled under standard assumptions. In fact, the question appears to be equally challenging for natural relaxations of the zero knowledge requirement. The difficulty in answering the round complexity question stems from a foundational question in cryptography: what is the power of non-black-box reductions?

In this talk, I will explain this difficulty and present a new non-black-box technique that resolves, under standard assumptions, the round complexity of weak zero knowledge protocols (Dwork-Naor-Reingold-Stockmeyer '98). Specifically, assuming quasipolynomial hardness of the Learning with Errors problem and fully-homomorphic encryption, we construct a two-message protocol, a setting where (full-fledged) zero knowledge is impossible.

The talk will assume no prior knowledge in cryptography. It is based on joint work with Dakshita Khurana and Omer Paneth.

Wednesday, Nov 28, 2018, 14:00
Foundations of Computer Science Seminar, Room 208
Speaker: Nicolas Resch. Title: Lossless dimension expanders via linearized polynomials and subspace designs. Abstract: (Note special time and place.)

For a vector space F^n over a field F, an (η, β)-dimension expander of degree d is a collection of d linear maps Γ_j : F^n → F^n such that for every subspace U of F^n of dimension at most ηn, the image of U under all the maps, ∑_{j=1}^d Γ_j(U), has dimension at least β·dim(U). Over a finite field, a random collection of d = O(1) maps Γ_j offers excellent "lossless" expansion with high probability: β ≈ d for η ≥ Ω(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even expansion factor β = 1 + ε with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list decoding in the rank-metric. Our approach yields the following:

  • Lossless expansion over large fields; more precisely, β ≥ (1−ε)d and η ≥ (1−ε)/d with d = O_ε(1), when |F| ≥ Ω(n).
  • Optimal (up to constant factors) expansion over fields of arbitrarily small polynomial size; more precisely, β ≥ Ω(δd) and η ≥ Ω(1/(δd)) with d = O_δ(1), when |F| ≥ n^δ.
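As a toy illustration of the definition above (working over F_2 purely for concreteness; the construction in the talk lives over larger fields, and the function names here are mine), one can measure the expansion of a given collection of maps directly, encoding vectors as integer bitmasks:

```python
def rank_gf2(vecs):
    """Rank over GF(2) of vectors encoded as integer bitmasks."""
    basis = {}  # pivot position -> reduced basis vector
    r = 0
    for v in vecs:
        while v:
            p = v.bit_length() - 1
            if p in basis:
                v ^= basis[p]
            else:
                basis[p] = v
                r += 1
                break
    return r

def apply_map(rows, v):
    """Apply the linear map whose i-th row (a bitmask) is rows[i]."""
    out = 0
    for i, row in enumerate(rows):
        if bin(row & v).count("1") & 1:  # inner product over GF(2)
            out |= 1 << i
    return out

def expansion(maps, U_basis):
    """dim of the sum of images sum_j Gamma_j(U), which is spanned by
    the images of a basis of U under each map."""
    return rank_gf2([apply_map(rows, u) for rows in maps for u in U_basis])

# Example maps on F_2^8: the identity and a cyclic coordinate shift.
identity = [1 << i for i in range(8)]
shift = [1 << ((i + 1) % 8) for i in range(8)]
```

For instance, `expansion([identity], U_basis)` simply returns dim(U) (β = 1, no expansion), while adding more maps can only increase the dimension of the sum of images.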

Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Ω(1), 1+Ω(1))-dimension expanders of constant degree over all fields. An approach based on “rank condensing via subspace designs” led to dimension expanders with β ≥ Ω(√d) over large finite fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree. Based on joint work with Venkatesan Guruswami and Chaoping Xing.

Monday, Nov 26, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Dana Ron. Title: 2,3,…,k: From approximating the number of edges to approximating the number of k-cliques (with a sublinear number of queries). Abstract:

In this talk I will present an algorithm for approximating the number of k-cliques in a graph, given query access to the graph. This problem was previously studied for the cases of k=2 (edges) and k=3 (triangles). We give an algorithm that works for any k >= 3, and is actually conceptually simpler than the k=3 algorithm. We consider the standard query model for general graphs with (1) degree queries, (2) neighbor queries, and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques. We design an algorithm that outputs a (1+\epsilon)-approximation (with high probability) for C_k, whose expected query complexity and running time are O(\frac{n}{C_k^{1/k}}+\frac{m^{k/2}}{C_k}) \cdot poly(\log n, 1/\epsilon, k).

Hence, the complexity of the algorithm is sublinear in the size of the graph for C_k = \omega(m^{k/2-1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on \log n, 1/\epsilon and k).
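For intuition, the quantity C_k being approximated can be computed exactly by brute force on tiny graphs (a sketch of the problem statement, not of the sublinear algorithm; the function name is mine):

```python
from itertools import combinations

def count_k_cliques(n, edges, k):
    """Exact brute-force count of k-cliques -- the quantity C_k that
    the sublinear algorithm approximates. Exponential in k; only for
    intuition on tiny graphs."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(
        1
        for S in combinations(range(n), k)
        if all(b in adj[a] for a, b in combinations(S, 2))
    )
```

On the complete graph K_4 this returns 6 for k=2 (edges), 4 for k=3 (triangles), and 1 for k=4.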

This is joint work with Talya Eden and C. Seshadhri.

Monday, Nov 19, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Natan Rubin. Title: Hitting Convex Sets with Points. Abstract:

Given an underlying finite point set P in the plane, we seek a small set Q that would hit any convex set that contains at least an Epsilon-fraction of P. Such a set Q is called a weak Epsilon-net. The study of Epsilon-nets is central to Computational and Combinatorial Geometry, and it bears important connections to Statistical Learning Theory, Extremal Combinatorics, Discrete Optimization, and other areas.

It is an outstanding open problem to determine tight asymptotic bounds on weak Epsilon-nets with respect to convex sets. For any underlying point set in the plane we describe such a net whose cardinality is roughly proportional to Epsilon^{-3/2}. This is the first improvement of the over-25-year-old bound of Alon, Barany, Furedi, and Kleitman.

Monday, Nov 12, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Michal Dory. Title: Distributed Approximation of k-edge-connected Subgraphs. Abstract:

In the minimum k-edge-connected spanning subgraph (k-ECSS) problem the goal is to find the minimum weight subgraph resistant to up to k-1 edge failures. This is a central problem in network design, and a natural generalization of the minimum spanning tree (MST) problem. In this talk, I will present fast randomized distributed approximation algorithms for k-ECSS in the CONGEST model.
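To make the problem statement concrete, here is a brute-force baseline for the k = 2 case on tiny instances (my illustrative sketch; the talk's contribution is fast distributed approximation algorithms, not this exhaustive search):

```python
from itertools import combinations

def connected(n, edges):
    """Is the graph on vertices 0..n-1 with these edges connected?"""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def min_2ecss_weight(n, weighted_edges):
    """Brute-force minimum-weight 2-ECSS: the cheapest subgraph that
    stays connected after any single edge failure (k = 2 case)."""
    best = None
    for r in range(len(weighted_edges) + 1):
        for sub in combinations(weighted_edges, r):
            es = [(u, v) for u, v, _ in sub]
            if connected(n, es) and all(
                connected(n, es[:i] + es[i + 1:]) for i in range(len(es))
            ):
                w = sum(c for _, _, c in sub)
                best = w if best is None else min(best, w)
    return best
```

On a unit-weight 4-cycle plus a chord, the optimum is the 4-cycle itself (weight 4): any spanning tree fails under one edge removal.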

Monday, Nov 05, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Moni Naor. Title: White-box vs. black-box search problems: a cryptographic perspective. Abstract:

Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black box, which must be queried to determine whether a given edge exists, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? What if we are assured that the program is small, without being given it explicitly?
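The black-box setting can be made concrete with a small sketch (names are mine): a search procedure that touches the graph only through an edge oracle and counts its queries:

```python
from itertools import combinations

def find_clique_or_is(n, edge_oracle, s):
    """Search for a clique or independent set of size s, accessing the
    graph only through edge queries (the black-box access model).
    Returns (kind, vertex set, number of distinct queries issued)."""
    queries = 0
    cache = {}

    def query(u, v):
        nonlocal queries
        if (u, v) not in cache:
            queries += 1
            cache[(u, v)] = edge_oracle(u, v)
        return cache[(u, v)]

    for S in combinations(range(n), s):
        answers = {query(u, v) for u, v in combinations(S, 2)}
        if len(answers) == 1:  # all edges present, or all absent
            kind = "clique" if answers == {True} else "independent set"
            return kind, S, queries
    return None, None, queries
```

Ramsey's theorem guarantees the search succeeds whenever s is roughly (log n)/2; the interesting question in the talk is how many queries this must cost in the worst case.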

In this talk I will explore recent work on the complexity of search problems with guaranteed solution (the class TFNP) and the tight relationship with cryptographic assumptions and techniques.

Based on joint works with Pavel Hubacek, Ilan Komargodski and Eylon Yogev

Monday, Jul 02, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Ariel Procaccia. Title: Extreme Democracy. Abstract:

Technological advances have changed every aspect of our lives in recent decades, yet, for the most part, the same systems of democratic decision making have been in place for centuries. I will argue that computer scientists can help rethink the practice of democracy, as well as its potential applications. I will focus on three emerging paradigms that go far beyond your run-of-the-mill election: (i) liquid democracy, an approach that allows voters to transitively delegate their votes; (ii) participatory budgeting, whereby residents collectively decide how to spend their local government's budget; and (iii) virtual democracy, which employs instant elections among machine learning models of real voters to address the grand AI challenge of ethical decision making.
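The first paradigm, transitive vote delegation, can be sketched in a few lines (an illustrative toy of mine, assuming one common convention for delegation cycles; not taken from the talk):

```python
def liquid_tally(delegations, direct_votes):
    """Resolve transitive delegations, then tally the direct votes.
    Convention assumed here (one of several possible): a voter whose
    delegation chain hits a cycle, or dangles, abstains."""
    def terminal(v, seen):
        if v in direct_votes:
            return v
        if v in seen or v not in delegations:
            return None  # cycle or dangling delegation -> abstain
        seen.add(v)
        return terminal(delegations[v], seen)

    counts = {}
    for voter in set(delegations) | set(direct_votes):
        t = terminal(voter, set())
        if t is not None:
            choice = direct_votes[t]
            counts[choice] = counts.get(choice, 0) + 1
    return counts
```

For example, if ann and cid both delegate to bob, who votes "yes", bob's vote carries weight 3, while a two-voter delegation cycle contributes nothing.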

Monday, Jun 11, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Roy Schwartz. Title: Trees for Vertex Cuts, Hypergraph Cuts and Minimum Hypergraph Bisection. Abstract:

In the Minimum Hypergraph Bisection problem, the vertex set of a hypergraph has to be partitioned into two parts of equal size so that the number of hyperedges intersecting both parts is minimized.
This problem is a natural generalization of the well-studied Minimum Bisection problem in graphs.
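To fix the problem statement, here is a brute-force solver for tiny instances (illustrative only; the talk concerns approximation algorithms and hardness, not exhaustive search):

```python
from itertools import combinations

def min_hypergraph_bisection(n, hyperedges):
    """Brute-force Minimum Hypergraph Bisection: over all balanced
    2-partitions of vertices 0..n-1 (n even), minimize the number of
    hyperedges intersecting both sides."""
    best = None
    for half in combinations(range(n), n // 2):
        A = set(half)
        cut = sum(
            1
            for e in hyperedges
            if any(v in A for v in e) and any(v not in A for v in e)
        )
        best = cut if best is None else min(best, cut)
    return best
```

When every hyperedge has exactly two vertices this is ordinary Minimum Bisection; e.g. every balanced bisection of a 4-cycle cuts exactly 2 edges.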
We present a sharp distinction between Minimum Bisection in hypergraphs and graphs.
Whereas it is well-known that all bi-criteria approximation algorithms for Minimum Bisection in graphs can be extended to hypergraphs with the exact same guarantees, we prove that this is not the case when considering true (i.e., non-bi-criteria) approximation algorithms.
Specifically, we show that Minimum Hypergraph Bisection admits an $\tilde{\mathcal{O}}(\sqrt{n})$ approximation algorithm.
However, we also show that any $\alpha$-approximation algorithm for Minimum Hypergraph Bisection implies an approximation of $\Omega(\alpha^{-2})$ for Densest $k$-Subgraph.
Thus, assuming the exponential time hypothesis, there is no efficient approximation algorithm for Minimum Hypergraph Bisection with an approximation ratio of $n^{1/poly(\log{\log{n}})}$.

In particular, Minimum Hypergraph Bisection is much harder to approximate than Minimum Bisection in graphs, for which a logarithmic approximation algorithm exists.
If time permits, the problem of constructing trees that are cut sparsifiers for hypergraph and vertex cuts will also be discussed.
While similar trees lie at the heart of powerful algorithms for Minimum Bisection in graphs, we prove that this is not the case for hypergraphs.
Joint work with Harald Räcke and Richard Stotz.

Monday, Jun 04, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Or Sattath. Title: Quantum Tokens for Digital Signatures. Abstract:

The fisherman caught a quantum fish. "Fisherman, please let me go", begged the fish, "and I will grant you three wishes". The fisherman agreed. The fish gave the fisherman a quantum computer, three quantum signing tokens and his classical public key. The fish explained: "to sign your three wishes, use the tokenized signature scheme on this quantum computer, then show your valid signature to the king, who owes me a favor".
The fisherman used one of the signing tokens to sign the document "give me a castle!" and rushed to the palace. The king executed the classical verification algorithm using the fish's public key, and since it was valid, the king complied.
The fisherman's wife wanted to sign ten wishes using their two remaining signing tokens. The fisherman did not want to cheat, and secretly sailed to meet the fish. "Fish, my wife wants to sign ten more wishes". But the fish was not worried: "I have learned quantum cryptography following the previous story (The Fisherman and His Wife by the brothers Grimm). The quantum tokens are consumed during the signing. Your polynomial wife cannot even sign four wishes using the three signing tokens I gave you".
"How does it work?" wondered the fisherman. "Have you heard of quantum money? These are quantum states which can be easily verified but are hard to copy. This tokenized quantum signature scheme extends Aaronson and Christiano's quantum money scheme, and a variant by Zhandry, which is why the signing tokens cannot be copied".
"Does your scheme have additional fancy properties?" the fisherman asked. "Yes, the scheme has other security guarantees: revocability, testability and everlasting security. Furthermore, if you're at sea and your quantum phone has only classical reception, you can use this scheme to transfer the value of the quantum money to shore", said the fish, and swam away.

Joint work with Shalev Ben-David.

Monday, May 28, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Kevin Leyton-Brown. Title: Learning as a Tool for Algorithm Design and Beyond-Worst-Case Analysis. Abstract:

All known algorithms for solving NP-complete problems require exponential time in the worst case; however, these algorithms nevertheless solve many problems of practical importance astoundingly quickly, and are hence relied upon in a broad range of applications. This talk is built around the observation that "Empirical Hardness Models" - statistical models that predict algorithm runtime on novel instances from a given distribution - work surprisingly well. These models can serve as powerful tools for algorithm design, specifically by facilitating automated methods for algorithm design and for constructing algorithm portfolios. They also offer a statistical alternative to beyond-worst-case analysis and a starting point for theoretical investigations.



Tuesday, May 15, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Leonard Schulman. Title: Explicit Binary Tree Codes with Polylogarithmic Size Alphabet. Abstract: (Note special day: Tuesday.)

This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size.

For every constant delta < 1 we give an explicit binary tree code with distance delta and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n).

As part of the analysis, we prove a bound on the number of positive integer roots a real polynomial can have in terms of its sparsity with respect to the Newton basis---a result of independent interest.

Joint work with G. Cohen and B. Haeupler

Monday, May 07, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Yuval Dagan. Title: Detecting Correlations with Little Memory and Communication. Abstract:

We study the problem of identifying correlations in multivariate data, under information constraints:
Either on the amount of memory that can be used by the algorithm, or the amount of communication when the data is distributed across several machines. We prove a tight trade-off between the memory/communication complexity and the sample complexity, implying (for example) that to detect pairwise correlations with optimal sample complexity, the number of required memory/communication bits is at least quadratic in the dimension. Our results substantially improve those of Shamir (2014), which studied a similar question in a much more restricted setting. To the best of our knowledge, these are the first provable sample/memory/communication trade-offs for a practical estimation problem, using standard distributions, and in the natural regime where the memory/communication budget is larger than the size of a single data point. To derive our theorems, we prove a new information-theoretic result, which may be relevant for studying other information-constrained learning problems.
Joint work with Ohad Shamir

Tuesday, May 01, 2018, 14:30
Foundations of Computer Science Seminar, Room Pekeris
Speaker: Igor Shinkar. Title: Probabilistic Checking against Non-Signaling Strategies from Linearity Testing. Abstract: (Note unusual date and room.)

Non-signaling strategies are collections of distributions with certain non-local correlations that have been studied recently in the context of delegation of computation.
In this talk I will discuss a recent result showing that the Hadamard based PCP of [ALMSS'92] is sound against non-signaling strategies. As part of the proof, we study the classical problem of linearity testing [BLR'93] in the setting of non-signaling strategies, and prove that any no-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions.
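The classical BLR test itself is simple to state in code (a sketch of the standard test in the usual setting, not its non-signaling analogue; function names are mine):

```python
import random

def blr_linearity_test(f, n, trials=200, seed=0):
    """BLR linearity test over GF(2)^n: on random x, y check that
    f(x) + f(y) = f(x + y), with addition being XOR. Returns the
    fraction of passing trials; linear functions always pass."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(trials):
        x = rng.randrange(1 << n)
        y = rng.randrange(1 << n)
        passes += (f(x) ^ f(y)) == f(x ^ y)
    return passes / trials

def parity_of_mask(mask):
    """A linear function over GF(2)^n: x -> <mask, x> mod 2."""
    return lambda x: bin(x & mask).count("1") & 1
```

Any function far from linear fails a constant fraction of the random checks; the talk's result shows what surviving the test forces even of non-signaling strategies.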

Joint work with Alessandro Chiesa and Peter Manohar (UC Berkeley).

Monday, Apr 23, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Lior Gishboliner. Title: A Generalized Turan Problem and Its Applications. Abstract:

Our first theorem is a hierarchy theorem for the query complexity of testing graph properties with one-sided error; more precisely, we show that for every sufficiently fast-growing function f from (0,1) to the natural numbers, there is a graph property whose one-sided-error query complexity is precisely f(\Theta(\epsilon)). No result of this type was previously known for any f which is super-polynomial. Goldreich [ECCC 2005] asked to exhibit a graph property whose query complexity is exponential in 1/\epsilon. Our hierarchy theorem partially resolves this problem by exhibiting a property whose one-sided-error query complexity is exponential in 1/\epsilon. We also use our hierarchy theorem in order to resolve a problem raised by Alon and Shapira [STOC 2005] regarding testing relaxed versions of bipartiteness.

Our second theorem states that for any function f there is a graph property whose one-sided-error query complexity is at least f(\epsilon) while its two-sided-error query complexity is only polynomial in 1/\epsilon. This is the first indication of the surprising power that two-sided-error testing algorithms have over one-sided-error ones, even when restricted to properties that are testable with one-sided error. Again, no result of this type was previously known for any f that is super-polynomial.

The above theorems are derived from a graph theoretic result which we think is of independent interest, and might have further applications. Alon and Shikhelman [JCTB 2016] introduced the following generalized Turan problem: for fixed graphs H and T, and an integer n, what is the maximum number of copies of T, denoted by ex(n,T,H), that can appear in an n-vertex H-free graph? This problem received a lot of attention recently, with an emphasis on T = C_3, H=C_{2m+1}. Our third theorem gives tight bounds for ex(n,C_k,C_m) for all the remaining values of k and m.
Joint work with Asaf Shapira.

Monday, Apr 16, 2018, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Iftach Haitner. Title: Computational Two-Party Correlation. Abstract:

We prove a dichotomy theorem for two-party protocols, showing that for every poly-time two-party protocol with single-bit output, at least one of the following holds:

  • The protocol can be used to construct a key-agreement protocol.
  • For every constant ρ > 0, the parties' output is ρ-uncorrelated: let (X, Y, T) denote the parties' outputs and the protocol's transcript, respectively. A protocol is ρ-uncorrelated if there exists an efficient "decorrelator" algorithm Decor that, given a random transcript T, produces two numbers p_A and p_B such that no efficient algorithm can distinguish (U_{p_A}, U_{p_B}, T) from (X, Y, T) with distinguishing advantage larger than ρ, where U_p denotes a biased coin with bias p.

Namely, if the protocol cannot be used to construct key-agreement, then its output distribution (X; Y; T) is trivial: it can be simulated non-interactively by the parties given public randomness (used to sample T). (The precise statement also has qualifiers of the form: "on infinitely many choices of the security parameter").
We use the above characterization to prove that an (α = 24ε²)-correct ε-differentially private symmetric protocol for computing XOR implies the existence of a key-agreement protocol. The above dependency between α and ε is tight, since a Θ(ε²)-correct ε-differentially private protocol for computing XOR is known to exist unconditionally. It also improves, in the (ε, α)-dependency aspect, upon Goyal et al. [ICALP '16], who showed that, for some constant c > 0, a c-correct ε-differentially private protocol for computing XOR implies oblivious transfer. Our result extends to a weaker notion of differential privacy, in which privacy is only required to hold against an external observer. Interestingly, the reductions used for proving the above results are non-black-box.

Joint work with Eran Omri, Kobbi Nissim, Ronen Shaltiel, and Jad Silbak.

Monday, Apr 09, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Shlomi Dolev. Title: Overlay Security and Quantum Safe Public Key Infrastructure. Abstract:

The advancement in quantum computing, with Google, IBM, Microsoft, and Intel competing over the (exponentially growing) number of qubits in the (some already commercial) quantum computers they produce, requires a reexamination of Internet security and the public key infrastructure. The talk will describe the concept of overlay security, together with blockchain-based directories for establishing symmetric keys. When combined with nested Lamport signatures and Merkle trees for digital signatures, the result is a complete, easily implementable architecture with information-theoretically secure communication and hash-based signatures.
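As background, here is a minimal one-time Lamport signature sketch of the kind the proposed hash-based signatures build on (this is the textbook non-nested scheme, not the talk's architecture; function names are mine):

```python
import hashlib
import os

def _h(data):
    return hashlib.sha256(data).digest()

def lamport_keygen():
    """One-time key: two random 32-byte preimages per message bit."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def _bits(msg):
    d = _h(msg)  # sign the 256-bit hash of the message
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, msg):
    """Reveal one preimage per bit of H(msg). The key is consumed:
    signing a second message leaks enough preimages to forge."""
    return [pair[b] for pair, b in zip(sk, _bits(msg))]

def lamport_verify(pk, msg, sig):
    return all(_h(s) == pair[b] for s, pair, b in zip(sig, pk, _bits(msg)))
```

Security rests only on the hash function's one-wayness, which is why such schemes are conjectured to survive quantum adversaries; Merkle trees then turn many one-time keys into one reusable public key.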

Monday, Mar 26, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Gal Shahaf. Title: Oblivious routing via random walks. Abstract:

We present novel oblivious routing algorithms for the splittable and the unsplittable multicommodity flow settings. Our algorithms for both models improve upon the state-of-the-art, in terms of running time and performance with respect to graphs that exhibit good expansion guarantees. As an intermediate step towards the unsplittable setting, we present a novel generalization of Valiant's classical load balancing scheme for packet-switched networks to arbitrary graphs, which is of independent interest. Our approach relies on diffusing traffic throughout the network and then regathering it to its destination, via iterative applications of the random walk operator. Consequently, the performance guarantees of our algorithms are derived from the convergence of the random walk operator to the stationary distribution and are expressed in terms of the spectral gap of the graph (which dominates the mixing time).
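The diffuse-and-regather idea rests on repeated applications of the random walk operator converging to the stationary distribution; a few lines illustrate this (a toy sketch of mine using the lazy walk, which guarantees convergence even on bipartite graphs; this is not the paper's routing algorithm):

```python
def lazy_walk_step(adj, dist):
    """One application of the lazy random-walk operator: with
    probability 1/2 stay put, else move to a uniform random neighbor."""
    nxt = [0.5 * p for p in dist]
    for u, neighbors in enumerate(adj):
        share = 0.5 * dist[u] / len(neighbors)
        for v in neighbors:
            nxt[v] += share
    return nxt

# Diffuse all traffic starting at vertex 0 of a 4-cycle: repeated steps
# converge to the stationary distribution (uniform, since the graph is
# regular), at a rate governed by the spectral gap.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
dist = [1.0, 0.0, 0.0, 0.0]
for _ in range(100):
    dist = lazy_walk_step(adj, dist)
```

The larger the spectral gap, the fewer steps are needed before the traffic distribution is essentially stationary, which is exactly why the performance guarantees are stated in terms of the gap.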

Monday, Mar 19, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: William Hoza. Title: Typically-Correct Derandomization for Small Time and Space. Abstract:

Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) that uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. An immediate corollary is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also discuss additional complexity-theoretic applications of our technique.

Monday, Jan 29, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Robert Krauthgamer. Title: Streaming Symmetric Norms via Measure Concentration. Abstract:

A long line of research studies the space complexity of estimating a norm l(x) in the data-stream model, i.e., when x is the frequency vector of an input stream consisting of insertions and deletions of items of n types. I will focus on norms l (in R^n) that are *symmetric*, meaning that l is invariant under sign-flips and coordinate-permutations, and show that the streaming space complexity is essentially determined by the measure-concentration characteristics of l. These characteristics are known to govern other phenomena in high-dimensional spaces, such as the critical dimension in Dvoretzky's Theorem. 

The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p<=2 and p>2. We also obtain bounds for other norms that are useful in applications. 
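The defining invariance of a symmetric norm is easy to check empirically (an illustrative sketch of mine; a randomized check is necessary but of course not sufficient):

```python
import math
import random

def is_symmetric_norm(norm, dim, trials=100, seed=1):
    """Empirically test invariance under sign flips and coordinate
    permutations -- the defining property of a symmetric norm."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        perm = rng.sample(range(dim), dim)
        y = [rng.choice([-1.0, 1.0]) * x[p] for p in perm]
        if abs(norm(x) - norm(y)) > 1e-9:
            return False
    return True

l1 = lambda v: sum(abs(t) for t in v)
l2 = lambda v: math.sqrt(sum(t * t for t in v))
linf = lambda v: max(abs(t) for t in v)
weighted = lambda v: abs(v[0]) + 2.0 * abs(v[1])  # NOT symmetric
```

All l_p norms pass; a coordinate-weighted norm does not, since permuting coordinates changes its value.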

Joint work with Jaroslaw Blasiok, Vladimir Braverman, Stephen R. Chestnut, and Lin F. Yang.

Monday, Jan 22, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Ofer Grossman. Title: Reproducibility in Randomized Log-space. Abstract:

A curious property of randomized log-space search algorithms is that their outputs are often longer than their workspace. One consequence is that there is no clear way to reproduce the same output when running the algorithm twice on the same input. It is not feasible to store the random bits (or the output) of the previous run in log-space, and using new random bits in another execution can result in a different output. This leads to the question: how can we reproduce the results of a randomized log-space computation of a search problem?

We will give a precise definition of this notion of "reproducibility". Then we will show that every problem in search-RL has a randomized log-space algorithm whose output can be reproduced. Reproducibility can be thought of as an extension of pseudo-determinism. Indeed, for some problems in search-RL we show pseudo-deterministic algorithms whose running times significantly improve on known deterministic algorithms.

Joint work with Yang Liu.

Monday, Jan 15, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Eylon Yogev. Title: Distributed Computing Made Secure: A New Cycle Cover Theorem. Abstract:

In the area of distributed graph algorithms, a number of network entities with local views solve some computational task by exchanging messages with their neighbors. Quite unfortunately, an inherent property of most existing distributed algorithms is that throughout their execution, the nodes learn not only their own output but also quite a lot about the inputs or outputs of many other entities. This leakage of information might be a major obstacle in settings where the output (or input) of an individual in the network is private information (e.g., distributed networks of selfish agents, decentralized digital currencies such as Bitcoin, voting systems).
While a rather unfamiliar notion in the classical distributed setting, secure multi-party computation (MPC) is one of the main themes in the cryptography community. Yet despite extensive work in the area, no existing algorithm fits the framework of classical distributed models, in which there are no assumptions on the graph topology and only messages of bounded size are sent over the edges in each round.

In this work, we introduce a new framework for \emph{secure distributed graph algorithms} and provide the first \emph{general compiler} that takes any "natural" non-secure distributed algorithm that runs in $r$ rounds, and turns it into a secure algorithm that runs in $\widetilde{O}(r \cdot D \cdot poly(\Delta))$ rounds where $\Delta$ is the maximum degree in the graph and $D$ is its diameter. This round complexity is nearly optimal for bounded degree graphs.
The main technical part of our compiler is based on a new cycle cover theorem: We show that the edges of every bridgeless graph $G$ of diameter $D$ can be covered by a collection of cycles such that each cycle is of length $\widetilde{O}(D)$ and each edge of the graph $G$ appears in $\widetilde{O}(1)$ many cycles. This provides the basis for additional combinatorial constructions required by our compiler and might be of independent combinatorial and algorithmic interest.
Joint work with Merav Parter.

Monday, Jan 08, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Omer Paneth. Title: Cryptography Outside the Black Box. Abstract:

Computational problems whose input is a program are central in Cryptography, as well as Complexity, Learning, and Optimization. The nature of such problems crucially depends on the way the program is accessed -- as a black box or explicitly by its implementation.

In which settings can we exploit code to gain an advantage over black-box access? In Cryptography, we explore this question from two opposing perspectives:

Protecting Code: Can we obfuscate a program's code so that its functionality is preserved but it is otherwise unintelligible? Intuitively, such obfuscated code reveals nothing more than black-box access to the program. Obfuscation is, therefore, a powerful tool with numerous applications in software protection and Cryptography.

Exploiting Code: Most security proofs in cryptography consist of a reduction that translates any potential attacker into an algorithm solving some underlying hard problem. While most security reductions only require black-box access to the attacker, for many applications black-box reductions are provably insufficient. Can we exploit the attacker's code to prove security where black-box reductions fail?

In this talk, I will describe new techniques for protecting and exploiting code, taking advantage of the inherent tension between these two tasks. I will also demonstrate applications of these techniques in and beyond cryptography.

Monday, Jan 01, 2018, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Yael Tauman Kalai. Title: Using the PIR Heuristic to Enhance Secrecy. Abstract:

The use of a computational PIR scheme has been instrumental in reducing interaction from interactive proofs, and in converting multi-prover interactive proofs to (single prover) 2-message computationally sound proofs (also known as arguments).

In this talk we will focus on the secrecy guarantees of this transformation.
We show that if we start with an interactive proof which is only *honest-verifier* zero-knowledge, and we use a quasi-poly secure *symmetric* PIR scheme (or a 2-message OT scheme) to reduce interaction, then the resulting 2-message argument is witness indistinguishable, and in the delayed-input setting it is distributional weak zero-knowledge (which implies strong witness indistinguishability and witness hiding in the delayed-input setting). Moreover, under the same assumption (which can be instantiated from the quasi-poly DDH/QR/N'th residuosity assumptions), we construct a two-message argument with (similar) *statistical* secrecy guarantees. For the latter, we apply the PIR heuristic to a computationally sound proof which is honest-verifier statistical zero-knowledge.

This is based on joint works with Abhishek Jain, Dakshita Khurana, Ron Rothblum and Amit Sahai.

Sunday, Dec 31, 2017, 14:30
Foundations of Computer Science Seminar, Room 208
Speaker: Shay Solomon. Title: Dynamic graph matching and related problems. Abstract: (Note unusual day: Sunday, and place: Room 208.)

Graph matching is one of the most well-studied problems in combinatorial optimization, with applications ranging from scheduling and object recognition to numerical analysis and computational chemistry.

Nevertheless, until recently very little was known about this problem in real-life **dynamic networks**, which aim to model the constantly changing physical world.

In the first part of the talk we'll discuss our work on dynamic graph matching, and in the second part we'll highlight our work on a few related problems.

Monday, Dec 25, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Or Meir
Title: Prediction from Partial Information and Hindsight, with Application to Circuit Lower Bounds

Consider a random sequence of n bits that has entropy at least n-k, where k << n. A commonly used observation is that an average coordinate of this random sequence is close to being uniformly distributed, that is, the coordinate "looks random". In this work, we prove a stronger result that says, roughly, that the average coordinate looks random to an adversary that is allowed to query about n/k other coordinates of the sequence, even if the adversary is non-deterministic.
As an application of this result, we prove a new result on depth-3 circuits, which recovers as a direct corollary the known lower bounds for the parity and majority functions, as well as a lower bound on sensitive functions due to Boppana.

Monday, Dec 18, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Avishay Tal
Title: Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity

We present an explicit pseudorandom generator with polylog(n) seed length for read-once constant-width branching programs that can read their $n$ input bits in any order. This improves upon the work of Impagliazzo, Meka, and Zuckerman (FOCS, 2012), where they required seed length $n^{1/2+o(1)}$.

A central ingredient in our work is a bound on the Fourier spectrum of constant-width branching programs, settling a conjecture posed by Reingold, Steinke, and Vadhan (RANDOM, 2013).

Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs.

(Joint work with Eshan Chattopadhyay, Pooya Hatami, and Omer Reingold)


Monday, Dec 11, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Tomer Koren
Title: Interplays between Machine Learning and Optimization
Joint Foundations of Computer Science & Machine Learning and Statistics Seminar

Over the past two decades, machine learning has rapidly evolved and emerged as a highly influential discipline of computer science and engineering. One of the pillars of machine learning is mathematical optimization, and the connection between the two fields has been a primary focus of research. In this talk, I will present two recent works that contribute to this study, focusing on online learning---a central model in machine learning for sequential decision making and learning under uncertainty. In the first part of the talk, I will describe a foundational result concerned with the power of optimization in online learning, and give answer to the question: does there exist a generic and efficient reduction from online learning to black-box optimization? In the second part, I will discuss a recent work that employs online learning techniques to design a new efficient and adaptive preconditioned algorithm for large-scale optimization. Despite employing preconditioning, the algorithm is practical even in modern optimization scenarios such as those arising in training state-of-the-art deep neural networks. I will present the new algorithm along with its theoretical guarantees and demonstrate its performance empirically.

Monday, Dec 04, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Uri Stemmer
Title: Practical Locally Private Heavy Hitters

We present new heavy-hitters algorithms satisfying local-differential-privacy, with optimal or near-optimal worst-case error, running time, and memory. In our algorithms, the server running time is $\tilde O(n)$ and user running time is $\tilde O(1)$, hence improving on the prior state-of-the-art result of Bassily and Smith [STOC 2015] requiring $O(n^{5/2})$ server time and $O(n^{3/2})$ user time. With a typically large number of participants in local algorithms ($n$ in the millions), this reduction in time complexity is crucial for making locally-private heavy-hitters algorithms usable in practice.
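As a minimal illustration of the local model (not the algorithm of this paper), here is Warner's classic randomized response for estimating the frequency of a single bit: each user randomizes locally before reporting, and the server debiases the aggregate. The parameters and names are ours.

```python
import math
import random

def randomize(bit, eps):
    # Local randomizer: report the true bit with probability
    # e^eps / (e^eps + 1), otherwise flip it.
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p else 1 - bit

def estimate(reports, eps):
    # Server-side debiasing: unbiased estimate of the true fraction of 1s.
    p = math.exp(eps) / (math.exp(eps) + 1)
    mean = sum(reports) / len(reports)
    return (mean - (1 - p)) / (2 * p - 1)

random.seed(0)
truth = [1] * 7000 + [0] * 3000               # 70% of users hold a 1
reports = [randomize(b, 1.0) for b in truth]  # all the server ever sees
est = estimate(reports, 1.0)
print(round(est, 2))
```

Each report on its own is eps-differentially private; accuracy comes only from aggregation over many users, which is why the dependence of error and running time on n matters so much in this model.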

Joint work with Raef Bassily, Kobbi Nissim, and Abhradeep Thakurta.

Monday, Nov 27, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Yakov Babichenko
Title: Informational Bounds on Approximate Nash Equilibria

The talk will discuss informational lower bounds on approximate Nash equilibria in two complexity models: query complexity and communication complexity.
For both models we prove an exponential (in the number of players) lower bound on the complexity of computing an ε-Nash equilibrium, for a constant value of the approximation ε.

Monday, Nov 13, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Eric Balkanski
Title: The Adaptive Complexity of Maximizing a Submodular Function

In this paper we study the adaptive complexity of submodular optimization. Informally, the adaptive complexity of a problem is the minimal number of sequential rounds required to achieve a constant factor approximation when polynomially-many queries can be executed in parallel at each round. Adaptivity is a fundamental concept that is heavily studied in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, very little is known about adaptivity in submodular optimization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, to the best of our knowledge, all that is known to date is that the adaptive complexity is between 1 and Ω(n).

Our main result in this paper is a tight characterization showing that the adaptive complexity of maximizing a monotone submodular function under a cardinality constraint is, up to lower-order terms, Θ(log n): we describe an algorithm which requires O(log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3, and we show that no algorithm can achieve an approximation better than O(1/log n) with fewer than O(log n / log log n) rounds. Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any known existing algorithm for submodular maximization.

Importantly, the approximation algorithm is achieved via adaptive sampling and complements a recent line of work on optimization of functions learned from data. In many cases, we do not know the functions we optimize and learn them from labeled samples. Recent results show that no algorithm can obtain a constant factor approximation guarantee using polynomially-many labeled samples as in the PAC and PMAC models, drawn from any distribution. Since learning with non-adaptive samples over any distribution results in a sharp impossibility, we consider learning with adaptive samples, where the learner obtains poly(n) samples drawn from a distribution of her choice in every round. Our result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples, up to lower-order terms, are necessary and sufficient to approximately "learn to optimize" a monotone submodular function under a cardinality constraint.

This is joint work with Yaron Singer.
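For contrast with the O(log n)-round algorithm discussed above, here is a sketch of the fully sequential baseline: the classic greedy for monotone submodular maximization (instantiated for max-cover), which makes one fully adaptive round per chosen element. The example instance is ours.

```python
def greedy_max_cover(sets, k):
    # Classic sequential greedy for monotone submodular maximization
    # under a cardinality constraint, shown here for coverage functions.
    # It uses k adaptive rounds: each choice depends on all previous ones.
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))  # marginal gain
        chosen.append(best)
        covered |= best
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_cover(sets, k=2)
print(len(covered))  # -> 7
```

The abstract's point is precisely that this inherently sequential structure can be compressed: adaptive sampling evaluates polynomially many queries in parallel per round, so only Θ(log n) such rounds are needed.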

Monday, Nov 06, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Scott Aaronson
Title: New Results on Learning and Reconstruction of Quantum States

Given an unknown D-dimensional quantum state rho, as well as M two-outcome measurements E_1,...,E_M, how many copies of rho do we need, if we want to learn the approximate probability that E_i accepts rho for *every* i? In this talk, I'll prove the surprising result -- I didn't believe it myself at first -- that one can achieve this using a number of copies that's polylogarithmic in both M and D. So, e.g., one can learn whether *every* size-n^3 quantum circuit accepts or rejects an n-qubit state, given only poly(n) copies of the state. To prove this will require first surveying previous results on measuring quantum states and succinctly describing them, including my 2004 postselected learning theorem, and my 2006 "Quantum OR Bound" (with an erroneous proof fixed in 2016 by Harrow, Lin, and Montanaro).

As time permits, I'll also discuss new joint work with Xinyi Chen, Elad Hazan, and Ashwin Nayak, which takes my 2006 result on PAC-learnability of quantum states and extends it to the setting of online learning. Here we show that, given a sequence of T two-outcome measurements on an n-qubit state, even if the sequence is chosen adversarially, one can still learn to predict the outcomes of those measurements with total regret O(n) (in the "realizable" case) or O(sqrt(Tn)) (in the "non-realizable" case).

No quantum computing background will be assumed.

Monday, Jul 10, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Rohit Gurjar
Title: Derandomizing Isolation Lemma: a geometric approach
We present a geometric approach towards derandomizing the Isolation Lemma for a given family, i.e., deterministically constructing a weight assignment which ensures a unique minimum-weight set in the family. The idea is to work with a polytope corresponding to the family of sets. In this talk, we present the approach in terms of general polytopes and describe a sufficient condition on the polytope for this approach to work. The approach gives a quasi-polynomially bounded weight assignment. Finally, we show that two specific families -- perfect matchings in bipartite graphs and common base sets of two matroids -- satisfy the required condition, and thus we get an isolating weight assignment for these cases. This also puts the two problems in quasi-NC.

Based on joint works with Stephen Fenner and Thomas Thierauf.
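A quick numerical illustration of the randomized lemma that the talk derandomizes: for a family over a ground set of size n, independent uniform weights in {1, ..., 2n} make the minimum-weight set unique with probability at least 1/2. The small family below is an arbitrary example of ours.

```python
import random

def min_weight_sets(family, weights):
    # Return all sets of minimum total weight under the given weights.
    w = lambda s: sum(weights[e] for e in s)
    m = min(w(s) for s in family)
    return [s for s in family if w(s) == m]

random.seed(1)
ground = range(6)
family = [frozenset(s) for s in ([0, 1], [1, 2], [2, 3], [0, 3], [4, 5], [0, 5])]
trials, unique = 1000, 0
for _ in range(trials):
    # Isolation Lemma regime: uniform weights in {1, ..., 2n}.
    weights = {e: random.randint(1, 2 * len(ground)) for e in ground}
    if len(min_weight_sets(family, weights)) == 1:
        unique += 1
print(unique / trials)  # empirically well above the 1/2 guarantee
```

The talk's contribution is to replace the random weights by a deterministic, quasi-polynomially bounded assignment for specific families such as bipartite perfect matchings.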
Monday, Jul 03, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Omri Weinstein
Title: Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds

We prove the first super-logarithmic lower bounds on the cell probe complexity of dynamic *boolean* (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds. We introduce a new technique and use it to prove a ~log^{1.5}(n) lower bound on the operational time of a wide range of boolean data structure problems, most notably, on the query time of dynamic range counting *over F_2* ([Patrascu07]). Proving a super-logarithmic lower bound for this problem was explicitly posed as one of five important open problems in the late Mihai Patrascu's obituary [Tho13]. This result also implies the first super-logarithmic lower bound for the classical 2D range counting problem, one of the most fundamental data structure problems in computational geometry and spatial databases. We derive similar lower bounds for boolean versions of dynamic polynomial evaluation and 2D "rectangle stabbing", and for the (non-boolean) problems of range selection and range median.

Our technical centerpiece is a new way of "weakly" simulating dynamic data structures using efficient *one-way* communication protocols with small advantage over random guessing. This simulation involves a surprising excursion to low-degree (Chebyshev) polynomials which may be of independent interest, and offers an entirely new algorithmic angle on the "cell sampling" method of Panigrahy et al. [PTW10].

Joint work with Kasper Green Larsen and Huacheng Yu.

Monday, Jun 26, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Ilan Cohen
Title: Randomized Online Matching in Regular Graphs

We study the classic bipartite matching problem in the online setting, first introduced in the seminal work of Karp, Vazirani and Vazirani. Specifically, we consider the problem for the well-studied class of regular graphs. Matching in this class of graphs was studied extensively in the offline setting; in the online setting, an optimal deterministic algorithm was known, as well as efficient algorithms under stochastic input assumptions. In this work, we present a novel randomized algorithm with competitive ratio tending to one on this family of graphs, under adversarial arrival order. Our main contribution is a novel algorithm which achieves competitive ratio 1-O(\sqrt{\log d}/\sqrt{d}) in expectation on d-regular graphs. In contrast, we show that all previously-known online algorithms, such as the worst-case optimal RANKING algorithm of Karp et al., are restricted to a competitive ratio strictly bounded away from one, even as d grows. Moreover, we show that the convergence rate of our algorithm's competitive ratio to one is nearly tight, as no algorithm achieves competitive ratio better than 1-O(1/\sqrt{d}). Finally, we show that our algorithm yields a similar competitive ratio with high probability, as well as guaranteeing each offline vertex a probability of being matched tending to one.
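For reference, a sketch of the classic RANKING algorithm of Karp, Vazirani and Vazirani mentioned above (the example graph and its encoding are ours, not from the talk):

```python
import random

def ranking(offline, online_arrivals, neighbors):
    # RANKING: fix one random priority order over the offline vertices,
    # then match each arriving online vertex to its highest-priority
    # unmatched neighbor. Guarantees a (1 - 1/e)-competitive matching
    # in expectation on general bipartite graphs.
    priority = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched = {}  # online vertex -> offline vertex
    for u in online_arrivals:
        free = [v for v in neighbors[u] if v not in matched.values()]
        if free:
            matched[u] = min(free, key=lambda v: priority[v])
    return matched

random.seed(0)
neighbors = {'a': [1, 2], 'b': [1, 3], 'c': [2, 3]}  # a 2-regular bipartite graph
m = ranking([1, 2, 3], ['a', 'b', 'c'], neighbors)
print(len(m))  # size of the matching found
```

Since RANKING always produces a maximal matching, it matches at least half the optimum; the talk's algorithm exploits d-regularity to push the ratio all the way to 1-O(\sqrt{\log d}/\sqrt{d}).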

Thursday, Jun 15, 2017, 11:00
Foundations of Computer Science Seminar, Room 141
Speaker: Ariel Procaccia
Title: Computational Social Choice: For the People
PLEASE NOTE UNUSUAL DAY, TIME AND ROOM

Computational social choice deals with algorithms for aggregating individual preferences or opinions towards collective decisions. AI researchers (including myself) have long argued that such algorithms could play a crucial role in the design and implementation of multiagent systems. However, in the last few years I have come to realize that the "killer app" of computational social choice is helping people -- not software agents -- make joint decisions. I will illustrate this theme through two recent endeavors: a website that offers provably fair solutions to everyday problems, and a service that provides optimization-driven voting methods.

Throughout the talk, I will devote special attention to the theoretical foundations and results that make these services possible.

Monday, May 29, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Amnon Ta-Shma
Title: Almost Optimal eps bias
The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance 1/2-epsilon and rate close to the Gilbert-Varshamov bound, has attracted a lot of attention in recent decades. In this paper we solve the problem almost optimally and show an explicit epsilon-biased set over k bits with support size O(k/epsilon^{2+o(1)}). This improves upon all previous explicit constructions, which were of order k^2/epsilon^2, k/epsilon^3 or (k/epsilon^2)^{5/4}. The result is close to the Gilbert-Varshamov bound, which is O(k/epsilon^2), and the lower bound, which is Omega(k/(epsilon^2 log(1/epsilon))).

The main technical tool we use is bias amplification with the s-wide replacement product. The sum of two independent samples from an epsilon-biased set is epsilon^2-biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing the two samples with an expander, and based on that they suggested a recursive construction that achieves sample size O(k/epsilon^4). We show that amplification with a long random walk over the s-wide replacement product reduces the bias almost optimally.
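The bias-squaring fact used in the abstract is easy to verify numerically: for every fixed nonzero linear test, the bias of the sum (XOR) of two independent samples is exactly the square of the original bias. A brute-force check on a small example set of ours:

```python
from itertools import product

def bias(S, k):
    # Bias of the multiset S in {0,1}^k: the max over nonzero linear
    # tests a of |E_{x in S} (-1)^{<a, x>}|.
    best = 0.0
    for a in product([0, 1], repeat=k):
        if not any(a):
            continue
        total = sum((-1) ** sum(ai & xi for ai, xi in zip(a, x)) for x in S)
        best = max(best, abs(total) / len(S))
    return best

def xor_pairs(S):
    # Distribution of the XOR of two independent samples from S.
    return [tuple(x ^ y for x, y in zip(u, v)) for u in S for v in S]

k = 4
S = [(0,0,0,0), (1,1,0,1), (0,1,1,1), (1,0,1,0), (1,1,1,0), (0,0,1,1)]
b1, b2 = bias(S, k), bias(xor_pairs(S), k)
print(b1, b2)  # b2 equals b1 squared
```

The expensive part of the construction is doing this amplification *economically*: sampling the pair with an expander walk, and ultimately with the s-wide replacement product, rather than with two fully independent samples.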
Thursday, May 25, 2017, 14:30
Foundations of Computer Science Seminar, Room 208
Speaker: Swastik Kopparty
Title: Locally testable and locally correctable codes approaching the Gilbert-Varshamov bound
PLEASE NOTE UNUSUAL DAY AND ROOM

We show that there exist binary locally testable codes (for all rates) and locally correctable codes (for low rates) with rate and distance approaching the Gilbert-Varshamov bound (which is the best rate-distance tradeoff known for general binary error-correcting codes). Our constructions use a number of ingredients: Thommesen's random concatenation technique, the Guruswami-Sudan-Indyk strategy for list-decoding concatenated codes, the Alon-Edmonds-Luby distance amplification method, and the local list-decodability and local testability of Reed-Muller codes. Interestingly, this seems to be the first time that local testability is used in the construction of locally correctable codes.

Joint work with Sivakanth Gopi, Rafael Oliveira, Noga Ron-Zewi and Shubhangi Saraf 

Monday, May 15, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Daniel Keren
Title: Monitoring Properties of Large, Distributed, Dynamic Graphs

Graphs that are prevalent in current applications (the Internet, Facebook etc.) are not only very large and highly dynamic, but also distributed between many servers, none of which sees the graph in its entirety. The distributed monitoring problem deals with the question of imposing conditions on the local graphs, such that as long as they hold, it is guaranteed that some desired property holds for the global graph.
While defining local conditions for linear properties (e.g. average degree) is relatively easy, they are more difficult to derive for non-linear functions over the graph. We propose a solution and a general definition of solution optimality, and demonstrate how to apply it to two important graph properties -- spectral gap and number of triangles. We also define an absolute lower bound on the communication overhead for distributed monitoring, and compare our algorithm to it, with good results. Performance improves as the graph becomes larger and denser -- that is, when distributing it is more important.

Monday, May 08, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Amos Korman
Title: From Ants to Query Complexity

I will talk about my recent adventures with ants. Together with biologists we study P. longicornis ants as they collaboratively transport a large food item to their nest. This collective navigation process is guided by pheromones which are laid by individual ants. Using a new methodology to detect scent marks, we identify a new kind of ant trail characterized by very short and dynamic pheromone markings and highly stochastic navigation response to them. We argue that such a trail can be highly beneficial in conditions in which knowledge of individual ants regarding the underlying topological structure is unreliable. This gives rise to a new theoretical search model on graphs under unreliable guiding instructions, which is of independent computational interest. To illustrate the model, imagine driving a car in an unknown country that is in the aftermath of a major hurricane which has randomly flipped a certain small fraction of the road-signs. Under such conditions of unreliability, how can you still reach your destination fast? I will discuss the limits of unreliability that allow for efficient navigation. In trees, for example, there is a phase transition phenomenon that occurs roughly around the inverse of the square root of the maximal degree. That is, if the noise is above this threshold then no algorithm can avoid spending time exponential in the original distance to find the target, while below the threshold we identify an optimal, almost linear, walking algorithm. Finally, I will discuss algorithms that under such a noisy model aim to minimize the number of queries to find a target (rather than the number of moves).

This talk is based on joint works with biologists from the Weizmann Institute: Ofer Feinerman, Udi Fonio, and others, and with CS researchers: Lucas Boczkowski, Adrian Kosowski, and Yoav Rodeh.


Monday, Apr 24, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Luca Trevisan
Title: Some simple distributed network processes
We will describe network processes in which, at each step, each node communicates with its neighbors, or a random subset of neighbors, and updates its state to be "more like" the state of the neighbors.

In a discrete setting, where there is a finite set of possible states, each node updates to the state held by a plurality of sampled neighbors. Here we show that, in a complete communication network, there is quick convergence to a consensus, regardless of the initial configuration and even in the presence of adversarial faults. If the set of possible states is ordered, and nodes update to the median of neighbors, convergence was known to be even faster, but less robust to adversarial tampering.

In a continuous setting, each node holds a bounded real number, and it updates to the average of sampled neighbors. Here we show that if the graph has a clustered structure, such as the one generated by the stochastic block model, the nodes can identify the cluster they belong to based on the evolution of the local state. This holds even in an asynchronous model in which only two nodes are active at a time, and the study of the latter setting leads to interesting questions about the concentration of the product of iid random matrices.

(Based on joint work with Luca Becchetti, Andrea Clementi, Pasin Manurangsi, Emanuele Natale, Francesco Pasquale and Prasad Raghavendra.)
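A toy simulation of the discrete dynamics described above: 3-sample plurality updating on the complete graph, with two opinions. The population size, sample size, and initial split are arbitrary choices of ours.

```python
import random
from collections import Counter

def plurality_step(state, sample_size=3):
    # One synchronous round on the complete graph: every node adopts the
    # most frequent opinion among sample_size uniformly sampled nodes.
    n = len(state)
    new = []
    for _ in range(n):
        sample = [state[random.randrange(n)] for _ in range(sample_size)]
        new.append(Counter(sample).most_common(1)[0][0])
    return new

random.seed(0)
state = [0] * 60 + [1] * 40        # 100 nodes, 60/40 split of opinions
rounds = 0
while len(set(state)) > 1 and rounds < 100:
    state = plurality_step(state)
    rounds += 1
print(len(set(state)), rounds)     # consensus, and how fast it arrived
```

With an initial bias, the dynamics typically collapse to consensus in a handful of rounds, matching the "quick convergence" phenomenon the talk analyzes rigorously (including under adversarial faults).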
Monday, Apr 03, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Udi Wieder
Title: Circuit based PSI via Cuckoo Hashing

While there has been a lot of progress in designing efficient custom protocols for computing Private Set Intersection (PSI), there has been less research on using generic MPC protocols for this task. However, there are many variants of the set intersection functionality which seem hard to compute with existing custom protocols and are easy to compute with generic MPC based solutions (for example, comparing the cardinality of the intersection with a threshold, or measuring ad conversion rates). Generic protocols work over circuits which compute the intersection. For sets of size n, the best known circuit constructions compute O(n log n) comparisons.

In this work we propose new circuit-based protocols for computing variants of the intersection, with circuits computing only O(n) comparisons. Our constructions are based on a new variant of Cuckoo hashing in two dimensions. We employ several optimizations and determine experimentally the required sizes of tables and circuits, and measure the runtime, showing that our protocol is more efficient in concrete terms than existing constructions. The proof technique is new and can be generalized to analyzing simple Cuckoo hashing as well as new variants.
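For background, a sketch of standard (one-dimensional) Cuckoo hashing, the data structure the work generalizes to two dimensions; the table size, hash choice, and key range here are ours:

```python
class CuckooHash:
    # Standard two-table Cuckoo hashing: every key has one candidate
    # slot per table, so lookups probe exactly two cells.
    def __init__(self, size=31):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _slot(self, key, i):
        return hash((i, key)) % self.size

    def insert(self, key, max_kicks=50):
        i = 0
        for _ in range(max_kicks):
            s = self._slot(key, i)
            if self.tables[i][s] is None:
                self.tables[i][s] = key
                return True
            # Slot taken: evict the occupant, which moves to its other table.
            self.tables[i][s], key = key, self.tables[i][s]
            i = 1 - i
        return False  # likely a cycle; real implementations rehash here

    def lookup(self, key):
        return any(self.tables[i][self._slot(key, i)] == key for i in (0, 1))

h = CuckooHash()
ok = all(h.insert(k) for k in range(10))
print(ok and all(h.lookup(k) for k in range(10)))
```

The constant number of probes per lookup is what makes Cuckoo-style hashing attractive inside circuits: the circuit only needs to compare each element against a constant number of table cells, which is how the O(n)-comparison bound becomes plausible.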

Monday, Mar 27, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Prahladh Harsha
Title: On polynomial approximations to AC0
In this talk, we will discuss some questions related to polynomial approximations of AC0. A classic result due to Tarui (1991) and Beigel, Reingold, and Spielman (1991) states that any AC0 circuit of size s and depth d has an ε-error probabilistic polynomial over the reals of degree at most (log(s/ε))^{O(d)}. We will have a re-look at this construction and show how to improve the bound to (log s)^{O(d)} · log(1/ε), which is much better for small values of ε. As an application of this result, we show that (log s)^{O(d)} · log(1/ε)-wise independence fools AC0, improving on Tal's strengthening of Braverman's theorem that (log(s/ε))^{O(d)}-wise independence fools AC0. Time permitting, we will also discuss some lower bounds on the best polynomial approximations to AC0.

Joint work with Srikanth Srinivasan.

Monday, Feb 06, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Yannai A. Gonczarowski
Title: Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments
We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments, including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments.

Joint work with Noam Nisan.
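The simplest instance of empirical revenue maximization (one bidder, one item, posted price), far simpler than the matroid and public-project environments handled in the work, already illustrates the value-space flavor of the approach: the candidate prices can be restricted to the sample values themselves.

```python
def best_reserve(samples):
    # Empirical revenue maximization for a single bidder and item:
    # posting price p earns p for every sampled valuation >= p, and an
    # optimal empirical price can be taken to be one of the samples.
    return max(samples, key=lambda p: p * sum(v >= p for v in samples))

samples = [1.0, 3.0, 3.5, 4.0, 10.0]
p = best_reserve(samples)
print(p)  # -> 3.0 (revenue 3.0 * 4 = 12 on the sample)
```

The hard part, which the work addresses, is bounding how many samples are needed so that the empirically optimal auction is near-optimal on the true (possibly irregular) distributions, across much richer environments.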
Monday, Jan 30, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Laszlo Babai
Title: Graph Isomorphism in quasipolynomial time

The algorithm referred to in the title builds on Luks's powerful group-theoretic divide-and-conquer method (1980) and addresses the bottleneck situation where Luks's method fails to "divide".
Luks will continue to "conquer" if an alternative method "divides"; we develop such a partitioning technique.
In the talk we shall outline the algorithm and explain in some detail its group theoretic core, the "Unaffected Stabilizers Lemma" and the "Local Certificates" routine.  The Lemma is used to construct, somewhat implausibly, global automorphisms out of local information -- a key step toward the construction of combinatorial structures to which the partitioning method from the previous day's lecture will be applied, providing the required "divide" step.


Monday, Jan 23, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Kira Goldner
Title: The FedEx Problem
Consider the following setting: a customer has a package and is willing to pay up to some value v to ship it, but needs it to be shipped by some deadline d. Given the joint prior distribution from which (v, d) pairs are drawn, we characterize the auction that yields optimal revenue, contributing to the very limited understanding of optimal auctions beyond the single-parameter setting. Our work further demonstrates the importance of 'ironing' in revenue maximization, helping to illustrate why randomization is necessary to achieve optimal revenue. Finally, we strengthen the emerging understanding that duality is useful for both the design and analysis of optimal auctions in multi-parameter settings.

Joint work with Amos Fiat, Anna Karlin, and Elias Koutsoupias.
Sunday, Jan 22, 2017, 12:15
Foundations of Computer Science Seminar, Room 290C
Speaker: Merav Parter
Title: Graph Algorithms for Distributed Networks

I will describe two branches of my work related to algorithms for distributed networks. The main focus will be devoted to Fault-Tolerant (FT) Network Structures.
The undisrupted operation of structures and services is a crucial requirement in modern day communication networks. As the vertices and edges of the network may occasionally fail or malfunction, it is desirable to make those structures robust against failures.
FT Network Structures are low cost highly resilient structures, constructed on top of a given network, that satisfy certain desirable performance requirements concerning, e.g., connectivity, distance or capacity. We will overview some results on fault tolerant graph structures with a special focus on FT Breadth-First-Search.
The second part of the talk will discuss distributed models and algorithms for large-scale networks. Towards the end, we will see some connections between distributed computing and other areas such as EE and Biology.

Monday, Jan 09, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Ofer Grossman
Title: Bipartite Perfect Matching in Pseudo-Deterministic NC

Pseudo-deterministic algorithms are randomized search algorithms that on different executions on the same input, output the same solution with high probability.
We will discuss how pseudo-deterministic algorithms bridge the gap between randomized search and decision problems for problems in P and in NC. Next, we will show a pseudo-deterministic NC algorithm for bipartite matching. Finally, we will show how pseudo-determinism can be used to save on random bits used by classical randomized algorithms, and apply the method to obtain an algorithm for RNC depth-first search using only O(log^2 n) random bits.

This is joint work with Shafi Goldwasser.

Monday, Jan 02, 2017, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Gil Cohen
Title: Recent advances in randomness extractors and their applications
We present recent developments in randomness extractors theory and applications to classical, long-standing, open problems such as Ramsey graphs constructions and privacy amplification protocols. This exciting progress heavily relies on two new pseudo-random primitives we call correlation breakers and independence-preserving mergers, which we discuss.
Monday, Dec 26, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Avi Wigderson
Title: Theory and applications of operator scaling

The Operator Scaling problem asks whether a set of complex matrices can be jointly moved to a certain canonical (isotropic) position. This problem has myriad incarnations: non-commutative algebra, invariant theory, arithmetic complexity, quantum information theory, analytic inequalities and more. We will describe an efficient algorithm solving all these related problems, and explain how their analysis combines ideas from all these areas.
Through these connections, the algorithm can be shown to solve some non-convex optimization problems, some systems of quadratic equations, and some linear programs with exponentially many inequalities -- all these, and the concrete examples we will give, suggest that it might be a powerful algorithmic tool via reductions to these problems.
No special background will be assumed!
Based on two joint works with Ankit Garg, Leonid Gurvits and Rafael Oliveira.
This talk is longer than usual and has a two-hour slot.

Monday, Dec 19, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Shachar Lovett
Title: Robust sensitivity

The sensitivity conjecture is a famous open problem in the theory of boolean functions. Let f be a boolean function defined on the hypercube. The sensitivity of a node x is the number of its neighbours in the hypercube for which f gives the opposite value to the one it gives on x. The sensitivity conjecture speculates that if all nodes have low sensitivity, then the function f must be simple. Concretely, all its Fourier mass is supported on levels with low Hamming weight.
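The quantity in question can be computed by brute force for small functions; the two example functions below are standard ones (not from the talk):

```python
from itertools import product

def sensitivity(f, n):
    # Max over all inputs x in {0,1}^n of the number of coordinates i
    # such that flipping x_i changes the value f(x).
    def sens_at(x):
        return sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
    return max(sens_at(x) for x in product([0, 1], repeat=n))

parity = lambda x: sum(x) % 2     # every flip changes the value everywhere
and_all = lambda x: int(all(x))   # only inputs at or next to all-ones are sensitive
print(sensitivity(parity, 4), sensitivity(and_all, 4))  # -> 4 4
```

Note that both functions attain the maximum sensitivity n, but at very different sets of inputs: parity is maximally sensitive everywhere, while AND is maximally sensitive only at the all-ones input, which is exactly the distinction the robust (average-case) version of the conjecture is sensitive to.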

Recently, Gopalan et al. [CCC 2016] conjectured a robust analogue of the sensitivity conjecture: if most of the nodes have low sensitivity, then most of the Fourier mass is supported on levels with low Hamming weight. They also proved it under the stronger assumption that all nodes have low sensitivity. In this work, we prove this conjecture with nearly tight quantitative bounds.

Joint work with Avishay Tal (IAS) and Jiapeng Zhang (UCSD).


Monday, Dec 12, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Ran Gelles. Title: Optimal Resilience for Short-Circuit Noise in Formulas.

We show an efficient method for converting a logic circuit of gates with fan-out 1 into an equivalent circuit that works even if some fraction of its gates are short-circuited, i.e., their output is short-circuited to one of their inputs. Our conversion can be applied to any circuit with fan-in k >= 2, yielding a resilient circuit whose size is polynomial in the size of the (non-resilient) input circuit.

The resilient circuit gives the correct output as long as less than 1/3 of the gates in any of its input-to-output paths are corrupted. Furthermore, we prove that a resilience level of 1/3 is optimal (maximal) for this type of faulty gate. This fully answers an open question of Kalai et al. (FOCS 2012).

Joint work with Mark Braverman and Michael Yitayew.

Monday, Dec 05, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Dor Minzer. Title: Grassmann Graphs, Vertex Cover and 2-to-2 games.

We discuss recent progress on hardness of 2-to-2 games, with  applications to the inapproximability of the Vertex-Cover problem.
A 2-to-2 game (a variant of Khot's well-known unique games) is defined by a graph with a variable at each node and a constraint of a specific structure on each edge. While in unique games each edge-constraint must be a one-to-one correspondence (i.e., for each assignment to one node there is exactly one assignment to the other node that satisfies the constraint), in 2-to-2 games the correspondence has a "two-to-two" structure.
The goal is to distinguish between instances in which almost all of the edge-constraints can be satisfied simultaneously, and instances in which almost none of them can.
We present a new combinatorial hypothesis regarding Grassmann graphs, and show that it implies that 2-to-2 games are NP-hard *in a certain sense*. As a consequence, the hypothesis implies that it is NP-hard to distinguish between graphs that have an independent set of fractional size (1 - 1/\sqrt{2}) and graphs with no independent set of any constant fractional size. This easily implies that it is NP-hard to approximate the Vertex Cover problem within a factor of \sqrt{2} - o(1).
The talk is mostly based on joint work with Subhash Khot and Muli Safra; we will also mention results from a more recent extension, which is joint work with Irit Dinur, Subhash Khot, Guy Kindler and Muli Safra.

Monday, Nov 28, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Shahar Dobzinski. Title: Computational Efficiency Requires Simple Taxation.

We characterize the communication complexity of truthful mechanisms. Our starting point is the well-known taxation principle, which asserts that every truthful mechanism can be interpreted as follows: every player is presented with a menu that consists of a price for each bundle (the prices depend only on the valuations of the other players), and each player is allocated a bundle that maximizes his profit according to this menu. We define the taxation complexity of a truthful mechanism to be the logarithm of the maximum number of menus that may be presented to a player.

Our main finding is that in general the taxation complexity essentially equals the communication complexity. The proof consists of two main steps. First, we prove that for rich enough domains the taxation complexity is at most the communication complexity. We then show that the taxation complexity is much smaller than the communication complexity only in "pathological" cases and provide a formal description of these extreme cases.

Our approach yields several applications, including strengthening the solution concept with low communication overhead, fast computation of prices, and hardness of approximation by computationally efficient truthful mechanisms.

Sunday, Nov 27, 2016, 12:15
Foundations of Computer Science Seminar, Room 290C
Speaker: Julia Chuzhoy. Title: New Hardness Results for Routing on Disjoint Paths. Please note unusual day/time.

In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(\sqrt{n}), while the best current negative result is a roughly \Omega(\log^{1/2} n)-hardness of approximation. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves a \tilde{O}(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is \tilde{O}(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness.
In this talk we will show that NDP is 2^{\Omega(\sqrt{\log n})}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(\log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 3, and all source vertices lie on the boundary of a single face. We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness-of-approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face.
This is joint work with David H.K. Kim and Rachit Nimavat.

Monday, Nov 21, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Adi Shamir. Title: Memory-Efficient Algorithms for Finding Needles in Haystacks.

One of the most common tasks in cryptography and cryptanalysis is to find some interesting event (a needle) in an exponentially large collection (haystack) of N=2^n possible events, or to demonstrate that no such event is likely to exist. In particular, we are interested in finding needles, defined as events that happen with an unusually high probability of p >> 1/N in a haystack which is an almost uniform distribution over N possible events. When the search algorithm can only sample values from this distribution, the best known time/memory tradeoff for finding such an event requires O(1/(Mp^2)) time given O(M) memory.

In this talk I will describe much faster needle-searching algorithms in the common cryptographic setting in which the distribution is defined by applying some deterministic function f to random inputs. Such a distribution can be modeled by a random directed graph with N vertices in which almost all the vertices have O(1) predecessors, while the vertex we are looking for has an unusually large number of O(pN) predecessors. When we are given only a constant amount of memory, we propose a new search methodology which we call NestedRho. As p increases, such random graphs undergo several subtle phase transitions, and thus the log-log dependence of the time complexity T on p becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the O(1/p^2) time complexity of the best previous algorithm in the full range of 1/N < p < 1, and in particular it improves the previous time complexity by a significant factor of \sqrt{N} for any p in the range N^{-0.75} < p < N^{-0.5}. When we are given more memory, we show how to combine the NestedRho technique with the parallel collision search technique in order to further reduce its time complexity. Finally, we show how to apply our new search technique to more complicated distributions with multiple peaks, when we want to find all the peaks whose probabilities are higher than p.

Joint work with Itai Dinur, Orr Dunkelman and Nathan Keller.

Monday, Nov 14, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Benny Applebaum. Title: Algebraic Attacks against Random Local Functions and Their Countermeasures.

Suppose that you have n truly random bits X=(X1,...,Xn) and you wish to use them to generate m >> n pseudorandom bits Y=(Y1,...,Ym) using a local mapping, i.e., each Yi should depend on at most d=O(1) bits of X. In the polynomial regime of m=n^s, s>1, the only known solution, originating from (Goldreich, ECCC 2000), is based on Random Local Functions: compute Yi by applying some fixed (public) d-ary predicate P to a random (public) tuple of distinct inputs. In this talk, we will try to understand, for any value of s, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate.
Based on joint work with Shachar Lovett.

Monday, Oct 31, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Guy Rothblum. Title: Constant-Round Interactive Proofs for Delegating Computation.

Interactive proofs have had a dramatic impact on Complexity Theory and Cryptography. The celebrated IP=PSPACE Theorem [LFKN92,Shamir92] allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a polynomial number of communication rounds. This talk will focus on studying the power of more efficient interactive proof systems.

Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space, there exists an interactive proof that satisfies the following strict efficiency requirements:

(1) The honest prover runs in polynomial time.

(2) The verifier runs in almost linear time (and under some conditions even sublinear time).

(3) The interaction consists of only a constant number of communication rounds.

To obtain this result, we introduce and study several new notions for interactive proofs, which may be of independent interest. One of these notions is that of unambiguous interactive proofs, where the prover has a unique successful strategy. Another notion is that of probabilistically checkable interactive proofs (PCIPs) where the verifier only reads a few bits of the transcript in checking the proof (this could be viewed as an interactive extension of PCPs).

Joint work with Omer Reingold and Ron Rothblum.

Sunday, Jun 26, 2016, 14:30
Foundations of Computer Science Seminar, Room B
Speaker: Yi-Jun Chang. Title: An Exponential Separation Between Randomized and Deterministic Complexity in the LOCAL Model.

Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message:

1. Fast Δ-coloring of trees requires random bits: Building on the recent lower bounds of Brandt et al., we prove that the randomized complexity of Δ-coloring a tree with maximum degree Δ ≥ 55 is Θ(log_Δ log n), whereas its deterministic complexity is Θ(log_Δ n) for any Δ ≥ 3. This also establishes a large separation between the deterministic complexity of Δ-coloring and (Δ+1)-coloring trees.

2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in O(1) + o(log_Δ n) rounds can be transformed to run in O(log* n − log* Δ + 1) rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires Ω(log_Δ n) time deterministically.

3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size n is at least its deterministic complexity on instances of size √(log n). This shows that a deterministic Ω(log_Δ n) lower bound for any problem implies a randomized Ω(log_Δ log n) lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.

Joint work with Tsvi Kopelowitz and Seth Pettie.

Monday, May 30, 2016, 14:30
Foundations of Computer Science Seminar, Room 290C
Speaker: Rotem Oshman. Title: Two Applications of Communication Complexity in Distributed Computing.

In distributed systems, communication between the participants in the computation is usually the most expensive part of the computation. Theoretical models of distributed systems usually reflect this by neglecting the cost of local computation, and charging only for messages sent between the participants; in particular, we usually assume that the computation proceeds in rounds, and in each round, each participant can send only a limited number of bits. We are interested in characterizing the number of rounds required to perform various tasks.

In this talk I will describe two sets of results. The first concerns the complexity of distributed subgraph detection: we have n servers, each representing a node in an undirected graph, and each server receives as input its adjacent edges in the graph. The goal of the computation is to determine whether the global input graph contains some fixed subgraph. I will describe upper and lower bounds for several classes of subgraphs, through a connection to Turán numbers. The general case remains open.

In the second part of the talk I will describe recent work on multi-party number-in-hand communication and information complexity, and show tight upper and lower bounds for set disjointness in the shared blackboard model.

Joint work with Mark Braverman, Andrew Drucker and Fabian Kuhn.

Monday, May 23, 2016, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Stephen Chestnut. Title: Beating CountSketch for heavy hitters in insertion streams.

The task of finding heavy hitters is one of the best-known and most well-studied problems in the area of data streams. In a sense, the strongest guarantee available is the L2 guarantee, which requires finding all items that occur at least eps*||f||_2 times in the stream, where the i-th coordinate of the vector f is the number of occurrences of i in the stream. The first algorithm to achieve the L2 guarantee was the CountSketch (Charikar, Chen, and Farach-Colton, ICALP'02), which, for constant eps, requires O(log n) words of memory and O(log n) update time. It is known to be space-optimal if the stream includes deletions.
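As background, the CountSketch itself is simple to sketch. The following toy Python version (my own illustration; the hash construction is purely for demonstration) keeps d rows of w signed counters and estimates a frequency as the median of the item's d signed counters:

```python
import hashlib
from statistics import median

class CountSketch:
    """Toy CountSketch (Charikar-Chen-Farach-Colton): d hash rows of
    w counters; each item gets a bucket and a +/-1 sign per row."""
    def __init__(self, d=5, w=256):
        self.d, self.w = d, w
        self.table = [[0] * w for _ in range(d)]

    def _hash(self, row, item):
        # deterministic hash for reproducibility; bucket from the low
        # bits, sign from a separate bit
        v = int.from_bytes(
            hashlib.blake2b(f"{row}|{item}".encode(), digest_size=8).digest(),
            "big")
        return v % self.w, (1 if (v >> 40) & 1 else -1)

    def update(self, item, count=1):
        for r in range(self.d):
            b, s = self._hash(r, item)
            self.table[r][b] += s * count

    def estimate(self, item):
        ests = []
        for r in range(self.d):
            b, s = self._hash(r, item)
            ests.append(s * self.table[r][b])
        return median(ests)

cs = CountSketch()
for _ in range(1000):
    cs.update("heavy")
for i in range(200):
    cs.update(f"light-{i}")
print(cs.estimate("heavy"))  # close to the true count 1000
```

The median over rows is what kills the noise from light items that collide with the heavy one in some bucket.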

In this talk I will discuss recent improvements that allow us to find L2 heavy hitters in O(1) memory and O(1) update time in insertion only streams.  The improvements rely on a deeper understanding of the AMS sketch (Alon, Matias, and Szegedy STOC'96) and similar sketches and draw on the theory of Gaussian processes.  This talk is based on joint work with Vladimir Braverman, Nikita Ivkin, Jelani Nelson, Zhengyu Wang, and David P. Woodruff in arxiv:1511.00661 and arxiv:1603.00759.

Monday, May 16, 2016, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Ronitt Rubinfeld. Title: Sampling Correctors.

In many situations, sample data is obtained from a noisy or imperfect source. In order to address such corruptions, we propose the methodology of sampling correctors. Such algorithms use structure that the distribution is purported to have in order to make "on-the-fly" corrections to samples drawn from probability distributions. These algorithms may then be used as filters between the noisy data and the end user. We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms, and we show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms, as well as to achieve improved algorithms for those tasks. Warning: this talk contains more questions than answers...

Joint work with Clement Canonne and Themis Gouleakis.

Monday, May 09, 2016, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Ilan Cohen. Title: Serving in the Dark should be done Non-Uniformly.
We study the following balls-and-bins stochastic game between a player and an adversary: there are B bins and a sequence of ball-arrival and extraction events. In an arrival event, a ball is stored in an empty bin chosen by the adversary, and discarded if no bin is empty. In an extraction event, an algorithm selects a bin, clears it, and gains its content.

We are interested in analyzing the gain of an algorithm that serves in the dark, without any feedback at all; i.e., it does not see the sequence, the content of the bins, or even the content of the cleared bins (an oblivious algorithm). We compare that gain to the gain of an optimal, open-eyes strategy that receives the same online sequence, and we name this gain ratio the loss of serving in the dark. The randomized algorithm analyzed previously chooses a bin independently and uniformly at random, which results in a competitive ratio of about 1.69. We show that although no information is ever provided to the algorithm, using a non-uniform probability distribution reduces the competitive ratio. Specifically, we design a 1.55-competitive algorithm and establish a lower bound of 1.5. We also prove a lower bound of 2 against any deterministic algorithm; this matches the performance of the 2-competitive round-robin strategy. Finally, we present an application relating to a prompt mechanism for bounded-capacity auctions.
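A tiny simulation (my own toy, not from the paper) contrasting the previously analyzed uniform oblivious strategy with an open-eyes optimum on one fixed event sequence:

```python
import random

def simulate(events, B=5, trials=2000, seed=1):
    """events: string over {'a', 'e'} (arrival / extraction).
    Returns (expected gain of the oblivious uniform algorithm,
    gain of an omniscient strategy) on that sequence."""
    rng = random.Random(seed)

    # Omniscient: extract a nonempty bin whenever one exists.
    full, opt = 0, 0
    for ev in events:
        if ev == 'a':
            full = min(B, full + 1)   # ball stored if some bin is empty
        elif full > 0:
            full -= 1
            opt += 1

    # Oblivious: clear a uniformly random bin, never sees anything.
    total = 0
    for _ in range(trials):
        bins = [0] * B
        gain = 0
        for ev in events:
            if ev == 'a':
                # adversary may pick any empty bin; against the uniform
                # algorithm all empty bins are symmetric
                for i in range(B):
                    if bins[i] == 0:
                        bins[i] = 1
                        break
            else:
                i = rng.randrange(B)
                gain += bins[i]
                bins[i] = 0
        total += gain
    return total / trials, opt

uni, opt = simulate('a' * 5 + 'e' * 5)
print(opt)        # 5: the open-eyes strategy collects every ball
print(uni < opt)  # True: the blind strategy loses some balls
```

Even on this friendly sequence the blind player repeatedly clears already-empty bins, which is exactly the loss the abstract quantifies.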
Monday, May 02, 2016, 14:30
Foundations of Computer Science Seminar, Room 261 (moved to Room 155)
Speaker: Merav Parter. Title: MST in Log-Star Rounds of Congested Clique.

We present a randomized algorithm that computes a Minimum Spanning Tree (MST) in O(log^* n) rounds, with high probability, in the Congested Clique model of distributed computing. In this model, the input is a graph on n nodes, initially each node knows only its incident edges, and per round each two nodes can exchange O(log n) bits.

Our key technical novelty is an O(log^* n) Graph Connectivity algorithm, the heart of which is a (recursive) forest growth method, based on a combination of two ideas: a sparsity-sensitive sketching aimed at sparse graphs and a random edge sampling aimed at dense graphs.

Our result improves significantly over the $O(\log \log \log n)$ algorithm of Hegeman et al. [PODC 2015] and the $O(\log \log n)$ algorithm of Lotker et al. [SPAA 2003; SICOMP 2005].

Joint work with Mohsen Ghaffari.

Monday, Apr 18, 2016, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Danny Harnik. Title: Estimating the Unseen - from Theory to Practice.

Estimating the number of distinct elements in a dataset by examining only a fraction of the data is known to be a hard problem, both theoretically and in practice.
Our work explores a breakthrough theoretical result of Valiant and Valiant from 2011 that presents a provably accurate method for such estimations.
Our goal is to put this theory into practice for the important task of estimating the deduplication ratio of a very large dataset. However, deploying this technique in a real-world setting runs into significant obstacles.
In the talk I will describe new techniques that help bridge this gap and enable the use of this exciting approach. Our work achieves a major improvement over the current state-of-the-art practical solutions.
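As a toy illustration of why the problem is hard (my own example, not the Valiant-Valiant estimator): the naive "count distinct values in a sample and scale up" estimator can be wildly biased on duplicated data.

```python
import random

def naive_distinct_estimate(data, sample_frac, seed=0):
    """Sample a fraction of the records, count distinct values in the
    sample, and scale up.  This ignores the unseen part of the data."""
    rng = random.Random(seed)
    sample = rng.sample(data, int(len(data) * sample_frac))
    return len(set(sample)) / sample_frac

# 10,000 records but only 100 distinct values (each repeated 100
# times): almost every value shows up in a 10% sample, so the naive
# estimate lands near 100 / 0.1 = 1000, a 10x overestimate of the
# true count of 100.
data = [i // 100 for i in range(10_000)]
print(naive_distinct_estimate(data, 0.1))
```

On fully distinct data the same estimator is exact, so no fixed correction factor can rescue it; that gap is what the unseen-estimation machinery addresses.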

The talk is for a general audience, no prior knowledge is assumed.

Based on joint work  with Dmitry Sotnikov and Ety Khaitzin that appeared at Usenix FAST 2016.

Monday, Apr 04, 2016, 14:30
Foundations of Computer Science Seminar, Room 155
Speaker: Tali Kaufman. Title: Bounded degree high dimensional expanders.

Expander graphs are widely studied, and various methods are known for obtaining bounded-degree expander graphs. Recently, there has been growing interest in understanding combinatorial expansion in higher dimensions (higher-dimensional simplicial complexes). However, bounded-degree combinatorial expanders (random or explicit) were not known until our work.

We present a local-to-global criterion on a complex that implies combinatorial expansion. We use our criterion to present explicit bounded-degree high dimensional expanders. This answers in the affirmative an open question raised by Gromov, who asked whether bounded-degree high dimensional expanders can exist at all.

We expect the emerging theory of high dimensional expansion to have various applications in the theory of computation; thus, one of the goals of this talk is to introduce this concept to the theory community.

Based on joint works with David Kazhdan and Alex Lubotzky, and with Shai Evra.

Monday, Mar 14, 2016, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Yaron Singer. Title: Some limitations and possibilities of data-driven optimization.
As we grow highly dependent on data for making predictions, we translate these predictions into models that help us make informed decisions. But how do the guarantees we have on predictions translate to guarantees on decisions? In many cases, we learn models from sampled data and then aim to use these models to make decisions. This intuitive approach turns out to have non-trivial limitations. In some cases, despite having access to large data sets, the current frameworks we have for learnability do not suffice to guarantee desirable outcomes. In other cases, the learning techniques we have introduce estimation errors which can result in poor outcomes and stark impossibility results. In this talk we will formalize some of these ideas using convex and combinatorial optimization and discuss some possibility and impossibility results of this agenda.
Monday, Feb 15, 2016, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Henry Yuen. Title: Anchoring games for parallel repetition.

Raz's celebrated Parallel Repetition Theorem shows that the probability of simultaneously winning n independent instances of a two-player one-round game G is exponentially small in n when the maximum success probability of G is less than 1. Though the statement is intuitive, the proof is rather nontrivial and has found important applications in hardness of approximation, cryptography, and communication complexity.

There are two major open problems regarding the parallel repetition of games: does an analogue of Raz's theorem hold for (a) games with more than  two players, and (b) games with quantumly entangled players? Extending Raz’s theorem to these settings is a challenging problem for a number of reasons: techniques for attacking direct sum/direct product problems in multiparty settings are lacking, and our understanding of quantum entanglement as an information theoretic resource is quite limited.

In this work, we show how to sidestep these barriers and make progress on the two open problems. We first prove exponential-decay parallel repetition theorems for a class of games we call "anchored games", in both the multiplayer and entangled-player settings. Then, we show how to efficiently transform any game into an equivalent anchored game. Together, our results provide a simple hardness-amplification technique for games in both the classical multiplayer and quantum settings.

Joint work with Mohammad Bavarian and Thomas Vidick.

Monday, Jan 25, 2016, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Uri Stemmer. Title: Algorithmic Stability for Adaptive Data Analysis.

Adaptivity is an important feature of data analysis - the choice of questions to ask about a dataset often depends on previous interactions with the same dataset.  However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn.  Recent work by Dwork et al. (STOC, 2015) initiated the formal study of this problem, and gave the first upper bounds on the achievable generalization error for adaptive data analysis.

The results of Dwork et al. are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that stable algorithms of the kind guaranteed by differential privacy imply low generalization error. We also show that weaker stability guarantees such as bounded KL divergence and total variation distance lead to correspondingly weaker generalization guarantees.

Joint work with Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, and Jonathan Ullman.

Sunday, Jan 24, 2016, 12:30
Foundations of Computer Science Seminar, Room 261
Speaker: Moshe Y. Vardi. Title: The SAT Revolution: Solving, Sampling, and Counting. Please note unusual day and time.

For the past 40 years, computer scientists have generally believed that NP-complete problems are intractable. In particular, Boolean satisfiability (SAT), as a paradigmatic NP-complete problem, has been considered intractable. Over the past 20 years, however, there has been a quiet but dramatic revolution, and very large SAT instances are now solved routinely as part of software and hardware design.

In this talk I will review this amazing development and show that we can leverage SAT solving to accomplish other Boolean reasoning tasks. Counting the number of satisfying truth assignments of a given Boolean formula, or sampling such assignments uniformly at random, are fundamental computational problems in computer science with numerous applications. While the theory of these problems was thoroughly investigated in the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used by the industry offer better scalability, but give up certain correctness guarantees to achieve it. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theories, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
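The hashing idea can be illustrated in miniature (a toy of my own, with brute-force enumeration standing in for the SAT/SMT solver): add random XOR constraints one at a time until the formula becomes unsatisfiable, and read off a power-of-two estimate of the solution count.

```python
import itertools
import random

def brute_count(n, clauses, parities):
    """Count assignments over n variables satisfying every CNF clause
    (lists of signed 1-based literals) and every XOR constraint
    (variable-index list, parity bit)."""
    count = 0
    for bits in itertools.product((0, 1), repeat=n):
        ok_cnf = all(any(bits[abs(l) - 1] == (1 if l > 0 else 0) for l in c)
                     for c in clauses)
        ok_xor = all(sum(bits[i] for i in idx) % 2 == b
                     for idx, b in parities)
        if ok_cnf and ok_xor:
            count += 1
    return count

def hash_count(n, clauses, trials=30, seed=0):
    """Keep adding random XOR constraints; if the formula survives m of
    them before dying, output 2**m.  Median over many trials."""
    rng = random.Random(seed)
    ests = []
    for _ in range(trials):
        parities, m = [], 0
        while True:
            idx = [i for i in range(n) if rng.random() < 0.5]
            parities.append((idx, rng.randrange(2)))
            if brute_count(n, clauses, parities) == 0:
                ests.append(2 ** m)
                break
            m += 1
    ests.sort()
    return ests[len(ests) // 2]

# (x1 or x2) and (x3 or x4) has 9 of 16 satisfying assignments.
clauses = [[1, 2], [3, 4]]
print(brute_count(4, clauses, []))  # 9
print(hash_count(4, clauses))       # coarse power-of-two estimate of 9
```

Each random XOR constraint roughly halves the solution set, which is why the survival depth tracks the logarithm of the count; the real tools replace the brute-force check with a solver call.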

Monday, Jan 04, 2016, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Noga Ron-Zewi. Title: Fast Sublinear Algorithms for Error Detection and Correction.

In today’s world there are huge amounts of data that need to be reliably stored or transmitted. However, some amount of noise or corruption is inevitable. An error-correcting code is a scheme for robustly representing data in the form of a codeword that allows one to detect and correct errors in transmission. Locally-testable and locally-decodable codes are special families of error-correcting codes that admit highly efficient algorithms that detect and correct errors in sublinear time with high probability, probing only a small number of entries of the corrupted codeword. While locally-testable and locally-decodable codes have been intensely studied in the past two decades, in recent years there has been even further incentive for their study, due to their relevance for transmission and storage of massive data and the successful implementation of local codes in cloud storage systems.

In this talk, I will show an exponential improvement on the best-known running time of error detection and correction algorithms for locally-testable and locally-decodable codes. Specifically, I will describe new families of locally-testable codes with constant rate that can detect a constant fraction of errors in time (log n)^{O(log log n)}, and new families of locally-decodable codes of constant rate that can correct a constant fraction of errors in time exp(\sqrt{log n}). Prior to that, the best known running time for such codes was n^{epsilon} (for a constant epsilon), using several quite different constructions.

(Based on joint work with Swastik Kopparty, Or Meir and Shubhangi Saraf)

Sunday, Jan 03, 2016, 16:15
Foundations of Computer Science Seminar, Room A
Speaker: Ilya Razenshteyn. Title: Locality-Sensitive Hashing and Beyond. Please note unusual day/room.

Locality-Sensitive Hashing (LSH) is a powerful technique for the approximate nearest neighbor search (ANN) in high dimensions. In this talk I will present two recent results:

1) I will show a data structure for ANN for the Euclidean distance that provably outperforms the best possible LSH-based data structure. We proceed via designing a good *data-dependent* hash family.

2) I will show a practical and optimal LSH family for the cosine similarity (a.k.a. Euclidean distance on a sphere). It substantially outperforms the celebrated Hyperplane LSH family. Along the way, I will try to debunk two popular myths about LSH:
     * LSH-based data structures consume too much memory and are thus impractical;
     * Optimal LSH constructions are too complicated to be made practical.
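For reference, the Hyperplane LSH family mentioned above is easy to sketch (toy code under my own conventions): each hash bit records which side of a random hyperplane the vector falls on, so vectors at a small angle agree on most bits.

```python
import random

def hyperplane_lsh(v, planes):
    """One bit per random hyperplane: which side of the plane v lies
    on.  Two vectors agree on a bit with probability 1 - angle/pi."""
    return tuple(1 if sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0 else 0
                 for p in planes)

def random_planes(k, dim, seed=0):
    """k random Gaussian hyperplanes in R^dim."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]

planes = random_planes(k=32, dim=8, seed=42)
rng = random.Random(7)
u = [rng.gauss(0, 1) for _ in range(8)]
close = [x + 0.05 * rng.gauss(0, 1) for x in u]  # tiny angle from u
far = [rng.gauss(0, 1) for _ in range(8)]        # independent direction

def matches(a, b):
    """Number of hash bits on which a and b agree."""
    return sum(x == y for x, y in zip(hyperplane_lsh(a, planes),
                                      hyperplane_lsh(b, planes)))

print(matches(u, close))  # most of the 32 bits agree
print(matches(u, far))    # roughly half of the bits agree
```

Bucketing points by their bit strings then makes near neighbors far more likely to collide than random pairs, which is the whole trick.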

The talk is based on two papers: arXiv:1501.01062 (joint with Alexandr Andoni, STOC 2015) and arXiv:1509.02897 (joint with Alexandr Andoni, Piotr Indyk, Thijs Laarhoven and Ludwig Schmidt, NIPS 2015).

Sunday, Jan 03, 2016, 12:30
Foundations of Computer Science Seminar, Room 261
Speaker: Dana Moshkovitz. Title: Amplification and Derandomization Without Slowdown. Please note unusual day/time.

We show techniques for decreasing the error probability of randomized algorithms and for converting randomized algorithms to deterministic (non-uniform) algorithms. Unlike most existing techniques, which involve repetition of the randomized algorithm and hence a slowdown, our techniques produce algorithms whose running time is similar to that of the original randomized algorithms.
The amplification technique is related to a certain stochastic multi-armed bandit problem. The derandomization technique -- which is the main contribution of this work -- points to an intriguing connection between derandomization and sketching/sparsification.
We demonstrate the techniques by showing applications to max-cut on dense graphs, approximate clique, constraint satisfaction problems on dense bipartite graphs, and list decoding to unique decoding for Reed-Muller code.
This is joint work with Ofer Grossman.

Monday, Dec 28, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Amir Abboud. Title: Hardness in P.

The class P attempts to capture the efficiently solvable computational tasks. It is full of practically relevant problems, with varied and fascinating combinatorial structure.
In this talk, I will give an overview of a rapidly growing body of work that seeks a better understanding of the structure within P. Inspired by NP-hardness, the main tool in this approach is the combinatorial reduction. Combining these reductions with a small set of plausible conjectures, we obtain tight lower bounds on the time complexity of many of the most important problems in P.
I will present the most recent landscape of P and the conjectures on which this project is based (e.g. the Strong Exponential Time Hypothesis). I will discuss recent attempts at identifying new conjectures: either more reliable ones, or ones that will get us closer to a full classification of the important problems in P.
Finally, I will highlight a surprising new reduction from Circuit-SAT to natural problems in P like Edit-Distance which proves that minor improvements over the quadratic running time of Edit-Distance are enough to prove major complexity separations.

Monday, Dec 14, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Omri Weinstein  Title: Towards The Deterministic Communication Complexity of Approximate Nash Equilibrium

We study the two-party communication complexity of the geometric problem of finding an approximate Brouwer fixed-point of a composition of two Lipschitz functions g*f, where Alice knows f and Bob knows g.
We prove an essentially tight communication lower bound on this problem, using a novel adaptation of the Raz-McKenzie simulation theorem into geometric settings.
We show that a slightly stronger version of this communication problem would imply an (essentially) tight communication lower bound on the problem of finding an approximate Nash equilibrium in 2-player (and n-player) games, where each player initially knows only his own payoff matrix.

Joint work with Tim Roughgarden.

Monday, Nov 30, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Alan Roytman  Title: Packing Small Vectors

Online d-dimensional vector packing models many settings such as minimizing resources in data centers where jobs have multiple resource requirements (CPU, Memory, etc.).  However, no online d-dimensional vector packing algorithm can achieve a competitive ratio better than d.  Fortunately, in many natural applications, vectors are relatively small, and thus the lower bound does not hold.  For sufficiently small vectors, an O(log d)-competitive algorithm was known.  We improve this to a constant competitive ratio, arbitrarily close to e (where e is the base of the natural logarithm), given that vectors are sufficiently small.

We give improved results for the two-dimensional case.  For arbitrarily small vectors, the First Fit algorithm for two-dimensional vector packing is no better than 2-competitive.  We present a natural family of First Fit variants, and with optimized parameters obtain a competitive ratio of approximately 1.48 for sufficiently small vectors.
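For context, the First Fit baseline is easy to state: place each arriving vector into the first open bin where it still fits in every coordinate. A minimal sketch (my own illustration of the baseline, not the improved variants from the talk):

```python
def first_fit_vector_packing(vectors):
    """Online First Fit for d-dimensional vector packing: put each
    arriving vector into the first bin whose coordinate-wise load stays
    at most 1; open a new bin if none fits."""
    bins = []        # each bin is represented by its current load vector
    assignment = []  # which bin each vector was placed in
    for v in vectors:
        for i, load in enumerate(bins):
            if all(l + x <= 1.0 for l, x in zip(load, v)):
                bins[i] = [l + x for l, x in zip(load, v)]
                assignment.append(i)
                break
        else:
            bins.append(list(v))
            assignment.append(len(bins) - 1)
    return bins, assignment

loads, asg = first_fit_vector_packing(
    [(0.6, 0.2), (0.5, 0.5), (0.3, 0.3), (0.4, 0.1)])
print(asg)  # → [0, 1, 0, 1], i.e. two bins suffice here
```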

We improve upon the 1.48 competitive ratio - not via a First Fit variant - and give a competitive ratio arbitrarily close to 4/3 for packing small, two-dimensional vectors.  We show that no algorithm can achieve better than a 4/3 competitive ratio for two-dimensional vectors, even if one allows the algorithm to split vectors among arbitrarily many bins.

Monday, Nov 23, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: James R. Lee  Title: Lower bounds on the size of semi-definite programs

Much of the theory of mathematical programs for combinatorial optimization can be described in the following way:  A polytope of interest has exponentially many (in the dimension) facets, but can be written as the linear projection of a simpler convex body in a higher-dimensional space.  Simple might mean a polytope with a much smaller number of facets, or a spectrahedron (the intersection of an affine subspace with the PSD cone) of small dimension.  This allows one to optimize linear functionals over the base polytope by instead optimizing a lifted functional over the lifted body.
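As a concrete illustration of a small lift (my own example, not from the talk): the cross-polytope $B_1^n = \{x \in \mathbb{R}^n : \sum_i |x_i| \le 1\}$ has $2^n$ facets, yet it is the linear projection of a polytope with only $2n+1$ facets:

```latex
B_1^n \;=\; \pi_x\Bigl(\bigl\{(x,y)\in\mathbb{R}^{2n} \;:\; -y_i \le x_i \le y_i \;\;(1 \le i \le n),\;\; \textstyle\sum_{i=1}^n y_i \le 1 \bigr\}\Bigr)
```

so a linear functional can be optimized over $B_1^n$ by a linear program with just $2n+1$ constraints in the lifted space. The lower bounds discussed in the talk rule out any analogously small spectrahedral lift for the cut, TSP, and stable set polytopes.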

Unless P=NP, one does not expect certain polytopes--like the convex hull of indicators of traveling salesman tours in a graph--to have a small lift.  But it remained open to prove any non-trivial lower bound on the necessary dimension for a spectrahedral lift, i.e. to prove that semi-definite programs do not yield efficient optimization procedures over these polytopes.

We show that the cut, TSP, and stable set polytopes on n-vertex graphs are not the linear image of a spectrahedron of dimension less than exp(n^c) for some constant c > 0. In the process, many interesting phenomena emerge:  Factorization of operators through the PSD cone, quantum information theory, discrete Fourier analysis, and real algebraic geometry.

This is based on joint work with Prasad Raghavendra and David Steurer.

Monday, Nov 16, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Elette Boyle  Title: Is there an Oblivious RAM Lower Bound?

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e. for every input the observed locations accessed are similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible.

We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky.  While the lower bound is fairly general, including the offline case, when the simulator is given the reads and writes ahead of time, it does assume that the simulator behaves in a "balls and bins" fashion. That is, the simulator must act by shuffling data items around, and is not allowed to have sophisticated encoding of the data.

We prove that for the OFFLINE case, showing a lower bound without the above restriction is related to the size of the circuits for sorting. Our proof is constructive, and uses a bit-slicing approach which manipulates the bit representations of data in the simulation.  This implies that without obtaining yet unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.

Monday, Nov 09, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Tsvi Kopelowitz  Title: Breaking the Variance: Approximating the Hamming Distance in 1/epsilon Time Per Alignment

The algorithmic task of computing the Hamming distance between a given pattern of length m and each location in a text of length n is one of the most fundamental tasks in string algorithms. Unfortunately, there is evidence that for a given text and pattern, one cannot compute the exact Hamming distance at all locations in the text in time polynomially smaller than n\sqrt{m}. Nevertheless, Karloff showed that if one is willing to settle for a 1+\epsilon approximation, then it is possible to solve the problem with high probability in O~(n / \epsilon^2) time.
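The naive baseline this line of work improves upon can be stated in a few lines (an illustration of the problem itself, not the talk's algorithm):

```python
def sliding_hamming(text, pattern):
    """Naively compute the Hamming distance between the pattern and
    every alignment of the text; O(n*m) time."""
    n, m = len(text), len(pattern)
    return [sum(text[i + j] != pattern[j] for j in range(m))
            for i in range(n - m + 1)]

print(sliding_hamming("abcabd", "abd"))  # → [1, 3, 3, 0]
```

The talk's question is how much faster than this (and than exact convolution-based methods) one can go when a 1+epsilon approximation is acceptable.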

Due to related lower bounds for computing the Hamming distance of two strings in the one-way communication complexity model, it is strongly believed that obtaining an algorithm for solving the approximation version cannot be done much faster as a function of 1 / \epsilon. We will show that this belief is false by introducing a new O~(n / \epsilon) time algorithm that succeeds with high probability.

The main idea behind our algorithm, which is common in sparse recovery problems, is to reduce the variance of a specific randomized experiment by (approximately) separating heavy hitters from non-heavy hitters. However, while known sparse recovery techniques work very well on vectors, they do not seem to apply here, where we are dealing with mismatches between pairs of characters. We introduce two main algorithmic ingredients. The first is a new sparse recovery method that applies for pair inputs (such as in our setting). The second is a new construction of hash/projection functions, which allows to count the number of projections that induce mismatches between two characters exponentially faster than brute force. We expect that these algorithmic techniques will be of independent interest.

Monday, Nov 02, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Noah Stephens-Davidowitz  Title: Solving SVP (and CVP) in 2^n Time via Discrete Gaussian Sampling

We show a 2^{n+o(n)}-time algorithm for the Shortest Vector Problem on n-dimensional lattices (improving on the previous best-known algorithm of Micciancio and Voulgaris, which runs in time 4^{n+o(n)}). The algorithm uses the elementary yet powerful observation that, by properly combining samples from a Gaussian distribution over the lattice, we can produce exact samples from a narrower Gaussian distribution on the lattice. We use such a procedure repeatedly to obtain samples from an arbitrarily narrow Gaussian distribution over the lattice, allowing us to find a shortest vector.
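A toy one-dimensional illustration of the combining step (my own sketch over the integer lattice Z; the actual algorithm works over an n-dimensional lattice and its analysis is what makes the result go through): averaging two lattice points that agree mod 2 yields another lattice point, and suitably combined samples follow a narrower discrete Gaussian.

```python
import math
import random

def sample_dgauss(s, rng, cutoff=10):
    """Rejection-sample the discrete Gaussian D_{Z,s} (toy version):
    the weight of x is proportional to exp(-pi * x^2 / s^2)."""
    bound = cutoff * int(math.ceil(s))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < math.exp(-math.pi * x * x / (s * s)):
            return x

def combine(samples):
    """Pair up samples that agree mod 2 and output their averages;
    since x + y is even, (x + y) / 2 is again an integer (a lattice
    point), mimicking the narrowing step of the algorithm."""
    by_parity = {0: [], 1: []}
    out = []
    for x in samples:
        bucket = by_parity[x % 2]
        if bucket:
            out.append((x + bucket.pop()) // 2)
        else:
            bucket.append(x)
    return out

rng = random.Random(1)
xs = [sample_dgauss(3.0, rng) for _ in range(1000)]
ys = combine(xs)  # roughly a sample from a narrower discrete Gaussian
```

Repeating this narrowing is how the algorithm eventually concentrates its samples on the shortest vectors.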

Both the algorithm and the analysis are quite simple in hindsight. (The main technical tool is an identity on Gaussian measures with a simple geometric proof originally due to Riemann.) If time allows and interest merits, we will discuss a more technical variant of this algorithm that solves the Closest Vector Problem (a seemingly harder problem) in the same asymptotic running time.

Based on joint work with Divesh Aggarwal, Daniel Dadush, and Oded Regev.

Monday, Oct 26, 2015, 14:30
Foundations of Computer Science Seminar, Room C
Speaker: Adi Shamir  Title: Post-Snowden Cryptography (please note unusual room)
Recently, a series of unprecedented leaks by Edward Snowden has made it possible for the first time to get a glimpse into the actual capabilities and limitations of the techniques used by the NSA to eavesdrop on computers and other communication devices. In this talk, I will survey some of the things we have learned in the last couple of years, and discuss possible countermeasures against these capabilities.
Wednesday, Oct 07, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Elazar Goldenberg  Title: Low Distortion Embedding from Edit to Hamming Distance using Coupling

The Hamming and the edit metrics are two common notions of distance between pairs of strings x, y lying in the Boolean hypercube. The edit distance between x and y is defined as the minimum number of character insertions, deletions, and bit flips needed to convert x into y, whereas the Hamming distance between x and y is the number of bit flips needed to convert x into y.

In this paper we study a randomized injective embedding of the edit distance into the Hamming distance with a small distortion. This question was studied by Jowhari (ESA 2012) and is mainly motivated by two questions in communication complexity: the document exchange problem and deciding edit distance using a sketching protocol.

We show a randomized embedding with quadratic distortion. Namely, for any $x,y$ satisfying that their edit distance equals  $k$, the Hamming distance between the embedding of $x$ and $y$ is $O(k^2)$ with high probability. This improves over the distortion ratio of $O(\log n \log^* n)$ obtained by Jowhari for small values of $k$. Moreover, the embedding output size is linear in the input size and the embedding can be computed using a single pass over the input.

Wednesday, Sep 16, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Anup Rao  Title: Simplified Separation of Information and Communication
Motivated by attempts to prove stronger lower bounds in communication complexity, a lot of recent work has tried to answer whether the information complexity of functions can be used to bound their communication complexity. Recently Ganor, Kol and Raz gave an example of a boolean function whose information complexity is exponentially smaller than its communication complexity. We simplify their work. I will give a black board talk highlighting the key ideas in the proof. Joint work with Makrand Sinha.
Monday, Jul 20, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Retsef Levi  Title: Provably Near Optimal Algorithms for Dynamic Assortment Problems

Assortment planning is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. Given a set of products that are differentiated by price, quality and possibly other attributes, one has to decide on the subset of products and the respective quantities that will be stocked and offered to heterogeneous customers, who exhibit substitution behavior.

The general problem can be shown to be NP-hard to approximate better than a factor linear in the number of products. In this talk we discuss how, for a range of practically interesting special cases, one can design conceptually simple policies that admit provably near-optimal solutions. The analysis reveals interesting structural properties, including hidden submodularity and decomposition properties.

The talk is based on several papers which are joint work with Ali Aouad, Vineet Goyal and Danny Segev.

Monday, Jul 06, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Shachar Lovett  Title: Structure and Pseudo-Randomness in Coding Theory

The theory of structure and pseudo-randomness has been very influential in several areas of mathematics, such as number theory, graph theory and harmonic analysis. It has also been influential in theoretical computer science, with applications in complexity theory, cryptography and property testing. At a high level, it allows one to analyze arbitrary objects by decomposing them into a "structural" component and a "pseudo-random" component. The pseudo-random component behaves in many ways like random noise, while the structural component has a concise representation which makes it amenable to analysis and algorithmic manipulation.

In this talk, I will describe applications of this paradigm to coding theory. I will describe a new general approach to list decoding, which follows by decomposing an arbitrary received word into a structured received word and pseudo-random noise. This allows for a simplified analysis of the list decoding problem. In particular, I will describe how this approach leads to a resolution of a conjecture by Gopalan, Klivans and Zuckerman [STOC 2008], that the list decoding radius of Reed-Muller codes (in certain regimes) is equal to the minimal distance of the code.

Based on joint work with Abhishek Bhowmick.

Monday, Jun 29, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Amir Yehudayoff  Title: Learning and compression

There are at least two aspects to learning: predicting the outcome of unseen events, and finding simple explanations of observed systems. We shall discuss two formal abstractions of these aspects: PAC learning and sample compression schemes. We shall start with an introduction to these notions, and then discuss the equivalence between them.

Based on a joint project with Shay Moran, Amir Shpilka and Avi Wigderson.

Monday, Jun 22, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Michael Dinitz  Title: Smoothed Analysis of Dynamic Networks

We generalize the technique of smoothed analysis to distributed algorithms in dynamic network models.  Whereas standard smoothed analysis studies the impact of small random perturbations of input values on algorithm performance metrics, dynamic graph smoothed analysis studies the impact of random perturbations of the underlying changing network graph topologies.  Similar to the original application of smoothed analysis, our goal is to study whether known strong lower bounds in dynamic network models are robust or fragile: do they withstand small (random) perturbations, or do such deviations push the graphs far enough from a  precise pathological instance to enable much better performance?  Fragile lower bounds are likely not relevant for real-world deployment, while robust lower bounds represent a true difficulty caused by dynamic behavior.  We apply this technique to three standard dynamic network problems with known strong worst-case lower bounds: random walks, flooding, and aggregation.  We prove that these bounds provide a spectrum of robustness when subjected to smoothing---some are extremely fragile (random walks), some are moderately fragile / robust (flooding), and some are extremely robust (aggregation).

Joint work with Jeremy Fineman (Georgetown), Seth Gilbert (National University of Singapore), and Calvin Newport (Georgetown).

Monday, Jun 08, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Shai Vardi  Title: Expanding the Boundaries of Local Computation Algorithms

Given an input $x$ and a search problem $F$, local computation algorithms (LCAs) implement access to specified locations of some legal output $y \in F(x)$, using polylogarithmic time and space. Previous work on LCAs restricted its focus to graphs of bounded degree, or graphs whose degrees have bounded expectation and are binomially distributed.

Using a new palette of techniques, we show that it is possible to obtain LCAs for maximal independent set (MIS) and maximal matching (MM) on trees with degree bounded by $\polylog{n}$. Our result immediately extends to all graphs with degree bounded by $\polylog{n}$, as long as they do not contain short cycles (of length $\polylog{n}$).

We define a family of graphs called $d$-light graphs, and show how to convert a large class of online algorithms (including MIS and MM) to LCAs on these graphs. We then show that applying the MIS (or MM) LCA on appropriately selected $d$-light subgraphs, allows us to quickly address all of the vertices of the $\polylog{n}$-degree graph.

In addition to expanding the range of LCAs to graphs of polylogarithmic degree, our new techniques also significantly improve the running times and space requirements, expand the family of graphs, and better define the family of algorithms to which previous results apply. Furthermore our proofs are simpler and more concise than the previous proof methods.

Joint work with Omer Reingold.

Thursday, May 28, 2015, 13:30
Foundations of Computer Science Seminar, Room 1
Speaker: Boaz Barak  Title: Proofs for Algorithms, Algorithms from Proofs

The interplay between algorithms and proofs has been one of the most fascinating and fruitful themes in theoretical computer science. In this talk I will describe a different connection between these two concepts - a general methodology for algorithm design using an automatic transformation of a proof into an algorithm. This methodology yields a systematic way to design and analyze algorithms across many domains. In particular, we and others have used it for problems in combinatorial optimization, machine learning, and quantum information theory, and it shows promise for making progress on important open questions such as settling Khot's Unique Games Conjecture.

I will demonstrate this methodology by presenting an algorithm for the Sparse Coding / Dictionary Learning problem that handles much more general inputs than was known before, and an algorithm for the Unique Games problem that can solve all the previously known candidate hard instances. Key to our approach is the notion of "pseudo-distributions", which are objects that are emphatically different from actual distributions but behave like them in the eyes of low-degree polynomials. We use these pseudo-distributions to "lift" a proof into an algorithm via the Shor-Parrilo-Lasserre "Sum-of-Squares" semidefinite programming hierarchy.

The talk will be based on several joint works with (varying subsets of) Fernando Brandao, Aram Harrow, Jonathan Kelner, David Steurer and Yuan Zhou.

Monday, May 18, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Sigal Oren  Title: Long-Range Planning with Time-Inconsistency: A Class of Computational Problems in Behavioral Economics

In many settings, people exhibit behavior that is inconsistent across time — we allocate a block of time to get work done and then procrastinate, or put effort into a project and then later fail to complete it. An active line of research in behavioral economics and related fields has developed and analyzed models for this type of time-inconsistent behavior.

Here we propose a graph-theoretic model of tasks and goals, in which dependencies among actions are represented by a directed graph, and a time-inconsistent agent constructs a path through this graph. We first show how instances of this path-finding problem on different input graphs can reconstruct a wide range of qualitative phenomena observed in the literature on time-inconsistency, including procrastination, abandonment of long-range tasks, and the benefits of reduced sets of choices. We then explore a set of analyses that quantify over the set of all graphs; among other results, we find that in any graph, there can be only polynomially many distinct forms of time-inconsistent behavior; and any graph in which a time-inconsistent agent incurs significantly more cost than an optimal agent must contain a large “procrastination” structure as a minor. Finally, we use this graph-theoretic model to explore ways in which tasks can be designed to help motivate agents to reach designated goals.
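One concrete (toy) way to instantiate such a model is sketched below; the graph, costs, and the specific present-bias rule (inflate the immediate next edge's cost by a factor b, in the spirit of quasi-hyperbolic discounting, and re-plan at every node) are my own illustrative assumptions, not necessarily the paper's exact formulation.

```python
def dist_to_t(graph, t):
    """Cheapest remaining cost to the goal t in an acyclic task graph
    (memoized recursion). graph[u] is a list of (successor, edge_cost)."""
    memo = {t: 0}
    def d(u):
        if u not in memo:
            memo[u] = min(c + d(v) for v, c in graph[u])
        return memo[u]
    for u in graph:
        d(u)
    return memo

def biased_walk(graph, s, t, b):
    """Simulate a present-biased agent: at each node it perceives the
    next edge at b times its true cost, the rest of the plan at face
    value, takes the first step of the cheapest-looking plan, re-plans."""
    d = dist_to_t(graph, t)
    path, u = [s], s
    while u != t:
        u, _ = min(((v, b * c + d[v]) for v, c in graph[u]),
                   key=lambda p: p[1])
        path.append(u)
    return path

# Hypothetical project: do the work now ('a') or have cheap fun first
# ('b') and pay more for the deferred work later.
graph = {
    's': [('a', 2), ('b', 1)],
    'a': [('t', 2)],
    'b': [('t', 4)],
    't': [],
}
print(biased_walk(graph, 's', 't', b=1))  # → ['s', 'a', 't'] (total cost 4)
print(biased_walk(graph, 's', 't', b=3))  # → ['s', 'b', 't'] (total cost 5)
```

With bias b=3 the agent takes the path whose total cost is higher, a miniature version of the procrastination phenomena the abstract describes.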

This is joint work with Jon Kleinberg.

Monday, May 11, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Aviad Rubinstein  Title: New hardness results (for Densest k-Subgraph and Nash) from "birthday repetition"

I will describe recent applications of "birthday repetition" that can give (conditional) quasi-polynomial time hardness results for Densest k-Subgraph and for $\epsilon$-Nash in 2-player games. (For the latter result we use a non-standard "PCP for PPAD" assumption which I will also discuss.) Both results are tight by [FS97], [LMM03], [Barman15].

Based on:
(joint work with Mark Braverman, Young Kun Ko, and Omri Weinstein)

(joint work with Yakov Babichenko and Christos Papadimitriou)

Monday, May 04, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Eli Gafni  Title: 40 Years Of Distributed-Computability on One-Foot

Research can be viewed as a search for an accepting state in an exponential space. When we look in hindsight after we find an accepting state, can we identify a much shorter path than the one discovered by trial and error as the search actually proceeded?

In this talk I'll show that this is the case for distributed computability: the discovery that different distributed problems have different levels of difficulty, and the identification of the weakest model of distributed computation that allows one to solve a problem. I'll explain the essence of 40 years of research in an hour, by showing that if the right questions had been asked at the right time, all the results could have been had in a span of time an order of magnitude shorter.

Some of the major ideas in the talk were developed in works with Afek (TAU), Borowsky (Akamai), Lynch (MIT), and Rajsbaum (UNAM).

Monday, Apr 27, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Irit Dinur  Title: Old and new PCP constructions

The PCP theorem (AS,ALMSS 1991) guarantees that every NP language has a Probabilistically Checkable Proof (PCP) system allowing a verifier to check a witness very efficiently using randomness, and allowing for small error.
Most of the talk will not assume prior knowledge, but I will also devote some time to some recent work joint with Harsha and Kindler.

In this work we make (some) progress towards proving the so-called "sliding-scale conjecture". This is a conjecture of BGLR from 1993 about the tradeoff between the number of bits read from the PCP proof and the error of the verifier.
Our work revisits older constructions and analyzes them using the more modern "modular-composition" approach.

Based on joint work with Prahladh Harsha and Guy Kindler.

Monday, Apr 20, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Nir Ailon  Title: Computational Lower Bounds for the Fourier Transform
The Fourier Transform is one of the most important linear transformations in engineering and science, with applications in a wide variety of domains spanning signal processing, pattern recognition, cryptography, polynomial multiplication, complexity theory, learning theory and more. In 1965, Cooley and Tukey discovered the Fast Fourier Transform (FFT) algorithm, which computes the transformation in time O(n log n) for a signal in n dimensions. In spite of its importance, not much was known until recently about lower bounds. I will show that if you speed up the FFT on a machine of finite precision (say, 32 or 64 bits) using linear operations only (multiplications and additions), then there exist Omega(n) orthogonal directions in input space that either overflow (exceed the machine's numerical upper limit) or underflow (exceed the machine's precision) at some point during the computation. A quantitative tradeoff between the speedup factor and the resulting numerical abuse will be presented. The talk is self-contained. Some open problems will be presented at the end.
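For reference, the O(n log n) Cooley-Tukey recursion itself is short; a textbook radix-2 sketch (background for the talk, whose subject is lower bounds on speeding this up):

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of
    two. Uses O(n log n) arithmetic operations versus O(n^2) for the
    direct DFT."""
    n = len(a)
    if n == 1:
        return [complex(a[0])]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

# The DFT of a constant signal concentrates on frequency zero.
print([round(abs(v)) for v in fft([1, 1, 1, 1])])  # → [4, 0, 0, 0]
```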
Monday, Apr 13, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Elette Boyle  Title: Function Secret Sharing

Motivated by the goal of securely searching and updating distributed data, we introduce the notion of function secret sharing (FSS), a form of "additive secret sharing" for *functions* f: {0,1}^n → G, where G is an abelian group.

An m-party FSS scheme for function class F allows one to split any function f from F into m succinctly described functions f_i, such that: (1) for every input x, f(x) is equal to the sum of evaluations \sum_i f_i(x), and (2) any strict subset of "share functions" f_i hides f. FSS provides a natural generalization of distributed point functions, as introduced by (Gilboa-Ishai Eurocrypt 2014), which coincide with the special case of two parties and the class F of point functions (which evaluate to 0 at all but one point).
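Properties (1) and (2) are easy to achieve naively by secret-sharing the entire truth table; the whole difficulty of FSS is doing it with succinct keys. A toy two-party sketch for a point function (my own illustration; keys here are exponentially long, which is exactly what real FSS schemes avoid):

```python
import random

MOD = 2**32  # the abelian group G = Z_{2^32}

def share_point_function(alpha, beta, n):
    """Naively split the point function f (with f(alpha) = beta and
    f(x) = 0 elsewhere) on {0,1}^n into two truth tables that sum to f,
    each of which is individually uniform and so reveals nothing."""
    size = 2 ** n
    f1 = [random.randrange(MOD) for _ in range(size)]
    f2 = [-v % MOD for v in f1]
    f2[alpha] = (beta - f1[alpha]) % MOD
    return f1, f2

f1, f2 = share_point_function(alpha=5, beta=77, n=4)
# Property (1): the shares add up to f at every point.
print((f1[5] + f2[5]) % MOD)  # → 77
```

Each share on its own is a uniformly random table (property (2)); the gap between this exponential-size construction and poly(n)-size keys is what the distributed point function and FSS constructions in the talk close.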

We present two types of results:
- We obtain efficiency improvements and extensions of the original distributed point function construction.
- We then initiate a systematic study of general FSS, providing constructions for richer function classes, and establishing relations with other cryptographic primitives.

Joint work with Niv Gilboa and Yuval Ishai.

Monday, Mar 30, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Nir Bitansky  Title: On the Cryptographic Hardness of Finding a Nash Equilibrium
We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete.

Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.

Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.

Joint work with Omer Paneth and Alon Rosen.
Monday, Mar 23, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Gil Cohen  Title: Local Correlation Breakers and Applications to Mergers and Multi-Source Extractors

We introduce and construct a pseudo-random object which we call a local correlation breaker (LCB). This is an algorithm that gets as input a sequence of (possibly correlated) random variables and an independent weak source of randomness. The output of the LCB is a sequence of random variables with the following property: if the i'th input variable is uniform, then the i'th output variable is uniform even given a bounded number of the other output variables. That is, an LCB uses the weak source to "break" local correlations between random variables.

Based on LCBs we obtain improved constructions of mergers with weak-seeds and multi-source extractors. In particular, we construct a 3-source extractor for entropies delta*n, O(log n) and O(loglog n), for any constant delta. We further construct a 7-source extractor for poly-logarithmic entropy.

Joint work with Guy Rothblum.
No prior knowledge is assumed.

Monday, Mar 09, 2015, 14:30
Foundations of Computer Science Seminar, Room 1
Speaker: Isaac Keslassy  Title: When Bh Sequences Meet Bloom Filters, and Hot Topics in Data Centers

Bloom filters and Counting Bloom Filters (CBFs) are widely used in networking device algorithms. They implement fast set representations to support membership queries with limited error. Unlike Bloom filters, CBFs also support element deletions. In the first part of the talk, I will introduce a new general method based on variable increments to improve the efficiency of CBFs and their variants. I will demonstrate that this method can always achieve a lower false positive rate and a lower overflow probability bound than CBFs in practical systems.
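For background, a standard Counting Bloom Filter replaces each bit of a Bloom filter with a counter so that deletions become possible; the sketch below is this textbook baseline (my own code), whose fixed +1/-1 updates are what the talk's variable-increment method refines.

```python
import hashlib

class CountingBloomFilter:
    """A standard Counting Bloom Filter: k counters per item support
    add/remove; membership queries may give false positives but never
    false negatives."""
    def __init__(self, m, k):
        self.counters = [0] * m
        self.m, self.k = m, k

    def _positions(self, item):
        # k deterministic hash positions derived from SHA-256.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.counters[p] += 1

    def remove(self, item):
        for p in self._positions(item):
            self.counters[p] -= 1

    def might_contain(self, item):
        return all(self.counters[p] > 0 for p in self._positions(item))

cbf = CountingBloomFilter(m=1024, k=4)
cbf.add("10.0.0.1")
print(cbf.might_contain("10.0.0.1"))  # → True (no false negatives)
cbf.remove("10.0.0.1")
print(cbf.might_contain("10.0.0.1"))  # → False (counters back to zero)
```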

Next, I will present ongoing research on data center networking. In particular, I will introduce a new approach to providing network isolation, so customers can feel alone in shared clouds, without any network contention from other customers. I will also demonstrate theoretical conditions for the isolation algorithm.

Monday, Feb 02, 2015, 15:00
Foundations of Computer Science Seminar, Room 261
Speaker: Sangxia Huang  Title: Improved NP-inapproximability for 2-variable Linear Equations

An instance of the E2-Lin(2) problem is a system of equations of the form "x_i + x_j = b (mod 2)". Given such a system in which it is possible to satisfy all but an epsilon fraction of the equations, we would like to find an assignment that violates as few equations as possible. In this paper, we show that it is NP-hard to satisfy all but a C*epsilon fraction of the equations, for any C < 11/8 and 0 < epsilon <= 1/8. Our result holds also for the special case of Max-Cut. The previous best NP-hardness result, standing for over 15 years, had 5/4 in place of 11/8.
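The objective being approximated can be made concrete with a tiny brute-force solver (my own illustration; the hardness result is precisely about the impossibility of doing this efficiently):

```python
from itertools import product

def min_violations(num_vars, equations):
    """Brute-force an E2-Lin(2) instance: each equation (i, j, b) means
    x_i + x_j = b (mod 2). Returns the fewest violated equations over
    all 2^n assignments (exponential time, of course)."""
    best = len(equations)
    for x in product((0, 1), repeat=num_vars):
        best = min(best, sum((x[i] + x[j]) % 2 != b for i, j, b in equations))
    return best

# Max-Cut is the special case with b = 1 everywhere: on a triangle,
# at most 2 of the 3 "cut" equations can hold simultaneously.
print(min_violations(3, [(0, 1, 1), (1, 2, 1), (0, 2, 1)]))  # → 1
```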

Our proof is by a modified gadget reduction from a predicate that supports a pairwise independent distribution. We also show an inherent limitation to this type of gadget reduction. In particular, we show that no such reduction can establish a hardness factor C greater than ~2.54.

Joint work with Johan Hastad, Rajsekar Manokaran, Ryan O'Donnell, John Wright.
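The E2-Lin(2) objective is easy to state in code. The sketch below is a hypothetical helper, not part of the paper: it counts the fraction of equations x_i + x_j = b (mod 2) violated by a 0/1 assignment, and Max-Cut is the special case where every right-hand side b is 1:

```python
def violated_fraction(equations, assignment):
    # Fraction of equations x_i + x_j = b (mod 2) violated by a 0/1
    # assignment; `equations` is a list of (i, j, b) triples and
    # `assignment` maps each variable index to 0 or 1.
    bad = sum((assignment[i] + assignment[j]) % 2 != b
              for i, j, b in equations)
    return bad / len(equations)
```

For a triangle with all-ones right-hand sides (Max-Cut on an odd cycle), every assignment violates at least one of the three equations, so the optimum is 1/3; the hardness result concerns how much worse than the optimum an efficient algorithm must do.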

Monday, Jan 26, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Avinatan Hassidim
Title: Random Assignment Games
Abstract:
Assignment games represent a tractable yet versatile model of two-sided markets with transfers. We study the likely properties of the core of randomly generated assignment games. If the joint productivities of every firm and worker are i.i.d. bounded random variables, then with high probability all workers are paid roughly equal wages, and all firms make similar profits. This implies that core allocations vary significantly in balanced markets, but that there is core convergence in even slightly unbalanced markets. For the benchmark case of the uniform distribution, we provide a tight bound for the workers' share of the surplus under the firm-optimal core allocation. We present simulation results suggesting that the phenomena analyzed appear even in medium-sized markets. Finally, we briefly discuss the effects of unbounded distributions and the ways in which they may affect wage dispersion. Joint work with Assaf Romm.

Monday, Jan 05, 2015, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Dana Moshkovitz
Title: Parallel Repetition From Fortification
Abstract:
The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games -- "fortification" -- and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and (very) short. As corollaries, we obtain: (1) Starting from a PCP Theorem with soundness error bounded away from 1, we get a PCP with arbitrarily small constant soundness error. In particular, starting with the combinatorial PCP of Dinur, we get a combinatorial PCP with low error. The latter can be used for hardness of approximation as in the work of Hastad. (2) Starting from the work of the author and Raz, we get a projection PCP theorem with the smallest soundness error known today. The theorem yields nearly a quadratic improvement in the size compared to previous work.
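The quantities in the theorem can be made concrete on a toy game. The sketch below, an illustrative exercise rather than anything from the paper, brute-forces the classical value of a two-prover game over deterministic strategies and of its two-fold parallel repetition; the CHSH-style predicate and the packing of question/answer pairs into 2-bit integers are assumptions of this sketch, not part of the fortification construction:

```python
from itertools import product

def game_value(n_questions, n_answers, predicate):
    # Brute-force the classical value of a two-prover game with uniform
    # questions, maximizing over all deterministic strategies A, B.
    questions = range(n_questions)
    pairs = list(product(questions, questions))
    best = 0
    for A in product(range(n_answers), repeat=n_questions):
        for B in product(range(n_answers), repeat=n_questions):
            wins = sum(predicate(x, y, A[x], B[y]) for (x, y) in pairs)
            best = max(best, wins)
    return best / len(pairs)

def chsh(x, y, a, b):
    # CHSH-style predicate: accept iff a XOR b == x AND y.
    return (a ^ b) == (x & y)

def repeat2(pred):
    # Two-fold parallel repetition: questions and answers become pairs,
    # packed here into 2-bit integers (high bit = first coordinate).
    def pred2(x, y, a, b):
        return (pred(x >> 1, y >> 1, a >> 1, b >> 1)
                and pred(x & 1, y & 1, a & 1, b & 1))
    return pred2
```

Playing each coordinate independently shows the repeated value is at least the base value squared; the content of parallel repetition theorems is bounding how far above that product a correlated strategy can climb.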

Monday, Dec 29, 2014, 14:30
Foundations of Computer Science Seminar, Room 261
Speaker: Moran Feldman
Title: Aspects of Submodular Maximization Subject to a Matroid Constraint
Abstract:
Submodular functions form a large natural class of set functions with applications in many fields, including social networks, machine learning and game theory. Optimization of submodular functions subject to various constraints has attracted much attention in recent years, from both theoretical and practical points of view. This talk considers the problem of maximizing a submodular function subject to a matroid constraint, which is a central problem demonstrating many of the modern approaches and techniques used in the field of submodular maximization. Many aspects of this problem have been studied, including its polynomial-time approximability, fast (almost linear time) algorithms and online models. This talk surveys some of these aspects and explores a few of the main results obtained recently.
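As a concrete baseline for the problem discussed, the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint (the simplest matroid, a uniform one) can be sketched as follows. This is the standard (1 - 1/e)-approximation of Nemhauser, Wolsey and Fisher, not one of the recent results surveyed in the talk, and the coverage objective used to exercise it is an illustrative choice:

```python
def greedy_submodular(ground, f, k):
    # Classic greedy for a monotone submodular f under a cardinality-k
    # (uniform matroid) constraint; achieves a (1 - 1/e)-approximation.
    # `f` maps a frozenset of elements to a number.
    S = frozenset()
    for _ in range(k):
        candidates = ground - S
        if not candidates:
            break
        # Pick the element with the largest marginal gain.
        gains = {e: f(S | {e}) - f(S) for e in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S |= {best}
    return S
```

Coverage functions (the size of a union of sets) are a canonical monotone submodular example, so they make a natural smoke test for the greedy rule.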