Publications
2020

(2020) Switzerland. Abstract
Device-independent quantum cryptography is a method for exchanging secret messages over potentially insecure quantum communication channels, such as optical fibers. In contrast to conventional quantum cryptography, security is guaranteed even if the devices used by the communication partners, such as photon sources and detectors, deviate from their theoretical specifications. This is of high practical relevance, since attacks on current implementations of quantum cryptography exploit exactly such deviations. Device-independent cryptography is, however, technologically so demanding that experimental realizations looked out of reach. In her thesis, Rotem Arnon-Friedman presents powerful information-theoretic methods to prove the security of device-independent quantum cryptography. Based on them, she is able to establish security in a parameter regime that may be experimentally achievable in the near future. Rotem Arnon-Friedman's thesis thus provides the theoretical foundations for an experimental demonstration of device-independent quantum cryptography.

Device-independent quantum key distribution from computational assumptions (2020) arXiv. Abstract
In device-independent quantum key distribution (DIQKD), an adversary prepares a device consisting of two components, distributed to Alice and Bob, who use the device to generate a secure key. The security of existing DIQKD schemes holds under the assumption that the two components of the device cannot communicate with one another during the protocol execution. In practice, however, entanglement is distributed "on the fly" via a quantum channel connecting the two components, and Alice and Bob are often not far enough from each other to enforce non-communication by the laws of special relativity. In this case, the locality loophole is open and security is not guaranteed. Here, we introduce a new setting and protocol for DIQKD in which the locality loophole plays no role: under a standard computational assumption from post-quantum cryptography, keys generated by our protocol are secure even when the components of an adversarial device can exchange arbitrary quantum communication. Importantly, the computational assumption only needs to hold during the protocol execution; the keys generated at the end of the protocol are information-theoretically secure. Thus, our protocol generates keys with the same level of security as standard DIQKD protocols without requiring the experimental ability to close the locality loophole.

(2020) IEEE Journal on Selected Areas in Information Theory. Abstract
Secret and perfect randomness is an essential resource in cryptography. Yet, it is not even clear that such randomness exists. It is well known that the tools of classical computer science do not allow us to create secret and perfect randomness from a single weak public source. Quantum physics, on the other hand, allows for such a process, even in the most paranoid cryptographic sense, termed "device-independent quantum cryptography". We propose and prove the security of a new device-independent protocol that takes any single public Santha-Vazirani source as input and creates a secret, close-to-uniform string in the presence of a quantum adversary. Our work is the first to achieve randomness amplification with all of the following properties: (1) amplification and "privatization" of a public Santha-Vazirani source with arbitrary bias; (2) the use of a device with only two components; (3) non-vanishing extraction rate; and (4) maximal noise tolerance. In particular, this implies that our protocol is the first that can possibly be implemented with reachable parameters. We achieve these by combining three new tools: a particular family of Bell inequalities, a proof technique to lower bound entropy in the device-independent setting, and a framework for quantum-proof multi-source extractors.
Won the Best Student Paper Award, QCrypt 2017
Related talks: QCrypt; TCS Seminar Princeton 
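The Santha-Vazirani sources mentioned above have a simple operational definition: each output bit, conditioned on all previous bits, equals 1 with probability in [1/2 − ε, 1/2 + ε]. A minimal sketch of such a source (the `adversary_bias` hook is a hypothetical illustration of adversarial, history-dependent control, not part of the paper):

```python
import random

def sv_source(n, eps, adversary_bias=None):
    """Sample n bits from a Santha-Vazirani source with bias eps.

    Each bit is 1 with probability in [1/2 - eps, 1/2 + eps],
    conditioned on the bits produced so far.  `adversary_bias` maps
    the prefix to a bias in that interval (hypothetical hook used to
    model an adversarially chosen, history-dependent bias).
    """
    bits = []
    for _ in range(n):
        p = 0.5 if adversary_bias is None else adversary_bias(tuple(bits))
        assert abs(p - 0.5) <= eps, "bias outside the SV interval"
        bits.append(1 if random.random() < p else 0)
    return bits
```

Such a source is weak and public; the protocol's task is to turn its output into a secret, near-uniform string.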
Upper bounds on device-independent quantum key distribution rates and a revised Peres conjecture (2020) arXiv. Abstract
Device-independent quantum key distribution (DIQKD) is one of the most challenging tasks in quantum cryptography. The protocols and their security are based on the existence of Bell inequalities and the ability to violate them by measuring entangled states. We study the entanglement needed for DIQKD protocols in two different ways. Our first contribution is the derivation of upper bounds on the key rates of CHSH-based DIQKD protocols in terms of the violation of the inequality; this sets an upper limit on the possible DI key extraction rate from states with a given violation. Our upper bound improves on the previously known bound of Kaur et al. Our second contribution is the initiation of the study of the role of bound entangled states in DIQKD. We present a revised Peres conjecture stating that such states cannot be used as a resource for DIQKD. We give a first piece of evidence for the conjecture by showing that the bound entangled state found by Vertesi and Brunner, even though it can certify DI randomness, cannot be used to produce a key using protocols analogous to the well-studied CHSH-based DIQKD protocol.
Related talks: Beyond IID 2020; Beyond IID 2020 Open Questions (given by Felix Leditzky)
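The CHSH violation that drives these rate bounds is computed from four measured correlators. A small sketch, with illustrative numbers for the ideal quantum strategy rather than values from the paper:

```python
import math

def chsh_value(E):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1).

    E[x][y] is the expected product of Alice's and Bob's +/-1 outcomes
    for measurement settings x and y.  Classical (local) devices obey
    |S| <= 2; quantum devices can reach 2*sqrt(2) (Tsirelson's bound).
    """
    return E[0][0] + E[0][1] + E[1][0] - E[1][1]

# Correlators of the optimal quantum strategy on a maximally entangled state:
c = 1 / math.sqrt(2)
S = chsh_value([[c, c], [c, -c]])  # 2*sqrt(2), above the classical bound 2
```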
2019

(2019) New Journal of Physics. 21, 033010. Abstract
Entanglement sources that produce many entangled states act as a main component in applications exploiting quantum physics, such as quantum communication and cryptography. Realistic sources are inherently noisy, cannot run for an infinitely long time, and do not necessarily behave in an independent and identically distributed manner. An important question then arises: how can one test, or certify, that a realistic source produces high amounts of entanglement? Crucially, a meaningful and operational solution should allow us to certify the entanglement which is available for further applications after performing the test itself (in contrast to assuming the availability of an additional source which can produce more entangled states, identical to those which were tested). To answer the above question and lower bound the amount of entanglement produced by an uncharacterised source, we present a protocol that can be run by interacting classically with uncharacterised (but not entangled to one another) measurement devices used to measure the states produced by the source. A successful run of the protocol implies that the remaining quantum state has high amounts of one-shot distillable entanglement. That is, one can distill many maximally entangled states out of the single remaining state. Importantly, our protocol can tolerate noise and, thus, certify entanglement produced by realistic sources. With the above properties, the protocol acts as the first 'operational device-independent entanglement certification protocol' and allows one to test and benchmark uncharacterised entanglement sources which may be otherwise incomparable.

(2019) SIAM Journal on Computing. 48, 1, p. 181–225. Abstract
Device-independent security is the gold standard for quantum cryptography: not only is security based entirely on the laws of quantum mechanics, but it holds irrespective of any a priori assumptions on the quantum devices used in a protocol, making it particularly applicable in a quantum-wary environment. While the existence of device-independent protocols for tasks such as randomness expansion and quantum key distribution has recently been established, the underlying proofs of security remain very challenging, yield rather poor key rates, and demand very high quality quantum devices, thus making them all but impossible to implement in practice. We introduce a technique for the analysis of device-independent cryptographic protocols. We provide a flexible protocol and give a security proof that provides quantitative bounds that are asymptotically tight, even in the presence of general quantum adversaries. At a high level, our approach amounts to establishing a reduction to the scenario in which the untrusted device operates in an identical and independent way in each round of the protocol. This is achieved by leveraging the sequential nature of the protocol and makes use of a newly developed tool, the "entropy accumulation theorem" of Dupuis, Fawzi, and Renner [Entropy Accumulation, preprint, 2016]. As concrete applications we give simple and modular security proofs for device-independent quantum key distribution and randomness expansion protocols based on the CHSH inequality. For both tasks, we establish essentially optimal asymptotic key rates and noise tolerance. In view of recent experimental progress, which has culminated in loophole-free Bell tests, it is likely that these protocols can be practically implemented in the near future.
2018

Reductions to IID in Device-independent Quantum Information Processing (2018) arXiv. Abstract
The field of device-independent (DI) quantum information processing concerns itself with devising and analysing protocols, such as quantum key distribution, without referring to the quality of the physical devices utilised to execute the protocols. Instead, the analysis is based on the observed correlations that arise during a repeated interaction with the devices and, in particular, their ability to violate the so-called Bell inequalities. Since the analysis of DI protocols holds irrespective of the underlying physical device, it implies that any device can be used to execute the protocols: if the apparatus is of poor quality, the users of the protocol will detect it and abort; otherwise, they will accomplish their goal. This strong statement comes at a price: the analysis of DI protocols is, a priori, extremely challenging. The thesis presents an approach that can be taken to simplify the analysis of DI information processing protocols. The idea is the following: instead of analysing the most general device leading to the observed correlations, one should first analyse a significantly simpler device that, in each interaction with the user, behaves in an identical way, independently of all other interactions. We call such a device an independently and identically distributed (IID) device. As the next step, special techniques are used to prove that, without loss of generality, the analysis of the IID device implies similar results for the most general device. These are termed reductions to IID. We present two mathematical techniques that can be used as reductions to IID in the DI setting, each accompanied by a showcase application that exemplifies the reduction's usage and benefits. Performing the analysis via a reduction to IID leads to simpler proofs and significant quantitative improvements, matching the tight results proven when analysing IID devices.
Won the ETH Medal Award for outstanding doctoral thesis, 2019

(2018) 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018. Kaklamanis C., Marx D., Sannella D. & Chatzigiannakis I. (eds.). Abstract
In this work we construct tests that allow a classical user to certify high dimensional entanglement in uncharacterized and possibly noisy quantum devices. We present a family of nonlocal games (G_n) that for all n certify states with entanglement of formation Ω(n). These tests can be derived from any bipartite nonlocal game with a classical-quantum gap. Furthermore, our tests are noise-tolerant in the sense that fault tolerant technologies are not needed to play the games; entanglement distributed over noisy channels can pass with high probability, making our tests relevant for realistic experimental settings. This is in contrast to, e.g., results on self-testing of high dimensional entanglement, which are only relevant when the noise rate goes to zero with the system's size n. As a corollary of our result, we supply a lower bound on the entanglement cost of any state achieving a quantum advantage in a bipartite nonlocal game. Our proof techniques heavily rely on ideas from the work on classical and quantum parallel repetition theorems.

(2018) Nature Communications. 9, 459. Abstract
Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of nonlocality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed "entropy accumulation", which asserts that the total amount of entropy of a large system is the sum of the entropies of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.
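For the CHSH-based protocol, the per-round entropy bound reported in this line of work is η(ω) = 1 − h(1/2 + 1/2·√(16ω(ω − 1) + 3)) for CHSH winning probability ω ∈ [3/4, (2 + √2)/4], where h is the binary entropy. The sketch below multiplies this single-round rate by the number of rounds, which is what entropy accumulation licenses asymptotically (finite-size corrections of order √n are ignored here):

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) in bits, clamped for numerical safety."""
    p = min(max(p, 0.0), 1.0)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def accumulated_entropy(omega, n_rounds):
    """Asymptotic smooth min-entropy of n rounds at CHSH winning
    probability omega: n times the single-round rate eta(omega).
    """
    arg = max(16 * omega * (omega - 1) + 3, 0.0)  # clamp rounding error
    eta = 1 - binary_entropy(0.5 + 0.5 * math.sqrt(arg))
    return n_rounds * eta
```

At ω = 3/4 (the classical threshold) the rate is zero; at ω = (2 + √2)/4 (Tsirelson's bound) it reaches one bit per round.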
2016

(2016) 11th Conference on the Theory of Quantum Computation, Communication and Cryptography, TQC 2016. Broadbent A. (ed.). Abstract
Randomness extractors, widely used in classical and quantum cryptography and in other fields of computer science, e.g., derandomization, are functions which generate almost uniform randomness from weak sources of randomness. In the quantum setting one must take into account the quantum side information held by an adversary which might be used to break the security of the extractor. In the case of seeded extractors the presence of quantum side information has been extensively studied. For multi-source extractors one can easily see that high conditional min-entropy is not sufficient to guarantee security against arbitrary side information, even in the classical case. Hence, the interesting question is under which models of (both quantum and classical) side information multi-source extractors remain secure. In this work we suggest a natural model of side information, which we call the Markov model, and prove that any multi-source extractor remains secure in the presence of quantum side information of this type (albeit with weaker parameters). This improves on previous results in which more restricted models were considered or the security of only some types of extractors was shown.
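A standard example of a multi-source extractor to which such security statements apply is the inner-product extractor over GF(2) (a textbook construction, used here only to make the object concrete):

```python
def inner_product_extractor(x, y):
    """GF(2) inner product of two equal-length bit strings.

    A classical two-source extractor: if x and y come from independent
    sources with enough min-entropy, the output bit is close to
    uniform.  The Markov model extends such guarantees to quantum
    side information satisfying a Markov condition.
    """
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2
```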

(2016) IEEE Transactions on Information Theory. 62, 3, p. 1440–1457. Abstract
In the context of multiplayer games, the parallel repetition problem can be phrased as follows: given a game G with optimal winning probability 1 − α and its repeated version G^n (in which n games are played together, in parallel), can the players use strategies that are substantially better than ones in which each game is played independently? This question is relevant in physics for the study of correlations and plays an important role in computer science in the context of complexity and cryptography. In this paper, the case of multiplayer nonsignaling games is considered, i.e., the only restriction on the players is that they are not allowed to communicate during the game. For complete-support games (games where all possible combinations of questions have nonzero probability to be asked) with any number of players, we prove a threshold theorem stating that the probability that nonsignaling players win more than a fraction 1 − α + β of the n games is exponentially small in nβ², for every 0 < β ≤ α.
Related talks: Trustworthy quantum information
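For players who treat the n repetitions independently, the probability of winning more than a (1 − α + β)-fraction is bounded by a standard Hoeffding tail; the threshold theorem above shows that nonsignaling players obey a bound of the same exponential shape in nβ². A sketch of the independent-play benchmark (the generic Hoeffding bound, not the theorem's own constant):

```python
import math

def independent_play_tail(n, alpha, beta):
    """Hoeffding bound on the probability that n independent plays of
    a game won with probability 1 - alpha produce more than
    (1 - alpha + beta) * n wins: exp(-2 * n * beta**2).
    """
    return math.exp(-2 * n * beta ** 2)
```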
2015

(2015) Journal of Mathematical Physics. 56, 5, 052203. Abstract
When analysing quantum information processing protocols, one has to deal with large entangled systems, each consisting of many subsystems. To make this analysis feasible, it is often necessary to identify some additional structure. De Finetti theorems provide such a structure for the case where certain symmetries hold. More precisely, they relate states that are invariant under permutations of subsystems to states in which the subsystems are independent of each other. This relation plays an important role in various areas, e.g., in quantum cryptography or state tomography, where permutation invariant systems are ubiquitous. The known de Finetti theorems usually refer to the internal quantum state of a system and depend on its dimension. Here, we prove a different de Finetti theorem where systems are modelled in terms of their statistics under measurements. This is necessary for a large class of applications widely considered today, such as device-independent protocols, where the underlying systems and the dimensions are unknown and the entire analysis is based on the observed correlations.
Related talks: Trustworthy quantum information
2012

(2012) Physical Review A. 86, 6, 062333. Abstract
The task of privacy amplification, in which Alice holds some partially secret information with respect to an adversary Eve and wishes to distill it until it is completely secret, is known to be solvable almost optimally in both the classical and quantum worlds. Unfortunately, when considering an adversary who is limited only by non-signaling constraints, such a statement cannot be made in general. We here consider systems which violate the chained Bell inequality and prove that, under the natural assumption of a time-ordered non-signaling system, which allows past subsystems to signal to future subsystems (using the device's memory, for example), super-polynomial privacy amplification by any hashing is impossible. This is of great relevance when considering practical device-independent key-distribution protocols which assume a super-quantum adversary. DOI: 10.1103/PhysRevA.86.062333
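In the classical and quantum settings, the hashing referred to above is typically two-universal hashing, e.g. multiplication by a random Toeplitz matrix. A minimal sketch of that hashing step (the impossibility result concerns the non-signaling setting, where no choice from such a hash family helps):

```python
def toeplitz_hash(bits, first_col, first_row):
    """Compress an n-bit string to m bits with a Toeplitz matrix.

    The matrix is determined by its first column (length m) and first
    row (length n); in privacy amplification these entries are chosen
    uniformly at random, giving a two-universal hash family.
    """
    m, n = len(first_col), len(first_row)
    assert len(bits) == n
    out = []
    for i in range(m):
        # Entry (i, j) of a Toeplitz matrix depends only on i - j.
        row = [first_col[i - j] if i >= j else first_row[j - i]
               for j in range(n)]
        out.append(sum(r & b for r, b in zip(row, bits)) % 2)
    return out
```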

Towards the Impossibility of Non-Signalling Privacy Amplification from Time-Like Ordering Constraints (2012) arXiv. Abstract
In the past few years there has been growing interest in proving the security of cryptographic protocols, such as key distribution protocols, from the sole assumption that the systems of Alice and Bob cannot signal to each other. This can be achieved by making sure that Alice and Bob perform their measurements in a space-like separated way (so that signalling is impossible according to the non-signalling postulate of relativity theory) or even by shielding their apparatus. Unfortunately, it was proven in [E. Haenggi, R. Renner, and S. Wolf, The impossibility of non-signaling privacy amplification] that, no matter what hash function we use, privacy amplification is impossible if we only impose non-signalling conditions between Alice and Bob and not within their systems. In this letter we reduce the gap between the assumptions of Haenggi et al. and the physically relevant assumptions, from an experimental point of view, which say that the systems can only signal forward in time within the systems of Alice and Bob. We consider a set of assumptions which is very close to the conditions above and prove that the impossibility result of Haenggi et al. still holds.