(2021) IEEE Transactions on Information Theory. 67, 10, p. 6606-6618. Abstract
Device-independent quantum key distribution (DIQKD) is one of the most challenging tasks in quantum cryptography. The protocols and their security are based on the existence of Bell inequalities and the ability to violate them by measuring entangled states. We study the entanglement needed for DIQKD protocols in two different ways. Our first contribution is the derivation of upper bounds on the key rates of CHSH-based DIQKD protocols in terms of the violation of the inequality; this sets an upper limit on the possible DI key extraction rate from states with a given violation. Our upper bound improves on the previously known bound of Kaur et al. Our second contribution is the initiation of the study of the role of bound entangled states in DIQKD. We present a revised Peres conjecture stating that such states cannot be used as a resource for DIQKD. We give a first piece of evidence for the conjecture by showing that the bound entangled state found by Vertesi and Brunner, even though it can certify DI randomness, cannot be used to produce a key using protocols analogous to the well-studied CHSH-based DIQKD protocol.
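As an illustrative sketch (a standard textbook computation, not taken from the paper), the CHSH value underlying these protocols can be evaluated directly for the maximally entangled two-qubit state with the usual optimal measurement settings, reaching Tsirelson's bound 2√2 and thus the maximal violation of the classical bound 2:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)  # |Phi+>

A = [Z, X]                                        # Alice's settings
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's settings

def corr(a, b):
    """Expectation value <A (x) B> in the state |Phi+>."""
    return phi @ np.kron(a, b) @ phi

S = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
print(S)  # 2.828... = 2*sqrt(2), above the classical bound 2
```

Any value strictly above 2 certifies entanglement device-independently; the upper bounds derived in the paper constrain how much key can possibly be extracted as a function of this violation.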
(2021) New Journal of Physics. Abstract
In device-independent quantum key distribution (DIQKD), an adversary prepares a device consisting of two components, distributed to Alice and Bob, who use the device to generate a secure key. The security of existing DIQKD schemes holds under the assumption that the two components of the device cannot communicate with one another during the protocol execution. This is called the locality assumption in DIQKD. Here, we show how to replace this assumption, which can be hard to enforce in practice, by a standard computational assumption from post-quantum cryptography: we give a protocol that produces secure keys even when the components of an adversarial device can exchange arbitrary quantum communication, assuming the device is computationally bounded. Importantly, the computational assumption only needs to hold during the protocol execution---the keys generated at the end of the protocol are information-theoretically secure as in standard DIQKD protocols.
(2020) Switzerland. Abstract
Device-independent quantum cryptography is a method for exchanging secret messages over potentially insecure quantum communication channels, such as optical fibers. In contrast to conventional quantum cryptography, security is guaranteed even if the devices used by the communication partners, such as photon sources and detectors, deviate from their theoretical specifications. This is of high practical relevance, since attacks on current implementations of quantum cryptography exploit exactly such deviations. Device-independent cryptography is, however, technologically so demanding that experimental realizations seemed out of reach. In her thesis, Rotem Arnon-Friedman presents powerful information-theoretic methods to prove the security of device-independent quantum cryptography. Based on them, she is able to establish security in a parameter regime that may be experimentally achievable in the near future. Rotem Arnon-Friedman's thesis thus provides the theoretical foundations for an experimental demonstration of device-independent quantum cryptography.
(2020) IEEE Journal on Selected Areas in Information Theory. 1, 2, Abstract
Secret and perfect randomness is an essential resource in cryptography. Yet, it is not even clear that such randomness exists. It is well known that the tools of classical computer science do not allow us to create secret and perfect randomness from a single weak public source. Quantum physics, on the other hand, allows for such a process, even in the most paranoid cryptographic sense, termed “device-independent quantum cryptography”. We propose and prove the security of a new device-independent protocol that takes any single public Santha-Vazirani source as input and creates a secret, close-to-uniform string in the presence of a quantum adversary. Our work is the first to achieve randomness amplification with all of the following properties: (1) amplification and “privatization” of a public Santha-Vazirani source with arbitrary bias; (2) the use of a device with only two components; (3) a non-vanishing extraction rate; and (4) maximal noise tolerance. In particular, this implies that our protocol is the first that can plausibly be implemented with reachable parameters. We achieve these by combining three new tools: a particular family of Bell inequalities, a proof technique to lower bound entropy in the device-independent setting, and a framework for quantum-proof multi-source extractors.
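For illustration (a toy model, not the paper's protocol), an ε-Santha-Vazirani source emits bits whose conditional bias never exceeds ε; the only randomness it guarantees is a min-entropy rate of −log2(1/2 + ε) bits per bit, which is what any amplification protocol has to work with:

```python
import math
import random

def sv_source(n, eps, seed=0):
    """Toy epsilon-Santha-Vazirani source: each bit is 1 with a
    probability in [1/2 - eps, 1/2 + eps] that may depend on the
    history (here the 'adversary' biases toward the running parity)."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        p = 0.5 + eps * (1 if sum(bits) % 2 == 0 else -1)
        bits.append(1 if rng.random() < p else 0)
    return bits

eps = 0.3
h_min_rate = -math.log2(0.5 + eps)  # guaranteed min-entropy per bit
print(h_min_rate)  # ~0.322 bits per bit for eps = 0.3
```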
(2019) New Journal of Physics. 21, 033010. Abstract
Entanglement sources that produce many entangled states act as a main component in applications exploiting quantum physics such as quantum communication and cryptography. Realistic sources are inherently noisy, cannot run for an infinitely long time, and do not necessarily behave in an independent and identically distributed manner. An important question then arises: how can one test, or certify, that a realistic source produces high amounts of entanglement? Crucially, a meaningful and operational solution should allow us to certify the entanglement which is available for further applications after performing the test itself (in contrast to assuming the availability of an additional source which can produce more entangled states, identical to those which were tested). To answer the above question and lower bound the amount of entanglement produced by an uncharacterised source, we present a protocol that can be run by interacting classically with uncharacterised (but not entangled with one another) measurement devices used to measure the states produced by the source. A successful run of the protocol implies that the remaining quantum state has high amounts of one-shot distillable entanglement. That is, one can distill many maximally entangled states out of the single remaining state. Importantly, our protocol can tolerate noise and, thus, certify entanglement produced by realistic sources. With the above properties, the protocol acts as the first 'operational device-independent entanglement certification protocol' and allows one to test and benchmark uncharacterised entanglement sources which may be otherwise incomparable.
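As a hedged aside (a standard textbook computation, not from the paper), the unit being certified here, one maximally entangled qubit pair, carries exactly one ebit, as the entropy of its reduced state shows:

```python
import numpy as np

phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)  # |Phi+>
rho = np.outer(phi, phi)

# Partial trace over Bob's qubit gives Alice's reduced state
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

evals = np.linalg.eigvalsh(rho_A)
S_ent = -sum(p * np.log2(p) for p in evals if p > 1e-12)
print(S_ent)  # 1.0: one ebit
```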
(2019) SIAM Journal on Computing. 48, 1, p. 181-225 Abstract
Device-independent security is the gold standard for quantum cryptography: not only is security based entirely on the laws of quantum mechanics, but it holds irrespective of any a priori assumptions on the quantum devices used in a protocol, making it particularly applicable in a quantum-wary environment. While the existence of device-independent protocols for tasks such as randomness expansion and quantum key distribution has recently been established, the underlying proofs of security remain very challenging, yield rather poor key rates, and demand very high quality quantum devices, thus making them all but impossible to implement in practice. We introduce a technique for the analysis of device-independent cryptographic protocols. We provide a flexible protocol and give a security proof that provides quantitative bounds that are asymptotically tight, even in the presence of general quantum adversaries. At a high level our approach amounts to establishing a reduction to the scenario in which the untrusted device operates in an identical and independent way in each round of the protocol. This is achieved by leveraging the sequential nature of the protocol and makes use of a newly developed tool, the "entropy accumulation theorem" of Dupuis, Fawzi, and Renner [Entropy Accumulation, preprint, 2016]. As concrete applications we give simple and modular security proofs for device-independent quantum key distribution and randomness expansion protocols based on the CHSH inequality. For both tasks, we establish essentially optimal asymptotic key rates and noise tolerance. In view of recent experimental progress, which has culminated in loophole-free Bell tests, it is likely that these protocols can be practically implemented in the near future.
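A sketch of the asymptotic key-rate formula appearing in such CHSH-based security proofs (the one-sided entropy bound of Pironio et al. combined with a Devetak-Winter-style rate; the depolarizing-noise relation S = 2√2(1 − 2Q) is an assumption chosen purely for illustration):

```python
import math

def h(p):
    """Binary entropy, clamped for float safety."""
    p = min(max(p, 0.0), 1.0)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def diqkd_rate(S, Q):
    """Asymptotic CHSH-based DIQKD key rate for CHSH value S
    and quantum bit error rate Q."""
    return 1 - h((1 + math.sqrt((S / 2) ** 2 - 1)) / 2) - h(Q)

# Noiseless device at Tsirelson's bound: rate 1
print(diqkd_rate(2 * math.sqrt(2), 0.0))

# Illustrative depolarizing-noise model: S = 2*sqrt(2)*(1 - 2Q);
# the rate hits zero near the known ~7.1% noise threshold.
for Q in (0.02, 0.05, 0.07):
    print(Q, diqkd_rate(2 * math.sqrt(2) * (1 - 2 * Q), Q))
```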
Reductions to IID in Device-independent Quantum Information Processing (2018) arXiv. Abstract
The field of device-independent (DI) quantum information processing concerns itself with devising and analysing protocols, such as quantum key distribution, without referring to the quality of the physical devices utilised to execute the protocols. Instead, the analysis is based on the observed correlations that arise during a repeated interaction with the devices and, in particular, their ability to violate the so-called Bell inequalities. Since the analysis of DI protocols holds irrespective of the underlying physical device, it implies that any device can be used to execute the protocols: if the apparatus is of poor quality, the users of the protocol will detect it and abort; otherwise, they will accomplish their goal. This strong statement comes at a price: the analysis of DI protocols is, a priori, extremely challenging. The thesis presents an approach that can be taken to simplify the analysis of DI information processing protocols. The idea is the following: instead of analysing the most general device leading to the observed correlations, one should first analyse a significantly simpler device that, in each interaction with the user, behaves in an identical way, independently of all other interactions. We call such a device an independently and identically distributed (IID) device. As the next step, special techniques are used to prove that, without loss of generality, the analysis of the IID device implies similar results for the most general device. These are termed reductions to IID. We present two mathematical techniques that can be used as reductions to IID in the DI setting, each accompanied by a showcase application that exemplifies the reduction's usage and benefits. Performing the analysis via a reduction to IID leads to simpler proofs and significant quantitative improvements, matching the tight results proven when analysing IID devices.
Won the ETH Medal Award for outstanding doctoral thesis, 2019
(2018) 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018. Kaklamanis C., Marx D., Sannella D. & Chatzigiannakis I. (eds.). Abstract
In this work we construct tests that allow a classical user to certify high dimensional entanglement in uncharacterized and possibly noisy quantum devices. We present a family of non-local games (G_n) that, for all n, certify states with entanglement of formation Ω(n). These tests can be derived from any bipartite non-local game with a classical-quantum gap. Furthermore, our tests are noise-tolerant in the sense that fault tolerant technologies are not needed to play the games; entanglement distributed over noisy channels can pass with high probability, making our tests relevant for realistic experimental settings. This is in contrast to, e.g., results on self-testing of high dimensional entanglement, which are only relevant when the noise rate goes to zero with the system's size n. As a corollary of our result, we supply a lower bound on the entanglement cost of any state achieving a quantum advantage in a bipartite non-local game. Our proof techniques heavily rely on ideas from the work on classical and quantum parallel repetition theorems.
(2018) Nature Communications. 9, 459. Abstract
Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of non-locality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed "entropy accumulation", which asserts that the total amount of entropy of a large system is the sum of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.
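Schematically (the constant below is illustrative, not the theorem's exact statement), entropy accumulation gives a lower bound of the form n·h minus a sqrt(n) correction, so the certified per-round rate approaches the single-round entropy h for large n:

```python
import math

def accumulated_entropy(n, h_rate, c=10.0):
    """Entropy-accumulation-style lower bound (schematic): total
    smooth min-entropy of n rounds is at least n * h_rate minus a
    sqrt(n) correction; c is an illustrative constant."""
    return n * h_rate - c * math.sqrt(n)

for n in (10**3, 10**5, 10**7):
    print(n, accumulated_entropy(n, 0.5) / n)  # per-round rate -> 0.5
```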
(2016) 11th Conference on the Theory of Quantum Computation, Communication and Cryptography, TQC 2016. Broadbent A. (ed.). Abstract
Randomness extractors, widely used in classical and quantum cryptography and other fields of computer science, e.g., derandomization, are functions which generate almost uniform randomness from weak sources of randomness. In the quantum setting one must take into account the quantum side information held by an adversary which might be used to break the security of the extractor. In the case of seeded extractors the presence of quantum side information has been extensively studied. For multi-source extractors one can easily see that high conditional min-entropy is not sufficient to guarantee security against arbitrary side information, even in the classical case. Hence, the interesting question is under which models of (both quantum and classical) side information multi-source extractors remain secure. In this work we suggest a natural model of side information, which we call the Markov model, and prove that any multi-source extractor remains secure in the presence of quantum side information of this type (albeit with weaker parameters). This improves on previous results in which more restricted models were considered or the security of only some types of extractors was shown.
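A minimal concrete example of a multi-source extractor (the classical inner-product two-source extractor of Chor and Goldreich, shown here as background; the paper's contribution is that such constructions remain secure under Markov-model quantum side information):

```python
from itertools import product

def inner_product_extractor(x, y):
    """One output bit: inner product mod 2 of two independent
    weak-source bit strings (Chor-Goldreich two-source extractor)."""
    return sum(a & b for a, b in zip(x, y)) % 2

# Exhaustive check for 3-bit sources: the output is near-balanced
# over all input pairs (28 of the 64 pairs map to 1).
ones = sum(inner_product_extractor(x, y)
           for x in product([0, 1], repeat=3)
           for y in product([0, 1], repeat=3))
print(ones)  # 28
```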
(2016) IEEE Transactions on Information Theory. 62, 3, p. 1440-1457. Abstract
In the context of multiplayer games, the parallel repetition problem can be phrased as follows: given a game G with optimal winning probability 1 − α and its repeated version Gⁿ (in which n games are played together, in parallel), can the players use strategies that are substantially better than ones in which each game is played independently? This question is relevant in physics for the study of correlations and plays an important role in computer science in the context of complexity and cryptography. In this paper, the case of multiplayer non-signaling games is considered, i.e., the only restriction on the players is that they are not allowed to communicate during the game. For complete-support games (games where all possible combinations of questions have non-zero probability to be asked) with any number of players, we prove a threshold theorem stating that the probability that non-signaling players win more than a fraction 1 − α + β of the n games is exponentially small in nβ², for every 0 ≤ β ≤ α. For games with incomplete support we derive a similar statement for a slightly modified form of repetition. The result is proved using a new technique, based on a recent de Finetti theorem, which allows us to avoid central technical difficulties that arise in standard proofs of parallel repetition theorems.
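For comparison (a standard Hoeffding bound, not the paper's proof), players who simply play each game independently already obey a tail bound of the same exponential shape exp(−2nβ²):

```python
import math

def hoeffding_tail(n, p, t):
    """Hoeffding upper bound on P[wins >= t*n] when each of n
    independent games is won with probability p (for t > p)."""
    return math.exp(-2 * n * (t - p) ** 2)

alpha, beta, n = 0.25, 0.05, 1000
# Independent players win each game w.p. 1 - alpha; the chance of
# exceeding the fraction 1 - alpha + beta decays like exp(-2 n beta^2).
p_bound = hoeffding_tail(n, 1 - alpha, 1 - alpha + beta)
print(p_bound)  # exp(-5), about 0.0067
```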
(2015) Journal of Mathematical Physics. 56, 5, 052203. Abstract
When analysing quantum information processing protocols, one has to deal with large entangled systems, each consisting of many subsystems. To make this analysis feasible, it is often necessary to identify some additional structures. de Finetti theorems provide such a structure for the case where certain symmetries hold. More precisely, they relate states that are invariant under permutations of subsystems to states in which the subsystems are independent of each other. This relation plays an important role in various areas, e.g., in quantum cryptography or state tomography, where permutation invariant systems are ubiquitous. The known de Finetti theorems usually refer to the internal quantum state of a system and depend on its dimension. Here, we prove a different de Finetti theorem where systems are modelled in terms of their statistics under measurements. This is necessary for a large class of applications widely considered today, such as device independent protocols, where the underlying systems and the dimensions are unknown and the entire analysis is based on the observed correlations.
(2012) Physical Review A. 86, 6, 062333. Abstract
The task of privacy amplification, in which Alice holds some partially secret information with respect to an adversary Eve and wishes to distill it until it is completely secret, is known to be solvable almost optimally in both the classical and quantum worlds. Unfortunately, when considering an adversary who is limited only by nonsignaling constraints, such a statement cannot be made in general. Here we consider systems which violate the chained Bell inequality and prove that, under the natural assumptions of a time-ordered nonsignaling system, which allow past subsystems to signal to future subsystems (using the device's memory, for example), superpolynomial privacy amplification by any hashing is impossible. This is of great relevance when considering practical device-independent key-distribution protocols which assume a superquantum adversary. DOI: 10.1103/PhysRevA.86.062333
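For context, the hashing the impossibility result refers to looks as follows in the standard setting: privacy amplification by a two-universal Toeplitz hash (a textbook construction, shown as background rather than anything from the paper):

```python
def toeplitz_hash(bits, seed, m):
    """Compress n partially secret bits to m near-uniform bits using a
    Toeplitz matrix defined by a seed of n + m - 1 random bits
    (a two-universal hash family)."""
    n = len(bits)
    assert len(seed) == n + m - 1
    # Entry (i, j) of the Toeplitz matrix is seed[i - j + n - 1]
    return [sum(seed[i - j + n - 1] & bits[j] for j in range(n)) % 2
            for i in range(m)]

print(toeplitz_hash([1, 0, 1], [1, 0, 1, 1], 2))  # [0, 1]
```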
Towards the Impossibility of Non-Signalling Privacy Amplification from Time-Like Ordering Constraints (2012) arXiv. Abstract
In the past few years there has been growing interest in proving the security of cryptographic protocols, such as key distribution protocols, from the sole assumption that the systems of Alice and Bob cannot signal to each other. This can be achieved by making sure that Alice and Bob perform their measurements in a space-like separated way (so that signalling is impossible according to the non-signalling postulate of relativity theory) or even by shielding their apparatus. Unfortunately, it was proven in [E. Haenggi, R. Renner, and S. Wolf. The impossibility of non-signaling privacy amplification] that, no matter what hash function we use, privacy amplification is impossible if we only impose non-signalling conditions between Alice and Bob and not within their systems. In this letter we reduce the gap between the assumptions of Haenggi et al. and the physically relevant assumptions from an experimental point of view, namely that the systems can only signal forward in time within the systems of Alice and Bob. We consider a set of assumptions which is very close to the conditions above and prove that the impossibility result of Haenggi et al. still holds.