All talks will be in Lazaridis QNC 0101 and everyone is welcome to attend.
Monday, October 21, 2019
Tuesday, October 22, 2019
Wednesday, October 23, 2019
Thursday, October 24, 2019
Abstracts
Marek Gluza, Freie Universität Berlin
Quantum quantum simulators
State-of-the-art quantum simulators make it possible to explore static and dynamical properties, which opens a new pathway for studying otherwise intractable quantum many-body systems. Despite many exciting developments, the relatively young field of quantum simulation still offers space for new theoretical advances to emerge at the intersection of physics, mathematics and computer science. In the first part of my talk I will discuss our new approach to tomographic reconstruction in quantum many-body systems, which overcomes some of the current readout limitations in quantum simulators that prevent monitoring the quantum properties one would like to understand in the first place. I will show the application of the method in continuous-field quantum simulators, giving insights into equilibrium states and the dynamics of phonon occupation numbers, and even allowing one to predict recurrences of the dynamics. In the second part I will switch gears to complexity theory and discuss an average-case hardness result for tensor network algorithms. Tensor network methods have proven to be an excellent tool for classically simulating quantum systems in one dimension, and we provide a step towards understanding whether there is a fundamental limitation to their performance in higher dimensions. These two lines of research are two examples of the interdisciplinary efforts to understand and harness the quantum nature of nature and to develop fully quantum quantum simulators.
Patricia Contreras Tejada, Instituto de Ciencias Matemáticas
A resource theory of entanglement with a unique multipartite maximally entangled state
Patricia Contreras-Tejada,^{1} Carlos Palazuelos,^{2,1} and Julio I. de Vicente^{3}
^{1} Instituto de Ciencias Matemáticas, E-28049 Madrid, Spain
^{2} Departamento de Análisis Matemático y Matemática Aplicada, Universidad Complutense de Madrid, E-28040 Madrid, Spain
^{3} Departamento de Matemáticas, Universidad Carlos III de Madrid, E-28911 Leganés (Madrid), Spain
Entanglement theory is formulated as a quantum resource theory in which the free operations are local operations and classical communication (LOCC). This defines a partial order among bipartite pure states that makes it possible to identify a maximally entangled state, which turns out to be the most relevant state in applications. However, the situation changes drastically in the multipartite regime. Not only do there exist inequivalent forms of entanglement forbidding the existence of a unique maximally entangled state, but recent results have shown that LOCC induces a trivial ordering: almost all pure entangled multipartite states are incomparable (i.e. LOCC transformations among them are almost never possible). In order to cope with this problem we consider alternative resource theories in which we relax the class of LOCC to operations that do not create entanglement. We consider two possible theories depending on whether resources correspond to multipartite entangled or genuinely multipartite entangled (GME) states and we show that they are both nontrivial: no inequivalent forms of entanglement exist in them and they induce a meaningful partial order (i.e. every pure state is transformable to more weakly entangled pure states). Moreover, we prove that the resource theory of GME that we formulate here has a unique maximally entangled state, the generalized GHZ state, which can be transformed to any other state by the allowed free operations.
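For reference (an editorial addition, not part of the abstract), the generalized GHZ state in the standard convention, for n parties each of local dimension d, is

```latex
|\mathrm{GHZ}_{n,d}\rangle \;=\; \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} |i\rangle^{\otimes n},
\qquad \text{e.g.} \qquad
|\mathrm{GHZ}_{3,2}\rangle \;=\; \frac{|000\rangle + |111\rangle}{\sqrt{2}}.
```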
Tamara Kohler, University College London
Toy models of holographic duality between local Hamiltonians
Holographic quantum error correcting codes (HQECC) have been proposed as toy models for the AdS/CFT correspondence, and exhibit many of the features of the duality. HQECC give a mapping of states and observables; however, they do not map local bulk Hamiltonians to local Hamiltonians on the boundary. In this work, we combine HQECC with Hamiltonian simulation theory to construct a bulk-boundary mapping between local Hamiltonians, whilst retaining all the features of the HQECC duality. This allows us to construct a duality between models, encompassing the relationship between bulk and boundary energy scales and time dynamics.
It also allows us to construct a map in the reverse direction: from local boundary Hamiltonians to the corresponding local Hamiltonian in the bulk. Under this boundary-to-bulk mapping, the bulk geometry emerges as an approximate, low-energy, effective theory living in the codespace of an (approximate) HQECC on the boundary. At higher energy scales, this emergent bulk geometry is modified in a way that matches the toy models of black holes proposed previously for HQECC. Moreover, the duality on the level of dynamics shows how these toy-model black holes can form dynamically.
Sisi Zhou, Yale University
Quantum error correction in quantum metrology
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit (HL), which can be achieved for noiseless quantum systems but is not achievable in general for systems subject to noise. In this talk, we review our recent efforts to enhance measurement precision through quantum error correction (QEC), a general method for protecting a quantum system from the damaging effects of noise. The metrological scheme considered here is the sequential scheme, where a signal Hamiltonian under general Markovian noise is sensed via a single probe with access to noiseless ancillas and fast quantum controls. The QEC strategy aims at achieving a balance between correcting the noise and preserving the signal. The HL in the sequential scheme manifests itself as a “1/t” scaling of the measurement precision, where t is the total sensing time. We put forward a necessary and sufficient condition—the HNLS condition—for achieving the HL in both the single-parameter and the multi-parameter case. When HNLS is satisfied, the QEC code achieving the best possible precision can be found by solving a (quadratic) semidefinite program. When HNLS is violated, the measurement precision follows the standard quantum limit (SQL)—a “1/√t” scaling of the measurement precision—but there is still an optimal approximate QEC strategy achieving the best precision allowed.
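Schematically (an editorial gloss, not part of the abstract), the two precision scalings contrasted above are

```latex
\delta\hat{\omega}_{\mathrm{HL}} \;\propto\; \frac{1}{t}
\qquad \text{vs.} \qquad
\delta\hat{\omega}_{\mathrm{SQL}} \;\propto\; \frac{1}{\sqrt{t}},
```

where t is the total sensing time and the HNLS condition decides which scaling is attainable.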
Daniel Stilck França, University of Copenhagen
Faster quantum and classical SDP approximations for quadratic binary optimization
We give a quantum speedup for solving the canonical semidefinite programming relaxation for binary quadratic optimization. This class of relaxations for combinatorial optimization has so far eluded quantum speedups. Our methods combine ideas from quantum Gibbs sampling and matrix exponent updates. A dequantization of the algorithm also leads to a faster classical solver. For generic instances, our quantum solver gives a nearly quadratic speedup over state-of-the-art algorithms. We also provide an efficient randomized rounding procedure that converts approximately optimal SDP solutions into constant-factor approximations of the original quadratic optimization problem.
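To illustrate the kind of randomized rounding mentioned above, here is a textbook Goemans-Williamson-style hyperplane rounding on a hand-picked example (an editorial sketch, not the authors' procedure): given unit vectors solving the MaxCut SDP relaxation for the 5-cycle, a random hyperplane converts them into a binary cut.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# Known optimal SDP vectors for MaxCut on the 5-cycle:
# unit vectors at angles 4*pi*k/5, so adjacent vertices are nearly antipodal
V = np.array([[np.cos(4 * np.pi * k / 5), np.sin(4 * np.pi * k / 5)]
              for k in range(n)])
sdp_value = sum((1 - V[i] @ V[j]) / 2 for i, j in edges)  # ~4.52 > MaxCut = 4

# Hyperplane rounding: sign of the projection onto a random direction
best = 0
for _ in range(100):
    g = rng.normal(size=2)
    x = np.sign(V @ g)                       # +1 / -1 assignment per vertex
    cut = sum(x[i] != x[j] for i, j in edges)
    best = max(best, cut)
print(sdp_value, best)
```

The relaxed value (about 4.52) upper-bounds the true maximum cut (4), and the rounded assignment recovers a constant-factor approximation, mirroring the rounding guarantee stated in the abstract.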
This is joint work with Fernando Brandão and Richard Kueng (Caltech).
Christoph Hirche, University of Copenhagen
The quantum information bottleneck: Properties and applications
QMATH, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark
In classical information theory, the information bottleneck method (IBM) can be regarded as a method of lossy data compression which focuses on preserving meaningful (or relevant) information. As such it has recently gained a lot of attention, primarily for its applications in machine learning and neural networks. A quantum analogue of the IBM has recently been defined, and an attempt at providing an operational interpretation of the so-called quantum IB function as an optimal rate of an information-theoretic task has recently been made by Salek et al. However, their proof is based on a conjecture that the quantum IB function is convex, and the expression for the rate function involves certain entropic quantities which occur explicitly in the very definition of the underlying information-theoretic task, making the latter somewhat contrived. We overcome both of these drawbacks by first proving the convexity of the quantum IB function, and then giving an alternative operational interpretation of it as the optimal rate of a bona fide information-theoretic task, namely quantum source coding with quantum side information at the decoder. We similarly investigate properties of the related privacy funnel function. Finally, we discuss several fundamental problems that need to be overcome before the quantum IBM can be used for applications such as quantum machine learning, and provide some general progress towards their resolution.
Based on joint work with Nilanjana Datta and Andreas Winter.
Geoff Penington, Stanford University
Random states, gravity and the Page curve
Random quantum states obey a Page curve, where the entanglement entropy is controlled by the size of the smaller subsystem. It has long been conjectured that evaporating black holes obey a similar Page curve; however, until recently it wasn't possible to calculate this within the gravitational description of the theory. I shall focus on a very simple toy-model version of an evaporating black hole, emphasizing the similarities between the gravity and random state calculations.
Ángela Capel Cuevas, Instituto de Ciencias Matemáticas
On the modified logarithmic Sobolev inequality for the heat-bath dynamics for 1D systems
The mixing time of Markovian dissipative evolutions of open quantum many-body systems can be bounded using optimal constants of certain quantum functional inequalities, such as the modified logarithmic Sobolev constant. For classical spin systems, the positivity of such constants follows from a mixing condition for the Gibbs measure, via quasi-factorization results for the entropy.
Inspired by the classical case, we present a strategy to derive the positivity of the modified logarithmic Sobolev constant associated to the dynamics of certain quantum systems from some clustering conditions on the Gibbs state of a local, commuting Hamiltonian. In particular, we show that for the heat-bath dynamics for 1D systems, the modified logarithmic Sobolev constant is positive under the assumptions of a mixing condition on the Gibbs state and a strong quasi-factorization of the relative entropy.
Based on joint work with Ivan Bardet, Angelo Lucia, David PérezGarcía and Cambyse Rouzé (arXiv: 1908.09004).
Juani Bermejo-Vega, Freie Universität Berlin
Closing gaps of a quantum advantage with short-time Hamiltonian dynamics
A near-term goal in quantum computation and simulation is to realize a quantum device showing a computational advantage. The goal here is to perform a quantum experiment whose outcome cannot be efficiently predicted on a classical computer. A hope of this program is that performing such an experiment may be simpler than building a universal quantum computer. Candidate quantum devices for this task include boson samplers and Google AI's random quantum circuits.
In this talk, we will review the current approaches towards demonstrating superior quantum computational power, as well as associated challenges concerning scalability, verifiability and complexity-theoretic soundness. We will present a proposal based on short-time evolutions of 2D Ising models [1-3]. Our proposal has the benign features of being hard to simulate classically (assuming plausible complexity-theoretic conjectures) while being reasonably close to cold-atomic quantum implementations, and admitting a simple and efficient quantum verification protocol. We will also present recent complexity-theoretic results (on anti-concentration and average-case hardness) [3], giving the strongest evidence to date that Hamiltonian quantum simulation architectures are classically intractable. Our work shows that realistic quantum simulators can demonstrate reliable quantum advantages.
References:
[1] J. Bermejo-Vega, D. Hangleiter, M. Schwarz, R. Raussendorf, and J. Eisert, Architectures for quantum simulation showing a quantum speedup, Phys. Rev. X 8, 021010 (2018), https://arxiv.org/abs/1703.00466
[2] D. Hangleiter, J. Bermejo-Vega, M. Schwarz, and J. Eisert, Anticoncentration theorems for schemes showing a quantum speedup, Quantum 2, 65 (2018), https://arxiv.org/abs/1706.03786
[3] J. Haferkamp, D. Hangleiter, A. Bouland, B. Fefferman, J. Eisert, and J. Bermejo-Vega, Closing gaps of a quantum advantage with short-time Hamiltonian dynamics, https://arxiv.org/abs/1908.08069
Special seminar: Damian Markham, Laboratoire d'Informatique de Paris 6, Sorbonne Université
Fault-tolerant quantum speedup with constant depth circuits
One of the main goals in the field of quantum computing is demonstrating a quantum-over-classical advantage (or simply quantum advantage). A key obstacle in the way of achieving this goal is noise, which destroys whatever quantum advantage a quantum device possesses. In this work, we present a new example of a sampling problem, defined by local measurements on efficiently preparable 2D graph states, which cannot be performed efficiently (in polynomial time) on a classical computer unless complexity-theoretic conjectures that are widely believed to be true turn out to be false; this is what is meant by a quantum speedup. Furthermore, this sampling problem is robust against general types of noise, by virtue of quantum error correction, and also possesses desirable properties for experimental implementation such as low overhead, nearest-neighbor interactions, regular structure, and fixed-angle, non-adaptive Pauli measurements. Our sampling problem can be viewed in the circuit-model picture as a constant depth circuit acting on a polynomial number of ancilla qubits. Therefore, we present a quantum circuit with constant depth giving rise to a sampling problem which demonstrates a quantum speedup, and which is robust to noise.
Benjamin Brown, The University of Sydney
Advances in surface code quantum computation
A scalable quantum computer will require robust logical qubits that can be maintained for an arbitrarily long time. We will also need to perform a universal set of fault-tolerant logical operations on the encoded qubits to complete computations. With these desiderata in mind, the surface code [1, 2] quickly emerged as one of the leading candidate quantum error-correcting codes to maintain the logical qubits of the first generation of scalable quantum computers. Its relatively simple construction, which is in part due to its two-dimensional layout, means that this code is particularly appealing for fabrication in the laboratory. While the relative simplicity of the surface code is advantageous to its realisation, a drawback of the model is the considerable number of physical qubits that are required to protect a single logical qubit. There is also a significant overhead when performing fault-tolerant logical gates with this model. Indeed, it has long been proposed that we complete a universal set of fault-tolerant gates with resource-intensive distillation methods. Presently, it remains difficult to reduce the noise of laboratory hardware below the threshold rate of errors that the surface code can tolerate. In this talk I will summarise some recent results that alleviate these issues. I will first discuss work showing how to complete a universal set of fault-tolerant logical gates with the surface code without the need for distillation methods. This includes work showing how we can braid Majorana modes lying at the corners of the planar code to realise Clifford operations [3]. I will also explain how the surface code can be connected to its three-dimensional generalisation to realise a non-Clifford gate using a two-dimensional system [4]. Both of these proposals offer new routes to reduce the resource cost of scalable quantum computation, as they circumvent the need for conventional distillation methods to complete a universal set of logic gates.
Finally, I will briefly mention new results where we show that we can increase the threshold of a tailored variant of the surface code by specialising the decoder to deal with the common situation where the noise each qubit experiences is biased towards dephasing [5]. This development means that the surface code can tolerate a higher rate of noise such that it is more readily constructed in the laboratory with modern hardware.
References:
[1] A. Y. Kitaev, Ann. Phys. 303, 2 (2003).
[2] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, J. Math. Phys. 43, 4452 (2002).
[3] B. J. Brown, K. Laubscher, M. S. Kesselring, and J. R. Wootton, Phys. Rev. X 7, 021029 (2017).
[4] B. J. Brown, arXiv:1903.11634 (2019).
[5] D. K. Tuckett, S. D. Bartlett, S. T. Flammia, and B. J. Brown, arXiv:1907.02554 (2019).
Chris Cade, Centrum Wiskunde & Informatica
Narrowing down the important resources for quantum computation
An open problem in quantum computing is to understand precisely which property (or properties) of quantum physics allows quantum computers to seemingly outperform their classical counterparts. In this talk I will consider two ways to approach this question: by studying restricted models of quantum computation and seeing if they can still outperform classical computers for a variety of interesting problems; and by seeing what happens when we add powers to quantum and classical computers. I will begin by discussing a couple of different results relating to the computational power of the one clean qubit model, and then introduce the notion of postselection as it relates to computation, focusing on some recent results that compare classical and quantum computation with postselection in the query complexity setting.
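As a small illustration of the one clean qubit (DQC1) model mentioned above (a minimal numpy sketch of the standard trace-estimation circuit, not code from the talk): a single pure qubit coupled to a maximally mixed register via a controlled unitary lets one estimate the normalized trace of that unitary.

```python
import numpy as np

d = 4  # dimension of the maximally mixed register (two qubits)
# A hypothetical test unitary whose trace we estimate: two Z-rotations
theta = 0.7
Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
U = np.kron(Rz, Rz)

# DQC1 input: one clean qubit in |+> (after a Hadamard), register in I/d
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.kron(np.outer(plus, plus), np.eye(d) / d)

# Controlled-U with the clean qubit as control
CU = np.block([[np.eye(d), np.zeros((d, d))],
               [np.zeros((d, d)), U]])
rho = CU @ rho @ CU.conj().T

# The expectation of X on the clean qubit equals Re tr(U) / d
X = np.array([[0.0, 1.0], [1.0, 0.0]])
expX = np.real(np.trace(np.kron(X, np.eye(d)) @ rho))
print(expX)
```

Repeating the circuit and measuring the clean qubit thus estimates the normalized trace, the canonical DQC1-complete task, even though the register is never in a pure state.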
Saeed Mehraban, California Institute of Technology
The old-new extrapolation technique and its quantum interfaces
The logarithm of an analytic function can be approximated by truncated Taylor series around a given point if the function is nonzero on a wide neighborhood around that point. This folklore knowledge was very recently utilized by Barvinok and Soberon, and in several subsequent works, to design efficient algorithms for specific instances of the permanent and partition functions which are known to be hard in the worst case. In this talk, I explain the following implications and connections between this method, known as the extrapolation technique, and quantum computation and quantum statistical physics:

1. There is a quasi-polynomial-time algorithm that approximates the permanent of Gaussian matrices with vanishing mean. The complexity of random permanents is related to the computational power of a linear optical experiment known as Boson Sampling. A baseline conjecture to certify the power of Boson Sampling is that the permanent is hard to approximate for i.i.d. Gaussian matrices with zero mean and unit variance. Our result shows that if the matrix has a nonzero but vanishing mean, the permanent can indeed be approximated classically. This suggests that the complexity of a random permanent undergoes an efficient-to-hard transition as the mean value decreases. This is based on joint work with Lior Eldar.

2. There is a quasi-polynomial-time algorithm that approximates the partition function of a quantum many-body system above the phase-transition temperature. This problem is known to be NP-hard below the phase transition.

3. We prove that the exponential decay of correlations in quantum many-body systems is a necessary condition for the partition function to be nonzero in a wide neighborhood of a given temperature. Hence, the extrapolation technique for quantum partition functions works only in the regime where correlations decay exponentially fast.
Items 2 and 3 were established in joint work with Aram Harrow and Mehdi Soleimanifar.
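To make item 1 concrete (an editorial sketch, not the authors' algorithm): the permanent can be computed exactly by Ryser's formula, but only in exponential time, which is why approximation schemes such as the extrapolation technique matter.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's formula, O(2^n * n^2) time."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            rowsums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

# Sanity checks: perm(I) = 1 and perm of the all-ones n x n matrix is n!
print(permanent(np.eye(3)), permanent(np.ones((3, 3))))

# A small i.i.d. complex Gaussian matrix: the conjectured hard regime at zero mean
rng = np.random.default_rng(0)
G = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
print(abs(permanent(G)))
```

Even this best-known exact method is exponential in n, so any classical handle on permanents of random Gaussian matrices has to come from approximation, as in the result above.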
Shima Bab Hadiashar, Institute for Quantum Computing
On the entanglement cost of one-shot compression
We revisit the task of compressing an ensemble of quantum states in the one-shot setting. The protocols achieving the best compression use shared entanglement that may be much larger than the original message, while others (with potentially larger communication cost) have entanglement cost bounded by the message length. This motivates the question as to whether entanglement is truly necessary for compression, and if so, how much of it is needed.
Motivated by questions in communication complexity, we lift certain restrictions imposed on compression protocols in tasks such as state splitting and channel simulation. We show that an ensemble constructed by Jain, Radhakrishnan, and Sen (ICALP '03) saturates the known bounds on the sum of communication and entanglement costs, even with the relaxed compression protocols we study.
This is joint work with Ashwin Nayak.
Alexandru Gheorghiu, California Institute of Technology
Computationally-secure and composable remote state preparation
Quantum computers are expected to efficiently solve certain problems that are believed to be computationally intractable for classical computers. This raises the question: how can classical users efficiently verify the results produced by quantum computers? In this talk I will discuss one approach to answering this question. I will present a protocol between a classical polynomial-time verifier and a quantum polynomial-time prover that realizes the following functionality, with computational security: the verifier chooses one of the observables Z, X, Y, (X+Y)/√2, (X−Y)/√2; the prover receives a uniformly random eigenstate of the observable chosen by the verifier; the verifier receives a classical description of that state. The prover is unaware of which state it received, and moreover, the verifier can check with high confidence whether the preparation was successful.
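To spell out the five observables in the functionality above (a small numpy check, an editorial addition assuming the standard Pauli matrices): each is Hermitian with eigenvalues ±1, so "a uniformly random eigenstate" means one of two pure states chosen with probability 1/2.

```python
import numpy as np

# The five single-qubit observables used in the protocol
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
observables = [Z, X, Y, (X + Y) / np.sqrt(2), (X - Y) / np.sqrt(2)]

# Each is Hermitian with spectrum {-1, +1}; its eigenvectors are the
# candidate states the prover ends up holding.
spectra = [np.linalg.eigvalsh(O) for O in observables]
for vals in spectra:
    print(np.round(vals, 6))
```

Note that (X+Y)/√2 and (X−Y)/√2 are again valid ±1-valued observables because X+Y and X−Y each have eigenvalues ±√2.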
This remote state preparation primitive can then be used to perform blind and verifiable delegated quantum computation (DQC). Recently, both blind and verifiable DQC were shown to be possible, under computational assumptions, with a classical polynomial-time client (Mahadev 2017, Mahadev 2018). Compared to the work of Mahadev, our protocol is more modular, applies to the measurement-based model of computation (instead of the Hamiltonian model) and is composable.
The talk is based on joint work with Thomas Vidick: https://arxiv.org/abs/1904.06320.
Tina Zhang, California Institute of Technology
Classical zero-knowledge arguments for quantum computations
We show that every language in QMA admits a classical-verifier, quantum-prover zero-knowledge argument system which is sound against quantum polynomial-time provers and zero-knowledge for classical (and quantum) polynomial-time verifiers. The protocol builds upon two recent results: a computational zero-knowledge proof system for languages in QMA, with a quantum verifier, introduced by Broadbent et al. (FOCS 2016), and an argument system for languages in QMA, with a classical verifier, introduced by Mahadev (FOCS 2018).
Srijita Kundu, National University of Singapore
Query-to-communication lifting theorems for adversary bounds
For a query function f : {0,1}^{n} → {0,1}, one can define a corresponding communication function f ◦ G^{n} by composing f with a suitable communication function, or gadget, G. A query-to-communication lifting theorem is then a lower bound on a communication complexity measure of f ◦ G^{n} in terms of a query complexity measure of f. Lifting theorems lower bounding deterministic communication complexity by deterministic query complexity, and randomized communication complexity by randomized query complexity, are known. However, these lifting theorems require large gadgets, and their proof techniques are not known to generalize to a quantum lifting theorem.
In this work, we present a new technique for lower bounding the communication complexity of f ◦ G^{n} by adversary bounds on f, with G being a constant-sized gadget. There are a number of adversary bounds in query complexity: the classical adversary bound is a lower bound on randomized query complexity, which is known to be cubically tight for total functions. The positive-weights adversary bound is a lower bound on quantum query complexity, and its generalization, the negative-weights adversary bound, is a tight characterization of the same. Our technique allows us to lower bound the randomized communication complexity of f ◦ G^{n} by the classical adversary bound on f, which is stronger than previously known results for constant gadgets. Our proof easily generalizes to quantum communication complexity for bounded-round protocols as well. For unbounded-round quantum protocols, we present some progress towards lower bounding the communication complexity by the positive-weights adversary bound.
Based on joint work with Anurag Anshu and Shalev Ben-David.
Richard Jakob Kueng, California Institute of Technology
Predicting features of quantum systems using classical shadows
Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict M linear functions with bounded Hilbert-Schmidt norm from only log(M) measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches.
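A minimal single-qubit sketch of the classical-shadows idea (an editorial illustration in the random-Pauli-measurement convention of arXiv:1908.08909, not the paper's code): each shot is measured in a random Pauli basis, inverted through the measurement channel, and the resulting single-shot snapshots are averaged to predict a linear feature.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
# Rotations mapping the X, Y, Z eigenbases to the computational basis
rotations = [H, H @ S.conj().T, I2]

# An "unknown" single-qubit state whose feature tr(Z rho) we predict
a = 0.3
psi = np.array([np.cos(a), np.sin(a)], dtype=complex)
rho = np.outer(psi, psi.conj())
exact = np.trace(Z @ rho).real

shadows = []
for _ in range(20000):
    U = rotations[rng.integers(3)]        # pick a random Pauli basis
    p = np.abs(U @ psi) ** 2              # Born rule for outcomes 0, 1
    b = rng.choice(2, p=p / p.sum())
    e = np.zeros(2); e[b] = 1.0
    # Invert the measurement channel: rho_hat = 3 U^dag |b><b| U - I
    shadows.append(3 * U.conj().T @ np.outer(e, e) @ U - I2)

est = np.trace(Z @ np.mean(shadows, axis=0)).real
print(est, exact)
```

The same pool of snapshots can be reused to estimate many observables at once, which is the source of the log(M) scaling highlighted in the abstract.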
Joint work with Hsin-Yuan (Robert) Huang.
Reference: arXiv:1908.08909 (2019)
Yuan Su, University of Maryland
Nearly optimal lattice simulation by product formulas
Product formulas provide a straightforward yet surprisingly efficient approach to quantum simulation. We show that this algorithm can simulate an n-qubit Hamiltonian with nearest-neighbor interactions evolving for time t using only (nt)^{1+o(1)} gates. While it is reasonable to expect this complexity—in particular, this was claimed without rigorous justification by Jordan, Lee, and Preskill—we are not aware of a straightforward proof. Our approach is based on an analysis of the local error structure of product formulas, as introduced by Descombes and Thalhammer and significantly simplified here. We prove error bounds for canonical product formulas, which include well-known constructions such as the Lie-Trotter-Suzuki formulas. We also develop a local error representation for time-dependent Hamiltonian simulation, and we discuss generalizations to periodic boundary conditions, constant-range interactions, and higher dimensions. Combined with a previous lower bound, our result implies that product formulas can simulate lattice Hamiltonians with nearly optimal gate complexity.
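As a toy numerical check of the product-formula idea (an editorial example with a hypothetical 2-qubit Hamiltonian; the talk concerns much sharper bounds for large lattices): the first-order Lie-Trotter formula for H = A + B has an error that shrinks as the number of Trotter steps r grows.

```python
import numpy as np

def evolve(Hm, t):
    """Compute e^{-i Hm t} for Hermitian Hm via eigendecomposition."""
    vals, vecs = np.linalg.eigh(Hm)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# Toy nearest-neighbor Hamiltonian H = A + B with non-commuting terms
A = np.kron(Z, Z)
B = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)

t = 1.0
exact = evolve(A + B, t)
errs = []
for r in (1, 10, 100):
    step = evolve(A, t / r) @ evolve(B, t / r)   # one Lie-Trotter step
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2))
    print(r, errs[-1])   # error decreases roughly like t^2 / r
```

Higher-order Lie-Trotter-Suzuki formulas improve this scaling further; the talk's contribution is a local-error analysis showing that, for lattice Hamiltonians, the gate count of such schemes is nearly optimal.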
This talk is based on joint work with Andrew M. Childs [1].
Reference:
[1] Andrew M. Childs and Yuan Su. Nearly optimal lattice simulation by product formulas. Physical Review Letters, 123:050503, 2019. arXiv:1901.00564.