Bill Fefferman, University of Maryland and National Institute of Standards and Technology
Proving rigorous separations between the power of quantum and classical computation is a primary goal of computational complexity theory. Starting with the work of Terhal and DiVincenzo, and of Bremner, Jozsa, and Shepherd, it has been established that quantum computers can efficiently sample from distributions that cannot be sampled by randomized classical algorithms. These hardness results do not hold in the case of "approximate" sampling—and as such do not forbid the existence of a classical algorithm that samples from a distribution close in statistical distance to the idealized quantum distribution. This notion of approximate hardness is motivated both theoretically and experimentally, since no physical implementation of a quantum computer can be expected to sample exactly from its ideal output distribution. In this talk I'll give an overview of these sampling separations and introduce a new framework for studying the hardness of approximate sampling based on the ubiquitous quantum primitive of Fourier sampling. We'll also discuss how this work relates to that of Aaronson and Arkhipov, and of Bremner, Montanaro, and Shepherd.
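The notion of closeness used above is statistical (total variation) distance between output distributions. A minimal sketch of this quantity, with hypothetical example distributions chosen purely for illustration:

```python
# Statistical (total variation) distance between two distributions:
# d(P, Q) = (1/2) * sum over outcomes x of |P(x) - Q(x)|.
# An approximate classical sampler is one whose output distribution Q
# satisfies d(P, Q) <= epsilon for the ideal quantum distribution P.

def total_variation_distance(p, q):
    """Statistical distance between two distributions given as dicts outcome -> probability."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in outcomes)

# Hypothetical example: an ideal two-bit output distribution vs. a noisy approximation.
ideal = {"00": 0.5, "11": 0.5}
approx = {"00": 0.45, "01": 0.05, "11": 0.5}
print(total_variation_distance(ideal, approx))  # 0.05
```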