Daniel Kyungdeock Park, Korea Advanced Institute of Science and Technology
Machine learning is an interesting family of problems for which near-term quantum devices can provide considerable advantages. In particular, an exponential quantum speedup was recently demonstrated for learning a Boolean function that computes the parity of a randomly chosen input bit string with a hidden bit string in the presence of noise, a problem known as learning parity with noise (LPN).

In the first part of the talk, I will present a new quantum LPN algorithm that retains the quantum advantage even when most of the qubits are in the maximally mixed state, as long as one qubit in the system has non-zero polarization. In this scenario, quantum LPN naturally becomes deterministic quantum computation with one qubit (DQC1). The hidden parity function can then be revealed by performing a set of operations that can be interpreted as measuring non-local observables. I will also discuss the source of the quantum advantage in the algorithm from a resource-theoretic point of view.

In practical circumstances, a learner often receives noisy classical data rather than the noisy quantum data considered in the original quantum LPN algorithm. Whether quantum technology can still be beneficial in this setting is an interesting open problem. In the second part of the talk, I will discuss a quantum-classical hybrid algorithm for learning the hidden parity function from noisy classical data. Furthermore, I will discuss quantum random access memory for loading classical data into quantum states, as required in many quantum algorithms.
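For readers unfamiliar with the LPN problem mentioned above, the classical learning task can be sketched in a few lines: a hidden bit string s labels each random input x with the parity s · x mod 2, and each label is flipped with some noise probability. This is an illustrative sketch only (the function and parameter names are not from the talk), not the quantum algorithm being presented.

```python
import random

def lpn_samples(s, n_samples, noise_rate):
    """Generate noisy LPN examples for a hidden bit string s.

    Each example pairs a uniformly random input x with the label
    s . x mod 2, flipped with probability noise_rate.
    (Illustrative sketch; not the quantum algorithm from the talk.)
    """
    n = len(s)
    samples = []
    for _ in range(n_samples):
        x = [random.randint(0, 1) for _ in range(n)]
        parity = sum(si * xi for si, xi in zip(s, x)) % 2
        if random.random() < noise_rate:
            parity ^= 1  # label corrupted by classical noise
        samples.append((x, parity))
    return samples
```

The learner's task is to recover s from such examples; the noise is what makes the problem hard classically and is where the quantum speedup enters.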