Some Learning Bounds and Guarantees for Testing (Quantum) Hypotheses
Machine learning is a powerful tool, yet we often do not know how well a learning algorithm can perform on a given task. One standard approach to bounding the accuracy of a learning algorithm is to reduce the learning task to hypothesis testing. Fano's inequality then states that a large amount of mutual information between the learner's observations and the unknown parameters is a necessary condition for success.
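For concreteness, one standard form of Fano's inequality, assuming the unknown parameter $\theta$ is drawn uniformly from a finite set $\Theta$ and the learner produces an estimate $\hat{\theta}$ from observations $X$, reads

\[
\Pr[\hat{\theta} \neq \theta] \;\geq\; 1 - \frac{I(\theta ; X) + \log 2}{\log |\Theta|},
\]

so a learner with small error probability must have acquired mutual information $I(\theta ; X)$ close to $\log |\Theta|$.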
In this talk, I will describe how such a condition is also sufficient for success at some learning tasks, thereby providing a purely information-theoretic guarantee for learning. Noting that this guarantee extends immediately to quantum information theory, I will then introduce the task of "testing quantum hypotheses", in which the unknown parameters of the learning task are prepared in a quantum register in superposition rather than sampled stochastically. The learner's success is then measured by their ability to establish quantum correlations with that register. I will discuss ongoing attempts to characterize this scenario.
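As a minimal sketch of this quantum setting (the precise formalization is part of the ongoing work, so the notation here is illustrative), one may picture the parameter register $R$ prepared in superposition, with the learner holding the system $S$ of a joint state such as

\[
|\psi\rangle_{RS} \;=\; \sum_{\theta \in \Theta} \sqrt{p_\theta}\, |\theta\rangle_R \otimes |\phi_\theta\rangle_S,
\]

where success is measured not by guessing $\theta$, but by the quantum correlations (e.g., entanglement) that the learner's output register shares with $R$.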