Quantum Computational Intelligence

Imagine solving mathematical problems where you could use the full physical range of computational possibilities within the laws of the universe, and be inspired by the sublime algorithmic intelligence of the human brain. This is precisely why the emerging field of quantum machine learning (QML) has received so much recent attention. In this blog post, we’d like to discuss the fundamental ideas and applied value of machine learning in general, and then contextualize these ideas in a new way within the paradigm of quantum computation.

Machine learning – a subfield of computer science related to computational statistics and pattern recognition – emerged in its modern incarnation in the mid-to-late 20th century as researchers attempted to build thinking machines. While first-generation artificial intelligence took inspiration from the computers of the 1980s to reason about intelligence, viewing humans as deterministic, syntactical machines, contemporary artificial intelligence instead chooses to build machines that have the adaptability and variability of humans in “coping” with the ill-defined problem of being an individual with incomplete information in a complex world. This has led to the development of many “learning” algorithms that use probabilistic techniques to improve the ability of a machine to process and reason about data over time. These problems can be broadly broken down into supervised learning (pattern recognition; classification), unsupervised learning (clustering), and reinforcement learning (punishment/reward of a computational strategy in a well-defined setting). Machine learning in the classical realm has already been used to solve a variety of problems, including speech recognition, anomaly detection, predictive modelling, image classification, and data mining, across fields as diverse as bioinformatics, finance, information security, aerospace navigation, pharmaceutical chemistry and many more.

The value of quantum machine learning comes from the potential to perform learning tasks far more efficiently than classical algorithms allow – or even with entirely different computational tools. While quantum machine learning has tremendous potential for application, there remain a large number of open problems for researchers in this emerging field.

On the nature of computation, or: Why machine intelligence isn’t special

There is a great deal of debate in the quantum computing research community specifically about machine learning. We propose a reductionist, computational view of cognition: we treat it as we would any other computational task. For some this will strip the idea of quantum machine learning of its magic – but for others (ourselves included), it is exactly the opposite.

While quantum computation is not a panacea for all problems we could want to solve, it is just as much the case that machine learning algorithms are not “special” – we can reason about their usefulness and efficiency as we would for algorithms for search, factoring, and other computational problems. There is, from the current reductionist state of artificial intelligence, no compelling argument for an ethereal quality to the types of computations required for machine intelligence and learning compared to any other computational tasks. While it is existentially challenging to know with increasing certainty that there may be little separating the human mind from a string of bits or even qubits, it is also very beautiful to imagine that we are each an elaborate chain of computations.

There exist many very good classical machine learning algorithms. Regardless of whether an algorithm is classical or quantum, we fundamentally care about performance in terms of the resources (time; memory) used during the computation, and the accuracy of the results. Since we are able to implement different algorithms on quantum computers than we can on classical computers, the reason to use quantum computation for machine learning is that, for some learning tasks, a quantum algorithm can outperform the best known classical algorithm. We already know that there are several cases where this is certainly true.

The progress thus far has generally been to use quantum algorithms (as “subroutines”) for part of a broader scheme that mostly involves classical computation, to speed up the areas where the subroutine has been applied. However, one could also imagine using a fully-quantum algorithm for machine learning where the data itself is quantum, and the entirety of the machine learning algorithm is reliant upon the unique properties of quantum computation. In the sections that follow, we’ll present some examples of both of these approaches to quantum machine learning.

Speeding up machine learning tasks through quantum subroutines

Machine learning algorithms often require many moving parts to extract meaningful patterns from a dataset. Interestingly, not all of these parts need to be quantum in order to use quantum computation to improve overall performance. Many of the important papers we’ve seen in machine learning use an approach whereby classical machine learning techniques are applied, but a piece of the classical algorithm is replaced by a quantum subroutine that performs the same task much more efficiently. In these cases, the data being operated on is classical, but we benefit from a quantum speedup.

A particularly high-value quantum subroutine that we see in the literature so far has been the application of Grover’s algorithm, which provides a quadratic speedup in the search of unstructured databases. Researchers thus far have identified several classical machine learning algorithms – such as k-medians, hierarchical clustering, and even artificial neural networks – where quantum subroutines (including but not limited to Grover’s algorithm) can be used. The problems that so far lend themselves particularly well to quantum subroutines are those involving pattern matching, which receives a quadratic speedup through the use of Grover’s search. As our knowledge of quantum algorithms expands and more researchers work on problems of machine learning using these algorithmic tools, it is almost certain that other quantum subroutines will be found to augment other learning tasks.
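
To make the quadratic scaling concrete, here is a minimal classical statevector simulation of Grover’s search (a numpy sketch; the problem size and marked item are hypothetical example values, and of course the actual speedup requires quantum hardware rather than simulation):

```python
import numpy as np

# Toy statevector simulation of Grover search over N = 2**n items,
# looking for a single marked index `target`.
n, target = 4, 6
N = 2 ** n
psi = np.full(N, 1 / np.sqrt(N))       # start in the uniform superposition

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~ (pi/4) * sqrt(N) iterations
    psi[target] *= -1                  # oracle: phase-flip the marked amplitude
    psi = 2 * psi.mean() - psi         # diffusion: inversion about the mean

print(np.argmax(psi ** 2))             # -> 6, with probability ~ 0.96
```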

Interlude: What is “quantumness”?

A common misconception in quantum information is that the "power" of quantum computing comes from the fact that the wave function is represented by an exponential (in the number of qubits) number of coefficients. Although the latter statement is certainly true, it is nevertheless not the main ingredient for quantum speedup. As a counter-example, imagine a classical probabilistic theory (sometimes also called a fuzzy-logic theory), e.g., a system consisting of \(n\) (classical) bits \(b_1, b_2, \ldots, b_n\), where each state of the system occurs with the corresponding probability \(p_{i_1\ldots i_n}\), with \(i_1, i_2,\ldots, i_n\in\{0,1\}\). It is now obvious that to specify the complete state of such a system one needs the values of \(p_{i_1\ldots i_n}\) for every possible \(i_1, i_2,\ldots, i_n\in\{0,1\}\), i.e., \(2^n\) real coefficients, which is the same as the number of coefficients required in the quantum case.
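
As a minimal numpy sketch of this counting argument (with arbitrary random states standing in for a generic system):

```python
import numpy as np

# Describing n bits probabilistically takes 2**n real numbers,
# just as n qubits take 2**n complex amplitudes.
n = 3

p = np.random.rand(2 ** n)             # classical: one probability per bit string
p /= p.sum()                           # non-negative reals summing to 1

psi = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
psi /= np.linalg.norm(psi)             # quantum: complex amplitudes, unit 2-norm

print(p.size, psi.size)                # -> 8 8: the same exponential count
```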

The only major difference is that in classical probabilistic theories the coefficients are real and must transform according to a stochastic evolution, whereas in the quantum case the coefficients are complex and the allowed transformations are unitary. Or, more technically, a classical probabilistic theory is described by an \(L^1\) space over the reals, whereas quantum mechanics is described by an \(L^2\) space over the complex numbers. The mere fact that in the quantum case the coefficients can be complex has a profound implication: it means that during the evolution, amplitudes can interfere. Good quantum algorithms exploit this fact to the maximum: the "desired" amplitudes interfere constructively, whereas the ones that correspond to non-solutions interfere destructively. Because the total probability must be conserved, this results in a "boost" of the amplitudes that correspond to a solution, which can happen (much) faster than in the classical case. Therefore, what makes a quantum computer "tick" is not the fact that "Hilbert space is a big space" (Carlton Caves), but rather the intrinsic structure of the \(L^2\) space over the complex numbers. \(L^1\) over the reals is an equally big place, but way more boring…[1]
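
A minimal sketch of this difference, assuming nothing beyond numpy: a unitary can cancel amplitudes outright, whereas a stochastic matrix can only mix probabilities.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # unitary (one qubit)
S = np.array([[0.5, 0.5], [0.5, 0.5]])         # stochastic "coin flip" (one bit)

print(H @ H @ np.array([1.0, 0.0]))  # -> [1, 0]: the |1> paths interfere away
print(S @ S @ np.array([1.0, 0.0]))  # -> [0.5, 0.5]: randomness only accumulates
```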

 

[1] Of course, \(L^1\) is not a Hilbert space. In fact, the only \(L^p\) space that is also a Hilbert space is \(L^2\).

Toward (fully) quantum algorithms for quantum machine learning

A “fully” quantum algorithm for machine learning would go beyond quantum subroutines, performing the primary “learning” task on quantum inputs and outputs, and may sometimes need quantum RAM (qRAM) for the storage of states in superposition. These tasks can take advantage of a variety of the unique properties of quantum computation, as we’ll explore in the examples below. [J. Adcock et al, “Advances in Quantum Machine Learning”, arXiv:1512.02900 [quant-ph]]

1. Neural Networks

Neural networks are based on layered graphs, with each node of the graph corresponding to a neuron and the edges corresponding to synapses. Each edge has a corresponding weight; these weights are adjusted in the training phase via gradient descent and back-propagation. Some promising approaches to quantum neural network algorithms are via quantum walks and Boltzmann machines. For the latter, the configuration of graph nodes and edges is given by a Gibbs distribution, and the objective is to maximize the likelihood of the training data under this distribution via gradient ascent. Classically, computing the required gradient takes time exponential in the number of nodes. [Wiebe et al, "Quantum Deep Learning", arXiv:1412.3489 [quant-ph]] made recent progress in this area; however, their training algorithm requires access to data in superposition, i.e., to a quantum random access memory (qRAM). Very recently, a classical equivalent algorithm has been found [Wiebe et al, "Quantum Inspired Training for Boltzmann Machines", arXiv:1507.02642 [quant-ph]].
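
To see why the classical gradient is expensive, here is a brute-force numpy sketch (with hypothetical random weights, not taken from the cited papers) that computes the exact Gibbs distribution and the model correlations appearing in the likelihood gradient by summing over all \(2^n\) configurations:

```python
import numpy as np
from itertools import product

# Exact Gibbs distribution of a tiny fully-visible Boltzmann machine;
# the sums below range over all 2**n configurations, which is exactly
# what becomes intractable classically as n grows.
rng = np.random.default_rng(0)
n = 4
W = np.triu(rng.normal(size=(n, n)), 1)   # pairwise couplings (upper triangle)
b = rng.normal(size=n)                    # biases

v = np.array(list(product([0, 1], repeat=n)))    # all 2**n configurations
E = -((v @ W) * v).sum(axis=1) - v @ b           # E(v) = -v^T W v - b^T v
p = np.exp(-E) / np.exp(-E).sum()                # Gibbs distribution

# Model expectations <v_i v_j>, needed for the log-likelihood gradient:
corr = (p[:, None, None] * v[:, :, None] * v[:, None, :]).sum(axis=0)
```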

2. Bayesian Networks

These are probabilistic directed acyclic graphs that represent dependencies among a set of random variables. They are important in classical machine learning, as they are often used to compute the probability of new data belonging to an existing class via comparison with the training data. Quantum Bayesian networks have been investigated from a theoretical point of view [Monras et al, "Hidden Quantum Markov Models and Non-adaptive Read-out of Many-body States" (2010); R. R. Tucci, "Quantum Bayesian Nets", Int. J. of Mod. Phys. B, 9, 295 (1995)]; however, there is little experimental work done in this field.
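
As a minimal sketch of the classical computation (a toy network with hypothetical probability tables, not drawn from the cited papers), classification reduces to computing posteriors via the chain rule:

```python
# Toy Bayesian network A -> B, A -> C; the chain rule gives
# P(A, B, C) = P(A) * P(B | A) * P(C | A).
p_a = {0: 0.7, 1: 0.3}
p_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(B | A)
p_c = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}   # P(C | A)

def joint(a, b, c):
    return p_a[a] * p_b[a][b] * p_c[a][c]

# Posterior P(A = 1 | B = 1, C = 0), as used when classifying new data:
posterior = joint(1, 1, 0) / sum(joint(a, 1, 0) for a in (0, 1))
print(posterior)
```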

3. HHL: Solving Linear Systems of Equations (Matrix Inversion)

This is an algorithm for solving a system of linear equations [Harrow et al, "Quantum Algorithm for Linear Systems of Equations", PRL 103, 150502 (2009)] exponentially faster than any known classical algorithm. Such a scheme can be used to speed up perceptron training in neural networks. In fact, it can be used for most optimization problems in which the function to be optimized is quadratic. See [S. Aaronson, "Read the Fine Print", Nature Physics, 11, 291 (2015)] for some caveats and limitations.
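
The following is a classical numpy sketch of the task HHL addresses, on a hypothetical 2x2 instance; it is meant only to illustrate what the algorithm outputs – a normalized state from which one estimates expectation values, rather than the solution vector read out directly (one of the caveats Aaronson discusses):

```python
import numpy as np

# Given Hermitian A and |b>, HHL prepares a state proportional to the
# solution x of A x = b. Here we solve classically for reference.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # Hermitian, well-conditioned
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)                # classical solution vector
x_state = x / np.linalg.norm(x)          # the normalized state |x> HHL outputs

M = np.diag([1.0, -1.0])                 # an observable of interest
print(x_state @ M @ x_state)             # the kind of quantity one can extract
```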

4. Principal Component Analysis

Principal component analysis reduces the dimensionality of the data by transforming the original data into a new, smaller, uncorrelated set that preserves the main characteristics of the original data. The whole procedure reduces to computing the eigenvalues and eigenvectors of the data covariance matrix. A quantum version of the algorithm was recently suggested in [S. Lloyd et al, "Quantum Principal Component Analysis", Nature Physics, 10, 631 (2014)]. Some caveats are the need for large eigenvalues in the density matrix and the necessity of a qRAM.
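
As a minimal classical numpy sketch of the procedure being quantized (on hypothetical random data):

```python
import numpy as np

# PCA as an eigenproblem on the covariance matrix: keep the k directions
# with the largest eigenvalues and project the data onto them.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                 # 200 samples, 5 features
Xc = X - X.mean(axis=0)                       # center the data

cov = np.cov(Xc, rowvar=False)                # 5 x 5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

k = 2
top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # top-k principal axes
X_reduced = Xc @ top                              # projected data, shape (200, 2)
```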

5. Nearest-Centroid Algorithm for k-Means Clustering

k-means clustering is a machine learning algorithm that partitions unlabelled datasets into k distinct clusters. [Lloyd et al, "Quantum Algorithms for Supervised and Unsupervised Machine Learning", arXiv:1307.0411 [quant-ph] (2013)] found a quantum nearest-centroid algorithm that only classifies vectors after the optimal clustering has been found. In the same paper the authors also developed a k-means algorithm that can be implemented on an adiabatic quantum computer. Some caveats are the need for a qRAM and the fact that the algorithm only classifies the data without finding the clusters.
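
A minimal classical numpy sketch of the nearest-centroid step (with hypothetical centroids and points):

```python
import numpy as np

# Given centroids from an already-computed clustering, assign each new
# vector to the closest one -- the distance step the quantum version speeds up.
centroids = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])   # k = 3
points = np.array([[0.5, 0.2], [4.1, 4.8]])                  # new, unlabelled data

dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
print(dists.argmin(axis=1))   # -> [0 1]: cluster labels for the new points
```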

Conclusions

Quantum machine learning has the potential to be one of the great, untapped application areas for quantum computation. If we look at the trends in computing technologies over the last few decades, we see a movement toward inviting computers to reason about datasets far too large and complex for any human mind, across an incredible range of problems faced by humanity today. This trend has been propelled by the decreasing unit cost of (classical) computational resources and the emergence of probabilistic data mining algorithms – particularly deep learning – that are qualitatively different from many of the techniques that came before them. While it may seem distant today, it is not inconceivable that the emergence of large-scale or purpose-built quantum computers will be our next step in the efficient and meaningful analysis of these massive datasets. However, this exciting future is still a few algorithmic breakthroughs (and perhaps one large-scale quantum computer) away.

Jennifer Fernick – PhD student
jkfernic@uwaterloo.ca, @enjenneer

Vlad Gheorghiu – Postdoctoral researcher
vlad.gheorghiu@uwaterloo.ca

Comments

There's a typo in the sentence, "A particularly high-value quantum subroutine that we see in the literature so far has been the application of Grover’s algorithm, which provides exponential speedup in the search of unstructured databases." Grover's algorithm provides a quadratic, not exponential speedup.

Thank you for noticing that Tom and letting us know. We have made that correction.

Very nice read Jennifer and Vlad, looking forward to more of your work in this topic!

Thanks Markos!

Excellent read. Looking forward to developments in this area. With all these papers on Quantum Machine Learning algorithms, I was wondering where we stand on implementing them. How far have we progressed in developing Quantum Rams, Memory and other Quantum Technologies to implement these algorithms. An interesting article I came across: Implementing handwriting recognition using a 4 qubit NMR test bench [http://arxiv.org/abs/1410.1054]. 

Thanks Shahnawaz!

The paper you mentioned is a very nice find! We are still quite far away from having a scalable quantum architecture, but I am optimistic that we will eventually get there. I would also be interested in a review-like paper on where the experimental part of this field stands, but I am not aware of one. Perhaps this discussion will provide some motivation for writing one.

Nice blog. The article you have shared about the Quantum Computational Intelligence is good.This article is very useful. My friend suggest me to use this blog. I am writing thesis on the topic data mining and it's applications. Thank you for sharing.

Thanks for reading it and finding it useful!
