There’s a new, faster way to train AI
Less-than-one-shot learning can train AI much faster than previous methods.
By Media Relations

Researchers have developed a new method to make machine learning more efficient.
The technique, called “less-than-one-shot learning,” can train an AI model to accurately identify more object classes than the number of examples it was trained on, a major shift from the expensive, time-consuming process that currently requires thousands of examples of each object for accurate identification.
“More efficient machine learning and deep learning models mean that AI can learn faster, are potentially smaller, and are lighter and easier to deploy,” said Ilia Sucholutsky, a PhD candidate at the University of Waterloo’s Department of Statistics and Actuarial Science and lead author on the study introducing the method. “These improvements will unlock the ability to use deep learning in settings where we previously couldn’t because it was just too expensive or impossible to collect more data.
“As machine-learning models start to appear on Internet-of-Things devices and phones, we need to be able to train them more efficiently, and ‘less-than-one-shot learning’ could make this possible,” Sucholutsky said.
The “less-than-one-shot learning” technique builds on previous work Sucholutsky and his supervisor, Professor Matthias Schonlau, have done on soft labels and data distillation. In that work, the researchers managed to train a neural network to classify handwritten images of all 10 digits using only five “carefully designed soft-label examples,” which is less than one example per digit.
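The key ingredient is the soft label. A minimal sketch, with hypothetical values not taken from the paper: where a hard (one-hot) label commits an example to a single class, a soft label assigns probability mass across several classes, letting one example carry information about more than one digit.

```python
# Hypothetical labels for a handwritten image that mostly looks like a
# "3" but partly like an "8". Indices 0-9 correspond to the ten digits.
hard_label = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]      # digit 3, full certainty
soft_label = [0, 0, 0, 0.6, 0, 0, 0, 0, 0.4, 0]  # 60% "3", 40% "8"

# A soft label is still a probability distribution over the classes.
print(abs(sum(soft_label) - 1.0) < 1e-9)
```

Because each soft-labelled example spreads information across classes, a carefully designed set of five such examples can jointly cover all ten digit classes.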
In developing the new technique, the researchers used the “k-nearest neighbours” (k-NN) machine learning algorithm, which classifies data based on the thing to which it’s most similar. They used k-NN because it makes the analysis more tractable, but “less-than-one-shot learning” can be used with any classification algorithm.
Using k-NN, the researchers show that machine learning models can learn to distinguish between objects even when there are fewer training examples than classes. In theory, the approach can work for any classification task.
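The combination of k-NN and soft labels can be sketched with a toy classifier. The prototype positions and soft-label values below are hypothetical illustrations, not figures from the paper; they show how two carefully designed examples can separate three classes.

```python
# A minimal soft-label k-NN sketch in the spirit of "less-than-one-shot"
# learning: two prototype points carry soft labels over THREE classes,
# and a distance-weighted vote recovers all three.

def soft_knn_predict(query, prototypes, eps=1e-9):
    """Sum the distance-weighted soft labels of every prototype; return the argmax class."""
    num_classes = len(prototypes[0][1])
    scores = [0.0] * num_classes
    for position, soft_label in prototypes:
        weight = 1.0 / (abs(query - position) + eps)  # inverse-distance weight
        for c in range(num_classes):
            scores[c] += weight * soft_label[c]
    return max(range(num_classes), key=lambda c: scores[c])

# Two examples on a 1-D feature axis, soft-labelled over classes {0, 1, 2}:
prototypes = [
    (0.0, [0.6, 0.4, 0.0]),  # mostly class 0, partly class 1
    (1.0, [0.0, 0.4, 0.6]),  # mostly class 2, partly class 1
]

print(soft_knn_predict(0.1, prototypes))  # → 0: the left prototype dominates
print(soft_knn_predict(0.5, prototypes))  # → 1: the shared "middle" class wins
print(soft_knn_predict(0.9, prototypes))  # → 2: the right prototype dominates
```

Near either prototype, that prototype’s dominant class wins; at the midpoint, the probability mass both prototypes share on class 1 outweighs either endpoint class, so a third class emerges from only two examples.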
“Something that looked absolutely impossible at the outset indeed has been shown to be possible,” said Schonlau, of Waterloo’s Department of Statistics and Actuarial Science. “This is absolutely astounding because a model can learn more classes than you have examples of, and it’s due to this soft label. So, now that we have shown that it’s possible, work can begin to figure out all the applications.”
A paper detailing the new technique, “Less Than One-Shot Learning: Learning N Classes From M < N Samples,” authored by Waterloo’s Faculty of Mathematics researchers Sucholutsky and Schonlau, is under review at one of the major AI conferences.
