Measuring AI's ability to learn is difficult

Organizations looking to benefit from the artificial intelligence (AI) revolution should be cautious about putting all their eggs in one basket, a study from the University of Waterloo has found.

The researchers found that there is no general, precise method for deciding whether a given problem can be successfully solved by machine learning tools.

“We have to proceed with caution,” said Shai Ben-David, lead author of the study and a professor in Waterloo’s School of Computer Science. “There is a big trend of tools that are very successful, but nobody understands why they are successful, and nobody can provide guarantees that they will continue to be successful.
 
“In situations where just a yes or no answer is required, we know exactly what can or cannot be done by machine learning algorithms. However, when it comes to more general setups, we can’t distinguish learnable from un-learnable tasks.”

Ben-David and his colleagues considered a learning model called estimating the maximum (EMX), which captures many common machine learning tasks. Using this model, they showed that for some tasks, no mathematical method can ever determine whether an AI-based tool could handle that specific task.
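As a rough sketch of what EMX asks (based on the published formulation of the model; the notation here is illustrative, not from this article): a learner sees $m$ samples drawn from an unknown distribution $P$ over a domain $X$, and must select, from a fixed family $\mathcal{F}$ of subsets of $X$, a set whose probability mass is nearly maximal. An algorithm $A$ successfully EMX-learns $\mathcal{F}$ if, for small $\varepsilon, \delta > 0$,

```latex
\Pr_{S \sim P^{m}}\left[\, P\big(A(S)\big) \;\ge\; \sup_{F \in \mathcal{F}} P(F) \;-\; \varepsilon \,\right] \;\ge\; 1 - \delta
```

In the published version of this work, the authors show that for a natural choice of $\mathcal{F}$ (the finite subsets of the unit interval), whether such a learner exists cannot be settled by the standard axioms of mathematics, which is the sense in which learnability is undecidable.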

“This finding comes as a surprise to the research community, since it has long been believed that once a precise description of a task is provided, it can be determined whether machine learning algorithms will be able to learn and carry out that task,” said Ben-David.