Psychology researchers led an international study proposing approaches both to train large language models in wise reasoning and to measure the wisdom of AI.
The research is the first to suggest realistic ways to integrate wisdom into artificial intelligence, to create AI systems that will be more robust, transparent, cooperative and safe.
Researchers from Waterloo Psychology led the team, which includes experts in computer science and engineering. Their paper proposes ways to train large language models to be wiser, explores new architectures that could support wise reasoning, and suggests benchmarks to measure AI wisdom.
The timing of the work is critical because, as AI capabilities race ahead, wisdom isn’t keeping pace, raising safety and reliability concerns.
“Artificial intelligence is getting smarter every day, but one important human skill it lacks is wisdom,” said Dr. Sam Johnson, professor of psychology at Waterloo and co-lead author of the study. “Wisdom isn’t just about knowledge or intelligence. It’s about the mental skills needed to handle life’s challenges, such as making difficult decisions or navigating unpredictable social situations.”
Whereas current AI systems excel at well-defined tasks, they struggle when problems are messy or unclear, because, according to the researchers, they lack the full toolkit of strategies that humans use to navigate uncertainty. The new approach focuses on teaching AI metacognition — thinking about its own thinking — which includes recognizing the limits of its knowledge, adjusting to different contexts, weighing multiple viewpoints, and staying flexible about how situations might unfold.
Read the full media release in Waterloo News.
If the smartest person in the world were a toddler, we still wouldn’t hand them the nuclear codes. AI increasingly resembles a child genius — one that still needs a healthy dose of wisdom from its human parents.