Artificial Intelligence with human understanding
To deliver on promised benefits, AI must be built to co-operate
By Melanie Scott, University Relations

Whether it's a digital assistant like Siri, predictive text, social media or a vehicle with autonomous functions, many of us rely on some form of Artificial Intelligence (AI) daily.
Within the next few years, AI systems will become even more complex in their interactions with people and other AI agents. We’ll see evidence of this in fields such as consumer and financial markets, law, cybersecurity and health care. Kate Larson, a professor in the David R. Cheriton School of Computer Science, and her colleagues have argued that if AI does not engage well with humans, it could fail to deliver benefits. “There are ongoing issues of ethics, regulation and privacy,” she says. “If we don’t design AI systems people can trust, there will be a backlash.”
Larson conducts research on multi-agent systems (AI systems that interact with one another) and recently collaborated on a research paper discussing the need for co-operative AI.
Co-operative AI refers to the ability of AI agents to work alongside or with other AI agents or people. It can also refer to AI that is used to better support teamwork and collaboration.
Although it is used by billions of people daily, AI has limited understanding when it comes to interacting with humans and other agents. Early multi-agent research focused on AI that learns to beat an opponent — for example in two-player zero-sum games such as chess or Go. “This is a start, but it is not how the world works,” Larson says. “We need to look at how to change the questions and approaches we are taking when designing AI, so that we have a stronger focus on co-operative intelligence.”
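To make that contrast concrete, here is a minimal sketch (not from Larson's work; the games and payoffs are standard textbook values) comparing a two-player zero-sum game with a co-ordination game. In Matching Pennies, every outcome sums to zero, so one agent can only gain what the other loses. In the Stag Hunt, both agents do best when they co-operate, but co-operating is risky if the other agent defects.

```python
# Illustrative sketch: why a zero-sum framing leaves nothing to co-operate over.
# Payoffs are classic textbook values, not anything from Larson's research.

# Matching Pennies: strictly zero-sum -- one agent's gain is the other's loss.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# Stag Hunt: a co-ordination game -- mutual co-operation is the best joint
# outcome, but a lone co-operator is left with nothing.
stag_hunt = {
    ("stag", "stag"): (4, 4),   # mutual co-operation: best joint outcome
    ("stag", "hare"): (0, 3),   # the co-operator is let down
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),   # safe, but worse than co-operating
}

def total_welfare(game):
    """Sum both agents' payoffs for each joint action."""
    return {actions: sum(payoffs) for actions, payoffs in game.items()}

print(total_welfare(matching_pennies))  # every outcome sums to 0
print(total_welfare(stag_hunt))         # co-operation creates joint value
```

Running the sketch shows the difference an agent faces during learning: in the zero-sum game, total welfare is fixed at zero no matter what the agents do, while in the co-ordination game the joint payoff depends entirely on whether the agents manage to co-operate.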
Already, there is a need for AI that has social understanding and can interact with others. Autonomous vehicles, for example, do not drive in isolation: they share the road with pedestrians, cyclists and other vehicles, and they have human operators. They are trained through simulations that show them how to behave safely around people.
“We’re seeing some unintended consequences of AI that doesn’t have social understanding,” Larson says, referencing social media algorithms that have contributed to the spread of misinformation. “Algorithms like these are meant to improve user engagement, but this may not be the metric we want to focus on. If we began AI design by thinking about co-operation and social factors, we might be able to better think through potential ramifications, rather than trying to reverse or fix negative societal or human impact later.”
Larson, who is a University Research Chair and was also appointed a Pasupalak AI Fellow, has always been interested in group dynamics and in organizing groups of people to complete projects or take meaningful action. She is currently collaborating with other researchers on a climate change project that uses AI to predict wildfires and to support efforts to reduce and control them.
She also supervises students who build and study teams of AI agents, investigating how agents can learn to co-operate within a team, as well as students who apply ideas from machine learning to design better ways of voting.
Waterloo has a multitude of AI researchers, groups and initiatives, including the Waterloo Artificial Intelligence Institute and the Artificial Intelligence Group. Larson is a member of both groups and believes there are many opportunities available on campus for further exploration of co-operative AI and interdisciplinary collaboration on the topic.
“There has been a paradigm shift in AI in the last decade,” Larson says. “AI has moved from research and labs and has become integrated into many facets of our society.”
Larson and her colleagues believe that co-operative AI is unlikely to emerge as a by-product of existing AI research and needs its own field of study. They warn that progress toward socially valuable AI will be stunted unless co-operation is at the centre of the research. To build AI systems that have social understanding and can truly benefit humanity, researchers will themselves need to co-operate across a variety of disciplines and fields.
“If we are going to make positive advances, we will have to take an interdisciplinary approach,” Larson says. “We’ll need to work closely with those in other fields, such as psychology, philosophy, law and policy, history and sociology. It’s essential to work alongside researchers who study and understand co-operation.”