Research Spotlight:

Q&A with Kate Larson about cooperative AI

Kate Larson is a professor in the Cheriton School of Computer Science at the University of Waterloo. She is affiliated with the AI group and currently holds a University Research Chair and the Pasupalak AI Fellowship.

In May, Larson and her international colleagues published a commentary in Nature about the need for cooperative artificial intelligence — beneficial AI with social understanding.

AI assistants and recommendation algorithms interact with billions of people every day, yet they have little understanding of humans. Professor Larson and her colleagues argue that AI needs social understanding and cooperative intelligence to integrate beneficially into society.

We connected with Larson over Zoom to ask her a few questions about her fascinating research and also to get her perspective on the challenges confronting women in computer science.

You use the term “methodological individualism” to describe the current paradigm of AI. What does that mean? And why has it become the dominant model?

KL: When we talk about AI, we are given the picture of an agent who interacts with their environment, sensing it and making decisions. The assumption is often that the environment is non-social. There's no notion that there are other entities in the world that the agent might have to interact with.

This individualistic model goes right back to the 1950s, when AI first began to appear. It made sense at the time. These AI systems have to learn skills, communicate, make decisions—all difficult tasks. So it was natural to break the problem down to the simplest situation: an individual learning something and not worrying about other entities.

But what we're arguing in this paper is that, now that we have many of these basic skills in place, let's begin thinking about where these systems are going to be situated. They are not always going to be sitting in isolation, only recognizing faces or processing language. Instead, there has to be interaction, so let's talk about the importance of these interactions and how they will change the way that we think about AI systems.

Can you give an example of an AI application where the individualist model fails and cooperative AI is necessary?

KL: This is going to be a little bit more speculative, but we can think about autonomous vehicles. The dream is that, at some point, we're going to have these autonomous vehicles on our roads. There has already been a lot of really fascinating work on object detection, on being able to figure out where the other cars are located. But even so, this is often taken from an individualistic perspective.

Let’s take a step back and think about what we are actually going to need in our systems if we do have fleets of autonomous vehicles. While they're going to have to be able to recognize pedestrians and cyclists and other people on the road, they also have to coordinate with them. When we drive, we're coordinating with other vehicles, cyclists, and pedestrians. We have to be aware that the road is being shared. So autonomous vehicles are going to have to understand what it means to be part of this greater group and how to communicate in meaningful ways.

They also need some form of commitment, even in something as simple as parking. You pull up, and you're blocking traffic because you're waiting for a parking space to free up. How would an autonomous vehicle indicate that others can pass it, but not take the parking spot that the vehicle is waiting for? There has to be some form of understanding and communication and commitment there, which our systems currently don't have.

This is an inherently social problem that these AI systems are going to need to negotiate. And we're not yet even asking the question of how to achieve it. It's going to require different sorts of algorithms, different techniques, and different objectives. We're arguing that now is the time to shift the discussion in AI.

How do you define cooperation and cooperative AI?

KL: Well, that's actually one of the challenges: what is cooperation? What we're looking at is AI in situations where there are benefits to coordinating between different entities, but where there might be different incentives in place, so that cooperating is not necessarily the most natural thing to do.

We are also interested in the question of whether there are uses of AI that can better support human cooperation. A translation system, for example, could be viewed as a tool that supports human cooperation, and we view it as falling under this broad umbrella of cooperative AI. So the umbrella covers both systems that help humans cooperate and systems in which AI components cooperate with each other or with humans.

What are the challenges that you and others in the field face in developing cooperative AI?

KL: Shifting the discussion to focus on these cooperative scenarios will be a multi-disciplinary project. When you think of who studies cooperation, it's not just the computer scientists. There's also a lot of insight from biology, psychology, economics, sociology, and law. So the first challenge is itself one of cooperation: how do you bring these distinct fields together to begin thinking about a particular problem? Secondly, cooperation is a vague term, so how can we make it specific enough that progress can be made? Finally, in AI we have a long history of developing smaller games that we can use to test ideas and benchmark progress. Figuring out what those challenge problems should look like, and making them rich enough that they drive real progress, is going to be very difficult. Figuring out how to gauge that progress will be just as challenging.

Has the lack of diversity in STEM had a negative impact on AI research?

KL: I think this is true across all of computer science and AI. The lack of diversity in AI has been a huge problem from its beginning, and in the last few years we are really seeing the results. Example after example shows that the data sets being used to train these systems are not representative, and nobody was in the room to ask questions about the quality of the data. Systems are being deployed that will not work on certain subsets of the population, and no one from those demographics was in the room to actually question the system. So it's really important that we have broad representation, particularly in AI, as we are seeing these applications have a significant impact on people's lives.

What has been your own experience of being a woman in computer science, a field in which women are underrepresented?

KL: When I started to do computer science in graduate school, I don't think I realized how few women there were going to be in the field. It was a bit of a shock. It's always hard to be a minority in any group. You feel that you stand out, and not necessarily in a good way. Always feeling that you have to justify that you belong is tiring. I think I've been quite lucky to be surrounded by really supportive mentors and advisors over the years. I’ve had both male and female mentors and role models who were very good at pushing me forward when I was thinking “oh no, I shouldn't apply for that position, or I shouldn't attempt that.” That was very helpful to me in overcoming imposter syndrome. But I am also aware that there are certain opportunities that I or other women in the field might not have known about because we just weren't part of the right network.

What would you say to young women considering a career in STEM?

KL: That you’re absolutely capable of doing this. And that there is an opportunity to work on problems that are really interesting intellectually and also societally impactful. So there’s a really nice balance, which I find very appealing myself, and I hope others will find it appealing too.