Arts and AI: Keeping it human

Elon Musk warns about its dangers. Bill Gates says there’s no need to panic. Even the late Stephen Hawking was unsure about it, saying that artificial intelligence (AI) could be the biggest event in the history of our civilization — or the worst.

No doubt AI is making us ask some tough questions. Can machines make ethical decisions? Do machines think like us, or know the limits of their intelligence? Will artificial intelligence steal our jobs?

These issues are being tackled right here in Arts. To understand some of the challenges we might be facing, we reached out to our professors and alumni whose work contributes to AI.

Diversity is key for unbiased AI

Carla Fehr, associate professor, Wolfe Chair in Scientific and Technological Literacy, Department of Philosophy

Exciting AI systems are making their way into new parts of our lives every day. They have the potential to make our lives better and improve human welfare. On the face of it, AI systems seem to be unbiased, objective, and efficient tools, because machines aren’t driven by human desires or motivated by human vices.

However, Joy Buolamwini’s research on facial recognition systems made a splash when she found that although these systems did a great job recognizing men’s and white people’s faces, they failed to recognize the faces of one in three women of colour and were barely more accurate than a coin toss when it came to identifying dark-skinned women. Buolamwini found race and sex biases in AI.

The existence of these very human biases in AI systems is dangerous. AI is being used not only in facial recognition, but also in policing, credit evaluations, and medical decision-making — all areas in which racial and gender biases are extremely harmful.

We need to take active steps to ensure that the AI we create is as ethical as we intend it to be. The challenge is that humans are bad at recognizing our own biases, and we can’t evaluate or protect against something that we don’t know exists. One reason why the facial recognition systems were better at recognizing white men’s faces is that they were likely trained using images in which white men were over-represented. It is not surprising that it was a woman of colour who noticed and investigated this problem.
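
To make this concrete, here is a minimal sketch in Python of the kind of disaggregated evaluation that surfaces such blind spots: instead of reporting a single overall accuracy, the model’s performance is computed separately for each demographic group. The function name, group labels, and toy records below are illustrative assumptions, not Buolamwini’s actual data or methodology.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Accuracy computed separately for each demographic group.

    `records` is a list of (group, predicted_label, true_label) tuples.
    A single overall accuracy can hide large gaps between groups,
    so we report one number per group instead.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy, made-up evaluation records for a face-analysis model.
records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),    # misclassified
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "male", "female"),    # misclassified
]

for group, accuracy in sorted(accuracy_by_group(records).items()):
    print(f"{group}: {accuracy:.0%} accurate")
```

Run on real evaluation data, a breakdown like this is what turns “the system works well on average” into “the system fails one in three women of colour.”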

A good way to recognize these kinds of biases in our own thinking and in the products we develop is to work with people who have different points of view and who are likely to have different biases than we do. Presumably no one intended to make a racist AI system; diverse research and development teams can help us live up to those good intentions and produce more effective, more ethical AI. AI is a tool made by people to do things for people. And it turns out that it matters who those people are.


Managing the risk of machine learning

Chris DeBrusk (BASc ’94, BA ’95), partner at Oliver Wyman

In the last year, machine learning has taken the world by storm. While the mathematical concepts behind this form of artificial intelligence have been understood for decades, the emergence of cheap, massive computing power via the public cloud and the availability of large, comprehensive data sets mean that nearly anyone can now train and deploy machine learning models. While the opportunity this presents is vast, it has also introduced new business and societal risks that will need to be managed.

“The risk of using models that have a bias built into them is very real, and there are numerous examples of it already happening to the detriment of the people involved.”

From prison sentencing models that make inaccurate and racist predictions about reoffending rates, to chatbots that start to communicate using inappropriate language and concepts, the potential ways in which machine learning can result in negative outcomes are numerous.

Avoiding these types of outcomes requires a two-pronged approach. First, companies leveraging this technology need to adopt a comprehensive approach to internal governance and a three-lines-of-defense model for managing the risk. Just because it is easy to train and deploy a model doesn’t mean the control framework around it shouldn’t be robust. Second, the government regulators who oversee these companies need to incorporate an understanding of machine learning risk into their approach.
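
As one illustration of what a first-line control inside such a governance framework might look like, the sketch below applies a simple pre-deployment screen: if any group’s rate of favourable model decisions falls too far below the best-treated group’s rate, the model is flagged for review rather than released. The function names, the toy data, and the 80 per cent threshold (a rough screen borrowed from employment-discrimination practice) are assumptions for the example, not a prescribed standard.

```python
def approval_rate_by_group(decisions):
    """Share of favourable model decisions per group.

    `decisions` is a list of (group, approved) pairs, where `approved`
    is True when the model grants the favourable outcome.
    """
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {group: approved[group] / totals[group] for group in totals}

def passes_fairness_screen(decisions, threshold=0.8):
    """Flag the model if any group's approval rate falls below `threshold`
    times the best-treated group's rate. The 0.8 default mirrors the rough
    'four-fifths' screen used in employment-discrimination practice; the
    right threshold for a given model is a policy choice, not a given.
    """
    rates = approval_rate_by_group(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Hypothetical test decisions from a credit model under review.
test_decisions = ([("group A", True)] * 80 + [("group A", False)] * 20 +
                  [("group B", True)] * 55 + [("group B", False)] * 45)

if not passes_fairness_screen(test_decisions):
    print("Model fails the fairness screen; escalate before deployment.")
```

A check like this is only one control among many (documentation, independent challenge by a second line, ongoing monitoring), but it shows how a “robust control framework” can translate into something concrete and testable.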

The skills necessary to manage this new technology will combine the core mathematics and engineering talent required to understand how it works with the social sciences perspective needed to see how results could differ from expectations, to the detriment of customers, employees and society in general. The promise of machine learning is vast, but it will be important that we also manage the potential downside implications of the technology.


How do we prepare for the future of work?

Joel Blit, assistant professor, Department of Economics

I was recently asked by a journalist writing for the Canadian School Counsellor Magazine what jobs will still exist in twenty years. School counsellors have the difficult job of helping steer our youth towards promising careers, not knowing how those careers are likely to be affected by the coming AI and robotics revolution. They are not alone in trying to divine this future of work. Policymakers, too, are trying to understand how labour markets will unfold in order to begin developing appropriate policy and institutional responses. In the past several months, I was asked to speak about automation and the future of work at the G7, OECD, and IMF. It seems that everyone is preoccupied with this same question.

No one can tell the future; however, we as economists can try to understand how the labour market works, how technology has affected it in the past, and how the coming technological wave is likely to affect it in the future.

Economics tells us that these technologies are likely to manifest themselves in the job market in two principal ways. First, they could lead to unemployment, at least in the short to medium run. Some upper-end estimates suggest that 47 per cent of all existing U.S. jobs could be automated in the next ten to twenty years. Autonomous vehicles alone could credibly destroy 2-3 per cent of existing U.S. jobs.

My larger concern, however, pertains to the second likely impact: economics tells us that these technologies are likely to benefit some workers while hurting others.

That is, much as the computer increased inequality through a process known as skill-biased technical change, AI and robotics have the potential to change the way that incomes are distributed across society. This could further increase inequality, perhaps to the extent of posing a challenge to our democracy. The losers will be those whose skills are in direct competition with AI and robotics. The winners will be those whose skills are complementary to these technologies. Such complementary skills include judgment, intuition, critical thinking, creativity, communication, leadership, empathy, computer skills, data science skills, and entrepreneurial skills.

So what should we be telling our kids? The following quote, which has been attributed to Hal Varian, Chief Economist at Google, nicely encapsulates the lessons from economics: “Seek to be an indispensable complement to something that’s getting cheap and plentiful.” Data, and the ability to distill insights from it using machine learning, is getting cheap and plentiful.

A further lesson from history and economics is that technological progress cannot be contained. Nor should it be: with the right governance, all can benefit from the coming technological revolution. Our challenge, then, is to ensure that the appropriate policies and institutions are in place.

Want to learn more about this topic? Read a recent policy paper about AI and the future of work by Professor Blit and co-authors. 


What keeps lawyers up at night?

Michael O'Brien (MA ’10), Associate in the litigation group at McCarthy Tétrault LLP

Legal issues arise with innovative technologies when they intersect with an existing area of law.

As AI is integrated into businesses and workforces, expect to see intersections in the areas of privacy, contract, intellectual property, employment, competition and tort law. These will arise whether your company is ensuring regulatory compliance, or interacting with businesses, consumers, and/or the public at large.

Companies should keep apprised of legal and regulatory developments that will impact how they build, train and deploy innovations. An example of such a development is the Privacy Commissioner's May 2018 guidelines for obtaining meaningful consent for the collection, use and disclosure of personal information.

Contract law also presents significant considerations for companies using AI. Whether contracting for the acquisition of an AI start-up, or entering into a service or licensing agreement involving AI, certain provisions may require special consideration, such as warranty, insurance or indemnity, intellectual property protection, or data-use provisions, to name a few.

With respect to workforce integration, potential issues range from defining and managing employee relationships, to ensuring that human-interfacing AI systems are properly tested for built-in bias.

At McCarthy Tétrault, we are continually exploring the implications of AI on Canadian businesses. For those interested in learning more, I recommend our 2017 white paper, “From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business”.

Some of the most interesting questions in the field are appearing on the horizon. Who owns an AI-created innovation?

“What are the implications of collusion between AI systems? How should we manage allegations of misuse or malfunction of an AI technology?”

Our challenge, as lawyers, is to help our clients predict and create the frameworks of the future. But that doesn’t keep us up at night. Like everyone else, we are kept up by the robot monsters under the bed.


Why Siri and Alexa can’t follow an argument

Randy Harris, professor, Department of English Language and Literature

Artificial Intelligence was strongly motivated in the early years not just to replicate intelligent human behaviour, but to understand intelligent human behaviour. AI wanted to build machines that modelled human minds. That’s what made AI so foundational in the cognitive revolution. And language, a linchpin of intelligence, was key.

But almost every AI application that uses language, even those beloved pseudo-humans, Siri and Alexa, completely ignores this goal. One other thing about these applications: following a simple argument, even at the level of a three-year-old, is utterly beyond them.

They answer questions. They follow orders. They can’t engage in an argument, perhaps the most fundamental and ubiquitous social use of language.

We are working on both these deficiencies at the same time. We model crucial units of natural language according to principles of cognition. And we use these models to help machines understand arguments. These units, distilled from two and a half millennia of humanities research, are rhetorical figures. Some of them are quite famous, like metaphor. Some, not so much, like antimetabole.

Antimetabole shows up in expressions like “all for one and one for all” (Alexandre Dumas) and “Ask not what your country can do for you; ask what you can do for your country” (John F. Kennedy) and “I meant what I said and I said what I meant” (Dr. Seuss). Antimetabole is neurocognitively sticky. It activates neurocognitive dispositions, like repetition, contrast, and symmetry. Our minds are tuned to such dispositions. That’s why such expressions are so catchy.

Humans have no trouble understanding that Kennedy’s entire inaugural argument is summarized in his antimetabole, or that the three musketeers (and d’Artagnan too) operate by the code of reciprocal obligation crystallized in their antimetabole, one that explains significant plot developments and epitomizes the book’s theme of heroic duty.

Humans have no trouble understanding the meaning of these language units, and we do so not because of statistical correlation, the way Siri or Alexa would respond to them, if they could respond to them, but because the inverse repetition of key words, across languages, embodies concepts of opposition and reciprocality. And antimetabole just scratches the surface. Our database has over four hundred distinct (but neurocognitively interrelated) rhetorical figures.
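
To give a flavour of how a machine could be taught to notice this kind of form rather than rely on statistical correlation, here is a toy Python sketch that flags a possible antimetabole by looking for two content words that appear in one clause and recur in reverse order in a later clause (A … B / B … A). It is an illustrative heuristic only, far simpler than the modelling described above; the stop-word list and function names are assumptions made for the example.

```python
import re

# A tiny, assumed stop-word list; a real system would be far more careful.
STOP_WORDS = {"the", "a", "an", "and", "for", "of", "to", "in", "what",
              "not", "i", "your", "can", "do", "ask", "that", "is", "it"}

def content_words(clause):
    """Lowercased words in order of appearance, minus the stop words above."""
    return [w for w in re.findall(r"[a-z']+", clause.lower())
            if w not in STOP_WORDS]

def find_antimetabole(text):
    """Return a (word1, word2) pair if two content words occur in one
    clause and recur in reverse order in a later clause: A ... B / B ... A.
    Returns None when no such inversion is found.
    """
    clauses = [content_words(c) for c in re.split(r"[;:,.]|\band\b", text)]
    clauses = [c for c in clauses if c]
    for i, first in enumerate(clauses):
        for second in clauses[i + 1:]:
            for pos, a in enumerate(first):
                for b in first[pos + 1:]:
                    if (a != b and a in second and b in second
                            and second.index(b) < second.index(a)):
                        return (a, b)
    return None

print(find_antimetabole("All for one and one for all"))
print(find_antimetabole("Ask not what your country can do for you; "
                        "ask what you can do for your country"))
```

On Kennedy’s sentence the sketch returns the reversed pair (‘country’, ‘you’); a real system has to go much further, mapping that formal inversion onto the concepts of opposition and reciprocality it embodies.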


It's not just about what technology can do, but what it should do

Cheri Chevalier (BA ’95), Worldwide Sales Lead, Marketing Solutions at Microsoft

The proliferation of AI in our society, and Canada’s ability to harness the power of it, will require not only technically skilled talent but also talent with soft skills like active listening, communication and critical thinking. Organizations need talent that can help them think creatively about where and how to leverage AI to drive impact.

They need people who can understand human interaction and design, to partner with those who code, build and implement the technology.

“Arts majors are particularly well positioned to bring this balancing perspective to the table, thereby rounding out the discussion and furthering the positive impact that this technology can have.”

While AI has the potential to solve some of society’s biggest challenges, it also comes with a risk of bias that is far-reaching and needs to be understood. As machines assess and make ‘decisions’ about everything from the suitability of job applicants to whether or not individuals are high risk for insurance or medical care, fairness and diversity need to be consciously applied. It is imperative that AI systems be inclusive, and this needs to be understood and examined as part of the overall process. In broader terms, it’s not just about what technology can do, but what it should do.

At Microsoft, we’re excited about the opportunities that AI brings to people and its ability to help us achieve more. We take pride in building an ethical foundation and working on collaborative research projects that address the need for transparency, accountability and fairness in AI and machine learning systems.


Top banner: Photo by Selina Vesely (GBDA '16) shows detail from artifact made by Bernie Rohde and Charlena Russell, on display as part of the 2017 Critical Media Lab exhibition =SUM(Things), featuring media and data-based projects and installations by Master of Experimental Digital Media (XDM) students, faculty, and community members.