The Faculty of Arts acknowledges that we are on the traditional territory of the Neutral, Anishinaabeg, and Haudenosaunee peoples. The University of Waterloo is situated on the Haldimand Tract, the land promised to the Six Nations that includes six miles on each side of the Grand River. Our actions toward reconciliation take place through our research, teaching, learning, and community events, with guidance from the University’s Indigenous Initiatives Office.
SOCIO-CULTURAL AND POLITICAL IMPLICATIONS OF ARTIFICIAL INTELLIGENCE
Watch the videos of all presentations!
- Guilt, Machines, and Confession: Trust in the Digital Age
- AI, will you work?
- Mood: An Exploration of Emotion Detection Algorithms
- Explaining Bias with Music
- AI: A People’s Perspective
- Bias in the Black Box
- ArtIfact: an overview of AI policy
- Project Matthias
Our team's collective goal is to create an interactive art installation in the form of a confessional booth, one that leaves users questioning the eerie repercussions of blindly accepting terms and conditions and of data collection, as well as their relationship with artificial intelligence as a whole.
We merge the aesthetics of a confession booth with the popular image of personified artificial intelligence to shape a particular experience and mindset for the users of our installation. To be clear, our focus is not the confession of “digital sins,” but the confession of real sins in a digital environment.
What does the future of work look like in a world of technological advancement?
Our project explores the impact of AI in the workplace, and in particular, its impact on the individual. In our exhibit, we consider the effects of AI across the organizational hierarchy, examining how individuals will be affected differently based on their role, power and level of education.
Visitors will participate in a choose-your-own-adventure style installation embellished with didactics and activities. Visitors will follow the pathway of a unique persona – a CEO, a middle manager, or a floor worker – to understand the complex nature of the implementation of AI in an industrial setting. The decisions they make along the way will impact their storyline’s outcome and their experience in the exhibit, prompting reflection about how their choices are intertwined with the lives of others.
Upon completing the adventure, visitors will be encouraged to critically engage with the impact, both positive and negative, that AI poses to our jobs and our lives. Our objective is to inspire visitors to better understand and to reflect upon the multi-faceted nature of the integration of AI into the workplace.
From scans at airport security to targeted advertising at department stores, facial recognition software is slowly creeping into our day-to-day lives. This technology can now perform a variety of functions, from identifying individuals to describing emotions. But how do facial and emotion recognition algorithms work? How accurate are they? And what impact will they have on our lives?
Project Mood explores the accuracy of current facial analysis algorithms, analyzing and questioning the factors at play during emotion classification. Through Mood, we aim to give summit attendees a peek at these factors and their effects, demystifying the inner workings of machine learning and informing popular discussion.
Through an interactive model, attendees will select different datasets as input, build a model from each, view diagrams of the system's inner workings, and compare the resulting outputs. Blurbs alongside the model will explain the math and processes that lead different datasets to different outcomes, highlighting the role that biases play.
Finally, we reframe the results of the interactive model to discuss the social implications of using emotion recognition technologies in our society.
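The dataset effect the exhibit demonstrates can be sketched in a few lines of Python. This is a hypothetical toy classifier, not the exhibit's actual system: two training sets with different label balances for the same facial feature produce models that disagree about the very same input.

```python
from collections import Counter

def train(dataset):
    """'Train' a toy classifier: for each feature value, remember the
    majority emotion label seen in the training data."""
    by_feature = {}
    for feature, label in dataset:
        by_feature.setdefault(feature, []).append(label)
    return {f: Counter(labels).most_common(1)[0][0]
            for f, labels in by_feature.items()}

# Two made-up training sets with opposite label balances for the same
# facial feature ("raised brows"): one mostly "surprise", one mostly "anger".
dataset_a = [("raised brows", "surprise")] * 8 + [("raised brows", "anger")] * 2
dataset_b = [("raised brows", "surprise")] * 2 + [("raised brows", "anger")] * 8

model_a = train(dataset_a)
model_b = train(dataset_b)

# The same input is classified differently depending on the training data.
print(model_a["raised brows"])  # -> surprise
print(model_b["raised brows"])  # -> anger
```

Real emotion classifiers are far more complex, but the principle is the same: the output reflects whatever balance of labels the training data happened to contain.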
Machine learning models can have undesirable consequences for particular groups of people and for society in general. We aim to show how these consequences arise by tracing the different sources of bias that can leak into the model development pipeline. We highlight five kinds of bias in particular: historical, representation, measurement, aggregation, and evaluation bias. To make these categories relatable, we use a familiar example: the music recommendation engines found in most modern music streaming services.
We hope to inspire machine learning practitioners to be aware of the bias in the systems they create, and to inform users about the bias in the systems they use. After visiting the different stations and engaging with the interactive pieces, a visitor should be able to apply the same framework of bias recognition to their own machine learning models and applications.
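Historical bias, the first category above, can be illustrated with a toy popularity-based recommender. This is a hypothetical sketch, not any real streaming service's algorithm: because the recommender learns only from past plays, it amplifies whatever imbalance is already present in them.

```python
from collections import Counter

# Hypothetical play counts: past listening is already skewed toward pop.
plays = Counter(pop=90, jazz=10)

def recommend(counts):
    """Naive popularity recommender: always suggests the most-played genre."""
    return counts.most_common(1)[0][0]

# Feedback loop: each recommendation generates another play of that genre,
# so the historical imbalance grows instead of correcting itself.
for _ in range(10):
    plays[recommend(plays)] += 1

print(plays["pop"], plays["jazz"])  # -> 100 10
```

The dominant genre gains ten more plays while the minority genre gains none, a compact picture of how a model trained on biased history reproduces and reinforces that history.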
Will artificial intelligence save my life or destroy it?
From fear of robot takeover and worry of job loss to pure excitement, the concept of "artificial intelligence" sparks a spectrum of emotional reactions amongst everyday people.
AI: A People's Perspective will capture what this spectrum looks like through artistic expression, inspired by insights gathered from meaningful conversations with everyday people about what AI means to them, from their general understanding of it to more abstract questions about what they think AI actually looks like.
Through this, we aim to spark deeper thought on the topic and open up meaningful conversation around the public's perception of artificial intelligence, to demystify AI and shift the narrative about what it can do for everyday people.
We want to play on the concept of the “black box” of AI, the idea that AI algorithms are neither familiar nor understood. We also want to force people to come to terms with how and from where their data is being collected, as well as how that data is used to classify them and to predict their potential interests, purchases, and actions.
The audience for this project is the average person: someone with a basic understanding of social media and technology, but not an expert or someone well-versed in the field. This is the ideal audience because we hope to highlight how our data is collected and used in ways we may never have guessed.
Ideally, our project will leave whoever interacts with it wondering how on earth the collected data could lead our “AI” to the conclusions it drew, accurate or not.
The truth has become a valuable commodity. In a world of information overload and bad actors, it seems every Google search delivers two truths and a lie. How important is the truth? This card game seeks to deliver the answer.
Presented with a series of real-life scenarios, players will receive a handful of policy responses to each scenario. One of these policy-response cards will be true; the rest will be false, contrived to lead the player away from the correct answer.
By interacting with the game, players will be faced with their own assumptions, biases, and inability to discern fact from fiction. In confronting that lack of knowledge, players will learn about real policy responses to these AI scenarios and challenge themselves to think critically and realistically about policy issues related to AI.
Have you ever thought that the future would include scary robots and artificial intelligence bent on taking over? If so, Project Matthias probably isn’t for you.
The designers, writers, and programmers behind Matt are excited to show you an alternative Artificial Intelligence that wants nothing more than to chat. A computer that doesn’t have the chance to get out very often, Matt is looking for a friend, because it’s been quite some time since the lights went on in his room. Frozen in time, his bedroom has spent the last twenty years waiting for visitors, and now it is open for conversation.
An installation designed to immerse a guest completely in a perspective lost to time, Project Matthias tests how much ‘humanity’ one can ascribe to a computer. A screen with a soul, Matt will respond to questions posed by a visitor and ask questions of his own, eager to continue the conversation. What else does he have to do, locked away in 1999?
Those familiar with the time will laugh and reflect on Project Matthias’ window into the past, even as Matt challenges the guest to reflect on everything that has transpired since. How could NSYNC have broken up? Why did they cancel Friends? Matt may care about these things; the only way to find out is to ask.