Do you remember the excitement of looking through hundreds of records in a record store and finding the one hidden gem? Your favourite band or something new that you can’t wait to unwrap and listen to? Now we have Spotify. It’s cheap, convenient and provides an almost unlimited amount of music any time you have your phone on you. Which is obviously always. 

Isn’t that better? Instant access to anything you want to listen to. It even provides suggestions for music you might be interested in based on your preferences. But it may not be as simple as that. Systems like Spotify are built on machine learning models, and can therefore “have undesirable consequences on particular groups of people and society in general.” The GES Program team “Explaining Bias with Music” aimed to “inform users about the bias in the systems they’re using” and “inspire machine learning practitioners to be aware of the bias in the systems they create.”
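To make the idea concrete, here is a deliberately simplified sketch, not the team’s actual system: the artist names and play counts are invented. It shows how a recommender that learns only from past play counts can quietly amplify whatever is already popular, which is one way bias creeps in without anyone intending it.

```python
import random
from collections import Counter

# Toy listening history: most plays already go to a couple of "mainstream" artists.
# These names and numbers are made up purely for illustration.
history = (
    ["mainstream_pop"] * 70
    + ["mainstream_rock"] * 20
    + ["indie_folk"] * 7
    + ["local_jazz"] * 3
)

def recommend(history, n=10):
    """Recommend by sampling artists in proportion to past play counts (popularity bias)."""
    counts = Counter(history)
    artists = list(counts)
    weights = [counts[a] for a in artists]
    return random.choices(artists, weights=weights, k=n)

random.seed(0)
new_plays = recommend(history, n=100)
print(Counter(new_plays))
# The already-popular artists dominate the new recommendations, so niche artists
# get even less exposure over time: a feedback loop, not a deliberate choice.
```

No one wrote a rule saying “ignore jazz,” yet the system still pushes listeners toward what is already dominant. That is the kind of quiet, structural bias the team set out to explain.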

Explaining Bias with Music Team

Visitors to the GES Program 2019 Summit on the “Socio-Cultural and Political Implications of Artificial Intelligence” were impressed with how this team showed how easily bias can creep into AI, even when not initially recognized.

“There are all kinds of representation problems and subtle but insidious biases that machines learn over time. The bias and music student project that tried to help people recognize different types of bias using music is a very good step to fixing one of the problems of AI: recognition. The consequences of allowing AI to produce biased results can be far reaching and life altering, and often people are unaware of the biased results that AI is capable of producing.

Technology, AI and the digital are now a deeply embedded part of most lives. The problem I see is that many people do not stop to think about the power of these tools, the power they in return are giving them, and the consequences of wielding tools that we, as humans, do not think to understand. As well as the problem of representation within the tech industry itself, which is producing these tools.” (“The Dichotomies of AI: Thoughts from the Global Engagement Summit on the Socio-Cultural and Political Implications of Artificial Intelligence”, Carleigh Cartmell, PhD Candidate, Balsillie School of International Affairs)

Another excellent interactive exhibition at this year’s GES Program Summit was “Mood: An Exploration of Emotion Detection Algorithms.” This team explored the implications of machine learning through facial recognition. 

Project Mood Poster

“From scans at airport security to targeted advertising at department stores, facial recognition software is slowly creeping into our day-to-day lives. Project Mood is an exploration into the perception accuracy of current facial analysis algorithms where we analyze and question the factors that are in play during emotional classification.”

This project provided “a peek into these factors and their effects in order to demystify and inform about the inner operations of targeted machine learning.” It highlighted the role biases play in this process and their ramifications.
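As a rough illustration of the kind of question Project Mood asks, and emphatically not the team’s actual data or code, here is a small sketch of how one might compare an emotion classifier’s accuracy across demographic groups. The group names, labels and numbers below are invented.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_emotion, predicted_emotion).
# All values here are made up purely for illustration.
results = [
    ("group_a", "happy", "happy"), ("group_a", "sad", "sad"),
    ("group_a", "angry", "angry"), ("group_a", "happy", "happy"),
    ("group_b", "happy", "neutral"), ("group_b", "sad", "angry"),
    ("group_b", "angry", "angry"), ("group_b", "happy", "happy"),
]

def accuracy_by_group(records):
    """Return the fraction of correct emotion predictions for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, true_label, predicted in records:
        total[group] += 1
        if predicted == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

print(accuracy_by_group(results))
# e.g. {'group_a': 1.0, 'group_b': 0.5} -- a gap like this is exactly what
# audits of commercial face and emotion analysis systems look for.
```

If a system reads emotions reliably for one group of faces but misreads them for another, every decision built on top of it, from security screening to targeted advertising, inherits that gap.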

There are far-reaching social and political implications to facial recognition. The more we as a global society know about the inner workings of this new technology, the better we will be able to make informed decisions about its use. Many cities around the world are currently debating the use of facial recognition on citizens, from New York apartment complexes and public spaces in London to racial profiling in Beijing.

How do you feel about this data collection? How will the inherent biases in this technology affect you personally and all of us on a global scale?  

To find out more about biases in machine learning, watch the videos of the excellent presentations by the teams “Explaining Bias with Music” and “Mood: An Exploration of Emotion Detection Algorithms.”