Truth and Lies in AI

We all know about “fake news.” Most of us think we can tell the difference. After all, we keep informed, or at least try to. But what are our sources of information? How reliable are they? If the internet is involved, you already know that a high dose of skepticism is necessary. So how can we know for sure that we’re getting the truth? I suppose we can’t, but we can make an effort to check sources of information and utilize critical thinking, especially when it comes to important topics such as government policy.

Do you know all of our government policies on Artificial Intelligence? Of course not. Global policies on AI? Even less. Yet this is something that has an increasing impact on our lives and society. GES Program student team “ArtIfact: an overview of AI policy” discovered that there is a knowledge gap between technology creators and policy makers and even more so a gap between policy makers and the general public. The impacts of AI should no longer be underestimated. We need to be better informed going forward.

To that end, the team created the game "ArtIfact: the game of truth, lies and the intelligence to tell the difference."

At this interactive exhibit during the 2019 GES Program Summit, participants chose a card describing an AI policy topic. They were then presented with three additional cards: one stating the true policy on the issue and two containing fabrications. More than 50% of players could pick out the truth when it came to hot topics, such as autonomous vehicles and drones. But only 17% identified the truth on more obscure topics, such as data privacy.

These results highlighted four areas of focus for team "ArtIfact": awareness of government policies, confusion about policies, motivation to look into issues, and involvement in advocating for important causes.

How can we work on being better informed? One answer would be to take a step back from social media, at least according to Jaron Lanier, Virtual Reality pioneer and prominent figure in the tech community. At his recent lecture for the GES Program on the "Socio-Cultural and Political Implications of Artificial Intelligence," Lanier discussed how social media is not only feeding us false information, but collecting our data for its own use. Basically, we've become the guinea pigs, the product, the pawns to be manipulated for gain. One prominent example is the recent controversy about Russian influence on the US election through Facebook.

Besides the obvious privacy and data ownership concerns, what really happens to the information we share? 

What happens is that any information you post online can and will be used to profile and categorize you. That includes targeted advertising from companies, as well as government analysis that predicts your views on policies and which way you're likely to vote.

Another excellent GES Program student team designed an interactive exhibit, "Bias in the Black Box," to show exactly that. The exhibit asked visitors seemingly random questions, such as "Did you wear a uniform to school?" and "Where do you shop for groceries?" to create a profile and predict the voting behaviour of the person. It turns out it's pretty straightforward to categorize people and fairly accurately predict their behaviour. This becomes problematic when used with AI. Facebook, for example, may target information to your news feed, reinforcing your views, or even influencing your behaviour.
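The mechanics behind this kind of profiling can be surprisingly simple. The toy sketch below is purely illustrative: the questions, weights, and candidate labels are invented for this example and do not reflect the exhibit's actual model. It shows how a few seemingly unrelated answers can be combined into a scored prediction:

```python
# Toy profiling sketch. All questions, weights, and labels below are
# invented for illustration; they are not the exhibit's real model.

def predict_leaning(answers):
    """Score survey answers against hand-picked weights and return a label."""
    # Hypothetical weights: positive scores push the prediction toward
    # "Candidate A", negative scores toward "Candidate B".
    weights = {
        ("wore_school_uniform", True): 1.0,
        ("wore_school_uniform", False): -0.5,
        ("grocery_store", "farmers market"): -1.5,
        ("grocery_store", "big-box chain"): 1.0,
    }
    # Sum the weight for each (question, answer) pair; unknown answers score 0.
    score = sum(weights.get((q, a), 0.0) for q, a in answers.items())
    return "Candidate A" if score > 0 else "Candidate B"

profile = {"wore_school_uniform": True, "grocery_store": "big-box chain"}
print(predict_leaning(profile))  # score 2.0 -> "Candidate A"
```

A real system would learn such weights from millions of profiles rather than hand-pick them, which is exactly why innocuous-seeming data becomes so revealing at scale.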

To learn more about truth and lies in AI, watch the videos of the excellent student presentations as well as our 2019 Summit Roundtable discussion, featuring four experts in the field, who, among other topics, discussed privacy policies and data ownership.