Grounding AI in humanity

Dr. Lai-Tze Fan on stage presenting

Designing AI that supports us, not steers us.

AI is transforming healthcare, work and daily life, but people ultimately shape AI. We decide how it supports us, and we can push it to be a force for good. By uncovering hidden bias, Waterloo can make AI fairer, more trustworthy and built to improve everyday life, helping people and amplifying our potential.

5 steps to check AI fairness

AI can reflect human bias, so testing is key.

1. Audit data for missing voices.
2. Use fairness metrics to measure outcomes.
3. Run stress tests to catch hidden flaws.
4. Involve humans to add context.
5. Analyze results across groups.

These steps help design AI that’s more accurate, fair and inclusive. (Gender Shades; IBM AI Fairness 360; Microsoft Responsible AI; FATE)
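Steps 2 and 5 above — measuring outcomes with a fairness metric and comparing results across groups — can be sketched in a few lines of code. This is a minimal, illustrative example (the group names and decisions are hypothetical, not from the article) using demographic parity, one common fairness metric; toolkits such as IBM AI Fairness 360 offer many more.

```python
# Minimal sketch: checking demographic parity across groups.
# All names and data below are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of favourable (positive) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rate between any two groups.

    A gap near 0 suggests the model produces favourable outcomes
    at similar rates for every group; a large gap flags a disparity
    worth investigating.
    """
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = favourable, 0 = unfavourable)
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

A single metric is never the whole story: demographic parity ignores whether the underlying outcomes differ between groups, which is why the checklist also calls for stress tests and human review.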

I like to think of AI as something potentially inspiring – as a way forward not just for technology but also for how we work, for social interaction and for creativity…

Dr. Lai-Tze Fan

Dr. Lai-Tze Fan is the Canada Research Chair in Technology and Social Change in the Faculty of Arts. She studies how artificial intelligence can perpetuate bias and develops methods to promote fairness, equity and inclusion in everyday technologies.  

Explore more ways Waterloo is on it.

Catching concussions fast →

Learn how a Waterloo startup is catching concussions faster with a new saliva test that gives results right on the sidelines.

Detecting cancer with AI →

Explore how this Waterloo researcher is developing AI that enables smarter, faster decisions during surgery.

Boost your knowledge. Subscribe to rich content. 

Quick knowledge, big insights! Subscribe to the Innovation Insider for health, climate, tech, physics and human behaviour innovations in under two minutes.
