Responsible AI Use

'Provisional' Principles for Responsible AI Use

These principles guide how we use generative AI tools in our learning, research, and administrative work. They are not rigid rules, but shared commitments that help us use AI responsibly, creatively, and in ways that reflect the University of Waterloo’s unique mission.

Each principle is grounded in the University of Waterloo's core values, including Act with Purpose, Work Together, and Think Differently. As we explore new opportunities with AI, these values help us stay aligned with what matters most: people, integrity, and innovation.

Purpose first

We use AI deliberately to enhance learning, research, and campus life, not simply because it’s available.

People in control

AI supports human judgment. We remain accountable for outcomes.

Empowering our community with shared learning

We equip students, staff, and faculty members with the skills to engage with AI confidently and critically.

Courage to experiment

We explore, learn from mistakes, and iterate quickly, sparking fresh ideas and new solutions.

Responsible, ethical, and inclusive use

We proactively consider the responsible and ethical use of AI: we weigh energy use and climate impacts, protect privacy, respect data rights, comply with laws and policies, and actively check for bias, working to make AI safe and fair.