This section applies to units or individuals who are building, integrating, or deploying AI-enabled systems—not to general users of AI tools.
When developing or deploying an AI-enabled system (a system that uses AI as part of a larger application or process) with University data, specific data governance and security requirements apply.
Key Requirements
- All AI-enabled system projects must undergo an Information Risk Assessment (IRA).
- A Privacy Impact Assessment (PIA) may also be required, depending on the nature of the data being collected or processed.
- Approval from the appropriate Information Steward is required before using any University data—even if that data is publicly available.
- AI-enabled systems must operate within a secure, University-approved hosting environment.
- Data classification under Policy 46 must be completed before any content is incorporated into or processed by the system.
Why this matters
- Embedding University data into an AI-enabled system increases the likelihood of reuse or broader exposure, which elevates the risk of unintended disclosure.
- Publicly available data may still carry copyright, licensing, retention, or compliance obligations that must be respected.
- The University must ensure that all chatbot and other AI platforms meet established security, privacy, and compliance standards.
The University of Waterloo may block or restrict access to tools that present unacceptable risks to institutional systems or data.