Safeguards for working with AI tools

  1. Secure Accounts and Authentication: Use dedicated accounts for University work. Do not reuse your WatIAM password in external tools. Enable multi-factor authentication (MFA) and use strong, unique passwords. A password manager is recommended.
  2. Classify Before You Paste: Know the data's classification (Public, Confidential, Restricted, or Highly Restricted) under Policy 46 before entering it into any AI tool.
  3. Use University Data Only in Approved Tools: University data may be used only in tools that have been reviewed and approved.
  4. Confirm Requirements for Public Data: Publicly available information (including content on uwaterloo.ca) may still have copyright, licensing, or FOI considerations. Confirm data classification, obtain Information Steward approval, and complete an IRA before processing it in AI tools.
  5. Avoid Unofficial or Hidden Use of AI Tools: Do not use unreviewed or unapproved AI tools with University data; such tools may store, reuse, or expose information without safeguards. Experiment only with personal or non-University test content.
  6. Share Only What’s Needed: Include just the essential information. Remove names, identifiers, or any details that could reveal someone’s identity. When examples are required, use de-identified or made-up scenarios instead of real ones.
  7. Check Privacy Settings: Review and modify privacy controls before using any AI tool. Limit data sharing, history retention, and model training. Understand what the tool stores and discloses by default.
  8. Delete Unnecessary Data: Clear chat histories or stored content regularly and avoid keeping information outside University systems longer than necessary.
  9. Consider Self-Hosting for Greater Control: When advanced functionality is needed, self-hosted AI tools may offer more control. Ensure any self-hosted solution meets University security standards and undergoes IST review.
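As a practical aid for step 6, a simple pre-paste redaction pass can catch the most common identifiers before text is entered into an AI tool. The sketch below is illustrative only, not an approved or exhaustive de-identification method: the patterns (email, an assumed 8-digit ID format, North American phone numbers) are assumptions, and personal names still require manual review.

```python
import re

# Illustrative patterns only -- these formats are assumptions, not an
# official University redaction standard. Names are NOT caught by regex
# and must still be reviewed by hand.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "id_number": re.compile(r"\b\d{8}\b"),               # e.g. an 8-digit student ID
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

sample = "Contact Jane at jane.doe@uwaterloo.ca (ID 20451234, 519-555-0123)."
print(redact(sample))
# -> Contact Jane at [EMAIL REMOVED] (ID [ID_NUMBER REMOVED], [PHONE REMOVED]).
```

Running the sample through `redact` strips the email, ID, and phone number while leaving the rest of the sentence readable, so the de-identified text can be used as a fictitious example in place of the real record.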