The rapid and ongoing evolution of GenAI has posed challenges to many unsupervised assessments in fully online, blended, and in-person courses. This tip sheet provides strategies to consider when reviewing and redesigning unsupervised assessments. These strategies aim to help mitigate unauthorized use of GenAI; other resources linked at the end of this sheet address ideas for integrating GenAI into assessments. We recognize that AI can complete nearly any unsupervised assessment we set: students need to be motivated to do the work themselves. Underlying all of the strategies shared is the goal of creating assessments that students value and that they recognize will help them learn.
General recommendations
- Consider how GenAI use would, or would not, affect students’ achievement of the intended learning outcomes, and prioritize redesigning the assessments most at risk. That said, some unsupervised assessments that students could complete successfully with GenAI may still be the best way to achieve the intended learning outcomes and should be left as is.
- Frame unsupervised assessments as opportunities for practice, learning, and preparation for supervised (proctored) assessments used to verify learning.
- Be explicit and transparent about your expectations regarding GenAI use in your course via the course outline, course materials, assessment instructions, and during classes. Explain your rationale for GenAI use (or not) in assessments, keeping in mind that students are likely to encounter varying GenAI guidelines across their courses. Ensure students have opportunities to ask questions and seek clarifications.
For all classes
- Prioritize redesigning assignments that AI is particularly good at completing successfully, such as fact-based recall quizzes and online discussion posts:
- To test facts, ask questions that require students to demonstrate comprehension or application of key terminology in a specific context.
- For online discussions, avoid prompts that read like short-answer exam questions that can be easily outsourced to GenAI. Instead, use the online space to engage students in activities such as online polls and debates, deep reading and social annotation, sharing current news stories related to course content, critical analysis of processes used in problem solving, and other activities that enhance engagement and learning. Consider using educational technologies (EdTech) to promote discussion (see the Educational Technology Hub).
- Use assessments that are more AI-resistant and that offer the opportunity to verify learning through multi-stage assignments (i.e., assignments that students contribute to throughout the term), such as:
- Learning journals or portfolios that capture the learning process and in-class discussions.
- Two-stage assessments (e.g., stage 1: students submit a written draft; stage 2: a short conversation [oral exam] based on the stage 1 deliverable).
- Assignments where students are graded not on their initial score but on their reflection on where they went wrong and how they would correct their errors.
- Scaffolded assessments where the final product is broken down into several submissions throughout the term, each building on the previous one (e.g., essay drafts, multi-step projects, capstone projects and presentations).
- Scaffolded in-person or video assessments, such as brief (e.g., 3-minute) asynchronous poster presentations or mini presentations where students explain how they solved a problem or demonstrate how they would teach a concept to a novice learner.
- Design authentic assessments that focus on real world problems and other high impact practices (see High Impact Practices (HIPs) or Engaged Learning Practices).
- Create assignments and corresponding rubrics (e.g., the VALUE Rubrics) that prioritize human-centered skills such as critical thinking, judgement, and self-reflection:
- Define what these skills entail, and work with students to co-create a shared understanding of why they need to develop them for future success in the discipline and in work settings.
- Create assignments and rubrics that value connection to course material and class discussions, and personal experiences with concepts.
- Unless integral to the course’s learning outcomes, consider lowering the weight of criteria such as grammar and spelling since AI tools are embedded in many programs.
- Consider the feasibility of implementing synchronous, supervised assessments (i.e., how much class time is used for assessments, class size, format of instruction, availability of TAs, classroom set-up, etc.) for at least a portion of the final grade. Consider including must-pass, in-person supervised individual assessments (e.g., exam, assignment), and use formative learning activities and assessments that prepare students for those assessments. This approach is being used in many undergraduate courses.
Large classes
The following recommendations aim to help when redesigning unsupervised assessments in large classes:
- Use questions in auto-graded quizzes that commonly used AI models tend to answer incorrectly or that cannot be answered without further direction by the student.
- For some assessments, consider using alternate grading models such as specification grading, with simpler standards-based grading systems and opportunities to resubmit.
- Leverage group work and peer feedback or peer instruction where appropriate.
Small classes
The following recommendations aim to help when redesigning unsupervised assessments in small lecture-style and seminar-style courses:
- Replace some written deliverables in multi-stage projects with a video-recorded presentation, mock workshop, demonstration, or other non-written deliverable.
- Consider designing oral exams if resources and support allow. Note that while oral exams tend to be labour-intensive to conduct, the use of rubrics can help reduce time spent on grading, since grading can be completed by the end of the oral exam.
- Add synchronous graded group learning activities (other than LEARN discussions) that promote social learning and engagement (see Group Work in the Classroom: Small Group Tasks).
Support
If you would like support applying these tips to your own teaching, staff members from the integrated teaching support unit (formerly the Centre for Extended Learning and the Centre for Teaching Excellence) are here to help. For further support, view the AVPA’s GenAI support page to find the most relevant staff member to contact.
Resources
- Integrating Generative Artificial Intelligence (GenAI) in Assessments
- Conversations with Students about Generative Artificial Intelligence (GenAI) Tools
- Guide to Assessment in the Generative AI Era
- Group Work in the Classroom: Small Group Tasks
- UW Course Outline Suggestions for Generative Artificial Intelligence
- Rubrics: Useful Assessment Tool
This Creative Commons license lets others remix, tweak, and build upon our work non-commercially, as long as they credit us and indicate if changes were made. Use this citation format: Redesigning Unsupervised Assessments in the GenAI Era. integrated teaching support unit, University of Waterloo.