Generative artificial intelligence (GenAI) tools use predictive algorithms to produce “human quality” responses to users’ questions or commands. The rapid succession of new generations of these technologies, such as GPT-4o and Gemini 2.5 (at the time of writing), has led to concerns that students might use these tools in ways that compromise their learning. However, banning these tools without clearly communicating the rationale to students, or attempting to detect their use, is neither practical nor conducive to fostering a positive attitude toward learning. Instead, the most effective strategies are to work with students by discussing these tools in the context of (1) academic integrity and (2) AI literacy. These strategies can help instructors use GenAI to support, deepen, and extend student learning.
Academic Integrity
It’s important to have conversations with your students to help them understand that submitting work for assessment that was fully or partially created by another individual or by a third-party service (such as an AI platform), as if it were their own, is a serious violation of the University of Waterloo’s Student Discipline policy. Talking with your students about why this is a serious breach of academic integrity in your discipline is a worthwhile investment of course time (McCabe et al., 2012; Gottardello & Karabag, 2022). Indeed, research on academic integrity in higher education suggests that learners don’t generally come to us intending to cheat (Murdock & Anderman, 2006; Stone, 2023). Rather, what sways them toward or away from academic integrity are the microcultures they encounter when they get to college or university, and the conversations they have with their peers and instructors. The absence of an explicitly stated course policy on AI use can also leave students in uncertain situations: some may read the silence as an outright ban, while others may interpret it as permission for unrestricted use. A clearly written statement in your course outline is particularly important when GenAI policies are set by the individual instructor rather than at the institutional or faculty level.
To open conversations with students about the value of doing one’s own work or how one’s discipline frames research questions, consider the following approaches:
- Ask your students what is meant by responsible authorship, responsible conduct of research, and ethical uses of technologies such as spell check, grammar check, autocomplete, and AI-generated text, code, or images. After actively listening to your students, offer your perspectives based on your own experience and learning trajectory with AI. Then, share Canada’s Tri-Agency Responsible Conduct of Research framework with them and ask them to use that framework to appraise their collective assumptions about academic integrity.
- Share with your students statements regarding academic integrity from publishers such as Elsevier. Discuss what authorship is, what insight and analysis mean, and what “original work” means in your discipline’s context. Ask your students to explain what it feels like to learn as a result of their own efforts versus solely relying on GenAI tools.
AI Literacy
As course instructors and disciplinary experts, you are well equipped to critically assess the accuracy of GenAI-generated content within your field. Your students, by contrast, are still developing foundational knowledge and likely lack the expertise needed to evaluate such output reliably on their own. There are many aspects to AI literacy, but some key elements to discuss with students are:
- Not all models are created equal; different GenAI models are designed with distinct strengths. For example, Gemini tends to perform well in areas like coding and complex mathematical problem-solving, Perplexity is particularly useful in research contexts, and DALL-E is optimized for generating images from text prompts. Understanding each model's capabilities and limitations is key to selecting the right tool for the task.
- All GenAI tools can present incorrect information. Regardless of the model or platform, GenAI can produce responses that sound authoritative but are factually inaccurate or misleading. Anthropomorphism, the act of assigning human traits to non-human things, can be problematic because students may develop an inaccurate mental model that ascribes emotions and trustworthiness to AI (Tassis, 2025). Since GenAI tools are designed to simulate human interaction, you play an important role in encouraging students to think critically about AI. Emphasize to students how important it is to follow up on referenced material, corroborate information against other sources (such as lectures), and discuss ideas with others.
- GenAI models are trained on existing data, which can embed and amplify bias. Because these tools learn from datasets collected from the internet and other sources, they inevitably absorb the biases, assumptions, and dominant perspectives present in those materials (Ferrara, 2023; Zhai et al., 2024). This can result in outputs that reflect stereotypes, marginalize underrepresented voices, or reinforce the status quo. Additionally, AI-generated responses often lack diversity of perspective, resulting in a homogenization of ideas (Anderson et al., 2024; Zhai et al., 2024). This is particularly concerning in educational contexts, where exposure to multiple viewpoints and critical engagement with diverse sources are essential to deeper learning.
- While the long-term cognitive impacts of GenAI use are still being studied, concerns about cognitive offloading (relying on tools to perform mental tasks that would otherwise strengthen memory, reasoning, and problem-solving skills) are valid. As learners increasingly turn to GenAI for answers, explanations, and even idea generation, there's a growing risk of cognitive offloading (Zhai et al., 2024). AI can undoubtedly support learning when used thoughtfully, but overdependence may hinder the development of critical thinking, creativity, and independent inquiry over time, especially in younger users (Gerlich, 2025). Encouraging intentional and reflective use of AI is essential to mitigating these risks.
To open conversations with students about knowledge creation, consider the following:
- Ensure students know that content produced by GenAI tools is a remixing of pre-existing knowledge and therefore will not be original; moreover, it will replicate the biases of the content the tools draw from.
- Put an actual assignment prompt through a GenAI tool and then ask students to identify shortcomings in the resulting text, image, or code and how they would improve, extend, or deepen it. This activity can open a conversation about more sophisticated levels of learning than recall or description, such as disciplinary approaches to application, analysis, and creation.
Understanding the impact of GenAI within and across disciplines is essential. GenAI is not only reshaping how knowledge is produced and applied within individual fields but also making it easier to access and integrate insights from other disciplines. For example, a biology student can generate code to model ecological systems without formal programming training. As educators, we should continually adapt both what we teach and how we teach to stay aligned with the evolving nature of our disciplines.
At the same time, it is vital to recognize that not everything should be outsourced to AI. There remain core disciplinary foundations that students must internalize without relying on AI. As an expert in your field, it is your role to help students understand these distinctions: when and how AI can be a valuable aid, and where foundational knowledge and human judgment remain irreplaceable.
Note: The GenAI tools mentioned in this Teaching Tip are provided as examples and should not be interpreted as endorsements. At the University of Waterloo, Microsoft Copilot Chat is the only Information Systems & Technology (IST) recommended GenAI tool.
Resources
- Artificial Intelligence and ChatGPT (Academic Integrity)
References
- Anderson, B. R., Shah, J. H., & Kreminski, M. (2024). Homogenization effects of large language models on human creative ideation. In Proceedings of the 16th Conference on Creativity & Cognition.
- Donald, J. (2002). Learning to think: Disciplinary perspectives. Jossey-Bass.
- Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday, 28(11).
- Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1).
- Gottardello, D., & Karabag, S. F. (2022). Ideal and actual roles of university professors in academic integrity management: A comparative study. Studies in Higher Education, 47(3), 526–544.
- McCabe, D. L., Butterfield, K. D., & Treviño, L. K. (2012). Cheating in college: Why students do it and what educators can do about it. Johns Hopkins University Press.
- Murdock, T. B., & Anderman, E. M. (2006). Motivational perspectives on student cheating: Toward an integrated model of academic dishonesty. Educational Psychologist, 41(3), 129–145.
- Stone, A. (2023). Student perceptions of academic integrity: A qualitative study of understanding, consequences, and impact. Journal of Academic Ethics, 21, 357–375.
- Tassis, A. (2025, June 9). Unpacking anthropomorphism: How we humanize AI and what it means for education [Conference session]. Teaching with AI Conference, University of Guelph.
- Willison, J., et al. (2006, updated 2019). Research Skill Development framework. University of Adelaide.
- Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28.
This Creative Commons license lets others remix, tweak, and build upon our work non-commercially, as long as they credit us and indicate if changes were made. Use this citation format: Conversations with Students about Generative Artificial Intelligence (GenAI) Tools. Centre for Teaching Excellence, University of Waterloo.