Conversations with Students about Generative Artificial Intelligence Tools

Generative artificial intelligence (AI) tools use predictive text algorithms to produce “human quality” responses to users’ questions or commands. The recent emergence of a new generation of these technologies, such as ChatGPT and DALL-E 2, has led to concerns that students might use these tools in ways that compromise academic integrity. However, banning these tools, or attempting to invent ways to detect their use, is neither practical nor conducive to fostering a positive attitude toward learning. Instead, the most effective strategy is to work with students by discussing these tools in the context of (1) academic integrity and (2) disciplinary ways of knowing. These strategies can help instructors use generative AI to support, deepen, and extend student learning.  

First, it’s important to have conversations with your students to help them understand that it’s a serious violation of the University of Waterloo’s Student Discipline policy specifically, and of academic honesty generally, for anyone to submit work for assessment that was fully or partially created by another individual or by a third-party service (such as an AI platform) as if it were their own. Talking with your students about why this is a serious breach in your discipline is a worthwhile investment of course time (McCabe et al., 2012; Gottardello and Karabag, 2022). Indeed, research on academic integrity in higher education suggests that learners don’t generally come to us intending to cheat. Rather, what sways them toward or away from academic integrity are the microcultures they encounter when they get to college or university, and the conversations they have with their peers and instructors.  

Second, it can also be helpful to make more explicit the assumptions, models, and paradigms in our disciplines. Background texts like Learning to Think: Disciplinary Perspectives (available in CTE’s Library in EC3) may be helpful in this regard, as would many methodology journals or textbooks in your specific field.  

To open conversations with students about the value of doing one’s own work, collaborating ethically and equitably, and how one’s discipline frames research questions or knowledge creation, consider the following approaches: 

  • Ask your students what is meant by responsible authorship, responsible conduct of research, ethical uses of technology like spell check, grammar check, autocomplete, and AI-generated text, code, or images. Then, share Canada’s Tri-Agency Responsible Conduct of Research framework with them and ask them to use that framework to appraise their collective assumptions about academic integrity.  

  • Share with your students statements regarding academic integrity from publishers such as Elsevier. Discuss what authorship is, what insight and analysis mean, and what “original work” means in your discipline’s context. Ask your students to explain what it feels like to learn as a result of their own efforts versus taking shortcuts.  

  • Ensure students know that content generated by AI tools is a remixing of pre-existing material and is therefore not original; moreover, it replicates the biases of the content that the AI tools draw from.  

  • Put an actual assignment prompt through a generative AI tool and then ask students to identify shortcomings in the resulting text, image, or code and how they would improve, extend, or deepen it. This activity can open a conversation about more sophisticated levels of learning than recall or description, such as disciplinary approaches to application, analysis, and creation.  

Some instructors may allow or even require students to use generative AI services as a stage in the development of an assignment. In such cases, instructors will need to clearly and explicitly define the parameters of what is permitted, required, or prohibited, especially since those parameters may differ from those set by other instructors. The University of Adelaide’s Research Skills Development Framework (Willison et al., 2019) may also prove helpful to instructors and TAs wishing to integrate generative AI into the research process.  

Overall, given that these tools are here to stay and will keep advancing, we recommend having frank conversations about the sometimes tacit aspects of our disciplines and academic integrity, rather than inadvertently creating (through detection and surveillance tools) an environment wherein learners are mistrusted from the outset.