News


TRuST Executive Committee Member Dr. Kari Weaver has been selected to co-lead the focus track program at the World Conference on Research Integrity 2026, which aims to develop an international reporting standard for artificial intelligence disclosure in research. The work takes a participatory approach, consulting relevant parties across the sector, including institutions, funders, publishers, governments, academies, regulators, libraries, the AI industry, and specialised research integrity and standardization bodies, so that each can contribute its perspective to the standard. Culminating in stakeholder feedback at WCRI in Vancouver, BC, in May 2026, this work seeks to harmonize AI disclosure as a practice and a cornerstone of honest, transparent AI use in research.

Link to conference site →

TRuST's Dr. Ashley Mehlenbacher attended the "Trust in Science for Policy Nexus" workshop, held in Ispra, Italy, on September 12-13, 2024. Convened by the European Commission's Joint Research Centre and the International Science Council, and co-sponsored by the US National Science Foundation, the workshop explored the intricate dynamics of trust in science as it relates to policymaking.

Read the publication →

TRuST’s Dr. Kari D. Weaver recently presented Transparent, Detailed, Ethical – An Introduction to the Artificial Intelligence Disclosure (AID) Framework. The AID Framework provides a transparent, consistent, and targeted approach to attributing the use of AI in teaching and research work. AI disclosure builds a culture of academic and research integrity, enhancing trust in AI-supported work across academia. The workshop addressed the current state of artificial intelligence disclosure and academic integrity in relation to AI use, introduced the elements of the AID Framework, provided example AI disclosure statements using the framework, and responded to participants' key concerns and questions.

Listen to the full webinar →

Generative artificial intelligence tools are becoming ubiquitous across personal, professional, and educational contexts. As with the rise of social media technologies, these tools are becoming an embedded part of people's lives, and individuals are using them for a variety of benign purposes. This article examines why existing information literacy understandings will not work for artificial intelligence literacy, and provides an example of artificial intelligence searching that demonstrates its shortcomings.

Read the full story →

TRuST's Dr. Kari D. Weaver, University of Waterloo Libraries, has published the Artificial Intelligence Disclosure (AID) Framework. The new tool recognizes the need for consistent, transparent disclosure of artificial intelligence use across learning and research contexts. The AID Framework addresses this gap by providing a structured and detailed approach to such disclosure.

Read the full story →