AI-augmented tools create a perceived loss of “human touch” in research writing
The use of Generative AI (GenAI) in research writing is increasing rapidly. In their paper, The Great AI Witch Hunt: Reviewers’ Perception and (Mis)Conception of Generative AI in Research Writing, researchers at the Stratford School studied the impact of AI-augmented writing on peer review, a formal part of academic research validation. Examining writing samples from top-tier Human-Computer Interaction (HCI) conferences, the researchers found that AI-augmented writing improved readability, language diversity, and informativeness, but often lacked research details and the authors’ reflective insights.
AI-augmented writing tools support tasks ranging from text-improvement suggestions and speech-to-text transcription to crafting initial drafts, facilitating brainstorming, and suggesting new research questions. However, their use has raised concerns about transparency, academic integrity, and the credibility of research work.
At its core, the paper argues that the quality of the research itself should remain the priority in reviews, and that researchers must maintain their authorship and control over the writing process, even when using GenAI tools.
The authors suggest that “responsible and transparent use of GenAI can enhance research presentation quality without negatively impacting reviewers’ perceptions.” They also advocate for reviewer guidelines that promote impartial evaluation of submissions, regardless of any personal biases towards GenAI.
“The Great AI Witch Hunt” offers further insight into how Artificial Intelligence continues to affect many areas, including how academic papers are evaluated.
The paper was authored by Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, and Lennart Nacke.
Read the full paper, The Great AI Witch Hunt: Reviewers’ Perception and (Mis)Conception of Generative AI in Research Writing.