Cheriton researchers find that large language models validate misinformation
New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation.
In a recent study, researchers at the Cheriton School of Computer Science systematically tested an early version of ChatGPT’s understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction. The study was part of the researchers’ broader effort to investigate human-technology interactions and explore ways to mitigate the risks such models pose.