Please note: This PhD seminar will take place in DC 2310.
Dahlia Shehata, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Charles Clarke
Conversational, prompt-engineering-based large language models (LLMs) have enabled targeted control over output generation, enhancing versatility, adaptability and ad hoc retrieval. At the same time, digital misinformation has reached new levels: the anonymity, availability and reach of social media offer fertile ground for rumours to propagate.
This work proposes to leverage advances in prompting-based LLMs to combat misinformation by extending the research efforts of the RumourEval task on its Twitter dataset. To this end, we employ two prompting-based LLM variants (GPT-3.5-turbo and GPT-4) to extend the two RumourEval subtasks: (1) veracity prediction, and (2) stance classification. For veracity prediction, three classification schemes are evaluated per GPT variant, each tested in zero-, one- and few-shot settings. Our best results outperform previous ones by a substantial margin. For stance classification, prompting-based approaches show performance comparable to prior results, with no improvement over fine-tuning methods. The stance subtask is also extended beyond its original setting to allow multiclass classification. All generated predictions for both subtasks are accompanied by confidence scores indicating their degree of trustworthiness according to the LLM, as well as post-hoc justifications for explainability and interpretability. Our primary aim is AI for social good.
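As a rough illustration of the setup described above (not the actual prompts or code from this work), a zero-shot veracity-prediction query might ask the model to return a label, a confidence score, and a post-hoc justification in a structured form. The prompt wording and JSON schema below are assumptions for the sketch:

```python
import json

def build_veracity_prompt(rumour: str) -> str:
    # Hypothetical zero-shot prompt; the wording used in the work is not given.
    return (
        "Classify the veracity of the following rumour as one of "
        "'true', 'false', or 'unverified'. Respond in JSON with keys "
        "'label', 'confidence' (a number between 0 and 1), and "
        "'justification'.\n\n"
        f"Rumour: {rumour}"
    )

def parse_response(raw: str) -> dict:
    # Parse the model's JSON reply and sanity-check its fields.
    out = json.loads(raw)
    assert out["label"] in {"true", "false", "unverified"}
    assert 0.0 <= out["confidence"] <= 1.0
    return out

# A mocked model reply in the assumed schema, standing in for an LLM call.
reply = ('{"label": "unverified", "confidence": 0.62, '
         '"justification": "No corroborating sources were cited."}')
result = parse_response(reply)
```

In a one- or few-shot variant, labelled example rumours would simply be prepended to the prompt before the target rumour.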
Keywords: Misinformation in Social Networks, Large Language Models, Prompt Engineering, Explainable AI, Generative AI, AI Applications for Social Good