Researchers in artificial intelligence have developed an innovative way to identify a range of anti-social behaviour online. The new technique, developed in research led by Alex Parmentier, a master’s student at Waterloo’s David R. Cheriton School of Computer Science, detects anti-social behaviour by examining the reactions of other members of an online forum to a post, rather than features of the original post itself.
The spectrum of anti-social behaviour is broad, and current artificial intelligence models are unable to fully understand what is being posted, Alex explained.
“Often, when attempting to detect unwanted behaviour on social media, the text of a user’s comment is examined. Certain forms of anti-social speech — for example, bullying via sarcastic mockery — are difficult for AI systems to detect, but they’re not missed by human members of the online community. If you look at their reactions, you may have a better chance of detecting hate speech, profanity and online bullying.”
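To make the idea concrete, here is a minimal sketch of a reaction-based classifier. It assumes, hypothetically, that each training example pairs an original post with the text of the replies it received and a label indicating whether the post was judged anti-social; the toy data, feature choice (TF-IDF over reply text) and classifier (logistic regression) are illustrative assumptions, not the team's actual pipeline.

```python
# A minimal sketch of reaction-based detection: classify the ORIGINAL post
# using only the language of the community's replies to it.
# Data, features and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: each "document" is the concatenated text of the replies to a post,
# not the post itself; labels mark whether the original post was anti-social.
reply_threads = [
    "wow, that's really uncalled for. reported.",   # reactions to an abusive post
    "stop harassing them, this isn't okay",
    "great point, thanks for sharing!",             # reactions to a benign post
    "interesting read, learned something new",
]
labels = [1, 1, 0, 0]  # 1 = original post judged anti-social, 0 = benign

# Fit a simple text classifier on the reaction text alone.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reply_threads, labels)

# Predict for a new post based solely on how the community reacted to it.
new_reactions = ["please ban this user, that comment was cruel"]
print(model.predict(new_reactions))  # -> [1] if reactions resemble flagged threads
```

The point of the sketch is the shift in input: the model never sees the potentially sarcastic or coded original comment, only the community's response to it, which is often more explicit about whether harm occurred.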