Master’s student Alex Parmentier develops novel method to detect anti-social behaviour online

Friday, January 3, 2020

Researchers in artificial intelligence have developed an innovative way to identify a range of anti-social behaviour online. The new technique, led by Alex Parmentier, a master’s student at Waterloo’s David R. Cheriton School of Computer Science, detects anti-social behaviour by examining the reaction to a post among members of an online forum rather than examining features of the original post itself.

The spectrum of anti-social behaviour is broad, and current artificial intelligence models are unable to form a full understanding of what is being posted, Alex explained.

photo of Alex Parmentier

Alex Parmentier is pursuing a Master of Mathematics in Computer Science. His research is focused primarily on applications of AI and machine learning towards the understanding and direction of the flow of information online. In particular, he is developing models that leverage trust relationships between users to create better content recommendations and detect malicious content and misinformation.

“Often, when attempting to detect unwanted behaviour on social media, the text of a user’s comment is examined. Certain forms of anti-social speech — for example, bullying via sarcastic mockery — are difficult for AI systems to detect, but they’re not missed by human members of the online community. If you look at their reactions, you may have a better chance of detecting hate speech, profanity and online bullying.” 

Actions and comments that are disruptive to others can provide strong clues as to what is deemed anti-social. In fact, some researchers define anti-social behaviour precisely as behaviour that is disruptive to a community. Using this approach, an online community’s reaction to comments could be a key factor in understanding and detecting what is anti-social. 

To develop the method, Alex first downloaded more than six million comments from Reddit, a popular social news-sharing site that has an active culture of commenting and discussion. Reddit comments are threaded — organized in a tree-like format — so it’s easy to distinguish an original comment from those posted in response to it. 

In this study, a parent comment is any comment that has been replied to directly, a child comment is a direct reply to another comment, and the descendants of a parent are all the comments in the subtree rooted at that parent.

figure depicting parent, child and descendant comments

This illustration shows an example of a parent comment, its children and their descendants.
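
The study itself doesn’t publish code, but a minimal Python sketch shows how such a comment tree could be indexed and traversed. It assumes each comment record carries 'id' and 'parent_id' fields, as in public Reddit comment dumps; the field and helper names are assumptions, not details from the study.

```python
from collections import defaultdict

def build_reply_index(comments):
    """Group comments by the id of the comment they reply to.

    Each comment is assumed to be a dict with 'id' and 'parent_id' keys,
    as in public Reddit comment dumps (an assumption, not a study detail).
    """
    children = defaultdict(list)
    for comment in comments:
        children[comment["parent_id"]].append(comment)
    return children

def descendants(children, comment_id):
    """Collect every comment in the subtree rooted at comment_id."""
    stack = [comment_id]
    subtree = []
    while stack:
        current = stack.pop()
        for child in children.get(current, []):
            subtree.append(child)
            stack.append(child["id"])
    return subtree
```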

Reddit has multiple “subreddits,” discussion forums within Reddit that cater to specific interests. The data set analyzed for this study contained only comments posted during January 2016 on the following subreddits: /r/politics, for political discussion; /r/movies, where films are discussed; /r/worldnews, which hosts discussion of current events; /r/AskReddit, where people pose questions to the Reddit community; and /r/IAmA, where members can ask a particular person anything. 
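
Restricting a dump of comment records to those five subreddits and to January 2016 might look like the following sketch; the 'subreddit' and 'created_utc' field names are assumed from public Reddit dumps rather than taken from the study.

```python
from datetime import datetime, timezone

SUBREDDITS = {"politics", "movies", "worldnews", "AskReddit", "IAmA"}

def in_study_window(comment):
    """True for comments posted on the five subreddits during January 2016."""
    posted = datetime.fromtimestamp(int(comment["created_utc"]), tz=timezone.utc)
    return (comment["subreddit"] in SUBREDDITS
            and posted.year == 2016 and posted.month == 1)

def filter_study_comments(comments):
    """Keep only comments matching the study's subreddits and time window."""
    return [c for c in comments if in_study_window(c)]
```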

Alex trained models on words and phrases extracted from responses to comments within those subreddits to predict whether a parent comment that prompted reactions in its child comments and other descendants was likely to contain hate speech, profanity and negative sentiment. 
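
The article doesn’t specify the exact features or classifiers used. The sketch below illustrates the general idea with TF-IDF features and logistic regression as stand-ins, predicting a parent comment’s label from the concatenated text of its descendants; the 'labels' input, produced by some existing text-based detector run on the parent comments, is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_reaction_classifier(children, parent_ids, labels):
    """Fit a model that predicts a parent comment's label from its replies.

    'children' and 'descendants' come from the tree sketch above.
    'labels[pid]' is 1 if an existing text-based detector flagged the parent
    (e.g. for hate speech), 0 otherwise. TF-IDF over unigrams and bigrams
    plus logistic regression are illustrative stand-ins, not the study's
    actual features or classifiers.
    """
    texts = [" ".join(c["body"] for c in descendants(children, pid))
             for pid in parent_ids]
    targets = [labels[pid] for pid in parent_ids]
    X_train, X_test, y_train, y_test = train_test_split(
        texts, targets, test_size=0.2, random_state=0)
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```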

“Prediction accuracy ranged from a high of 79% to a low of 58% — so it was a bit of a mixed bag,” Alex said. “Often, we found it was difficult to improve these scores by tinkering with classifiers or parameters, perhaps indicating that our feature collection procedures or the amount of data collected was insufficient to support certain prediction tasks. That said, we found that the best-faring tasks involved predicting the presence of hate speech in a parent comment.”

While these results show that establishing highly reliable classifiers is still somewhat elusive, they also indicate that this direction of AI research has much promise. 

“Our model can be used to identify users whose comments provoke reactions from an online community that suggest other users are being harmed, and either give those comments less exposure or flag them so other users are aware,” Alex said. “By learning how a community reacts to currently detectable anti-social behaviour, we can begin to learn what a typical reaction looks like. That, in turn, allows the detection of more complex anti-social behaviour, such as sarcasm and unfamiliar references in someone’s comment, based on the reactions of others.”
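
A hypothetical way such a classifier could support moderation, again a sketch rather than the study’s method: score each parent comment by how strongly its replies resemble reactions to abuse, then down-rank or surface the highest-scoring ones. The threshold below is illustrative, not a value from the study.

```python
def flag_suspect_parents(model, children, parents, threshold=0.8):
    """Surface parent comments whose replies resemble reactions to abuse.

    'model' is the pipeline from the training sketch above; 'threshold'
    is an illustrative cut-off, not a value reported in the study.
    """
    flagged = []
    for parent in parents:
        reply_text = " ".join(c["body"] for c in descendants(children, parent["id"]))
        if not reply_text:
            continue  # no community reaction to learn from
        probability = model.predict_proba([reply_text])[0][1]
        if probability >= threshold:
            flagged.append((parent["id"], parent.get("author"), probability))
    # Highest-probability comments first, e.g. for a moderation queue.
    return sorted(flagged, key=lambda item: item[2], reverse=True)
```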
