PhD student Liam Hebert awarded Vanier Canada Graduate Scholarship

Thursday, June 6, 2024

Liam Hebert, a PhD candidate at the Cheriton School of Computer Science, has been awarded a prestigious Vanier Canada Graduate Scholarship. Co-advised by Professors Robin Cohen at the Cheriton School of Computer Science and Lukasz Golab in the Department of Management Science and Engineering (cross-appointed to Computer Science), Liam is one of seven doctoral students at Waterloo to receive this honour.

Valued at $150,000 over three years, Vanier Scholarships recognize students who have not only demonstrated exceptional academic excellence and research potential, but also leadership skills. The scholarship is further augmented by Waterloo through the President’s Graduate Scholarship, which adds $10,000 a year over the duration of the Vanier Scholarship.

Liam received the scholarship in part for his innovative research that applies a multimodal machine-learning approach to detect hate speech on social media platforms.

“Social media has transformed how we communicate and has created large communities online where people exchange ideas,” Liam says. “While this unprecedented scale of discourse has brought many benefits, it has also led to a rise in abusive behaviours and hate speech, affecting mental health, inciting violence and sowing division within society.”

Current artificial intelligence systems for classifying online text as hate speech often analyze individual comments in isolation. On social media platforms, however, text is often accompanied by images and other media, and the meaning of comments can change when interpreted within an entire conversation and an online community’s culture. Failing to consider these aspects can have serious consequences. For example, certain words historically considered demeaning or abusive have been reclaimed by marginalized communities with a new, benign meaning. Context is critically important to understanding hate speech. Without it, current methods can mistakenly flag words and images as hate speech, further perpetuating marginalization.

“My research aims to develop a community-centric, discussion-oriented way to detect hate speech,” Liam explains. “Central to my solution is a modification of graph transformers to interpret complex relationships in discussions, in essence capturing their context. Together with deep learning natural language models — fine-tuned to understand online vocabulary and integrated with computer vision models — this multifaceted approach will allow a holistic interpretation of multimedia discussions and community behaviours.” 

The heavy focus on context in his research drew inspiration from the relationship between atoms and molecules in chemical reactions, where no reaction happens in isolation.

“In online discourse, rather than atoms we have comments in a discussion, and the molecular bonds here are akin to what the comments are replying to,” Liam explained. “In my work, we’re treating discussions as molecular structures and building on a wealth of AI techniques that work in chemistry, such as graph transformers. This allows us to capture rich conversational semantics between comments, much like how atoms react with each other.”
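The molecule analogy above can be made concrete with a small sketch. The code below is illustrative only (not Liam's actual system, and the function name `discussion_to_graph` is a hypothetical helper): it maps a discussion thread to a graph whose nodes are comments and whose edges are reply-to relationships, the structure over which a graph transformer would then attend to capture conversational context.

```python
def discussion_to_graph(comments):
    """Map a list of (comment_id, parent_id, text) tuples to a graph.

    Returns (nodes, edges): a dict of node ids to comment text, and a
    list of reply edges. parent_id is None for the root post, which
    has no outgoing reply edge -- much like an unbonded atom.
    """
    nodes = {cid: text for cid, _, text in comments}
    edges = [(cid, pid) for cid, pid, _ in comments if pid is not None]
    return nodes, edges


# A toy Reddit-style thread: one root post and three replies.
thread = [
    ("c1", None, "Original post"),
    ("c2", "c1", "A reply"),
    ("c3", "c1", "Another reply"),
    ("c4", "c2", "A reply to the reply"),
]

nodes, edges = discussion_to_graph(thread)
# edges now records which comment "bonds" to which:
# [("c2", "c1"), ("c3", "c1"), ("c4", "c2")]
```

In a full pipeline, each node's text (and any attached image) would be embedded by language and vision models before the graph transformer processes the edge structure.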

Liam’s recent multimodal machine-learning research on Reddit, which involved training on 8,266 discussions with 18,359 labelled comments from 850 communities, has shown that his methods are more efficient, equitable, and accurate than competing models.

He is optimistic that his future research will continue to expand the scope of hate speech detection, promoting healthy, inclusive online discussions for greater social good. His goal is to develop innovative methods to stop abusive behaviour before it causes harm.

“Achieving this goal is in no small part thanks to the enthusiastic support of my advisors, Professors Cohen and Golab,” Liam said. “They are wonderful, supportive mentors who are strongly invested in applying artificial intelligence and data science for social good, and have encouraged me to do the same.”

Liam’s research on using multimodal machine learning to detect hate speech was featured recently in an article on Waterloo News titled, “AI saving humans from the emotional toll of monitoring hate speech.” His research also tied for first place at the 2023 Cheriton Research Symposium poster competition, an annual showcase of research excellence made possible by David Cheriton’s generous investment in computer science research and education at Waterloo.

Leadership

In addition to his groundbreaking research on hate speech detection, Liam has demonstrated steadfast dedication and leadership for societal good early in his career.

In 2018, he volunteered as a software developer for the Dalhousie Space Systems CubeSat project, aiming to build a small imaging satellite in partnership with the Canadian Space Agency to detect the effects of climate change. When the COVID-19 pandemic was declared, Liam pivoted the team’s efforts to create an emergency low-cost open-source medical ventilator to ensure life-saving respirators were available to vulnerable, underserved regions of Nova Scotia.

Liam has also volunteered with the Health Future Makers group at the non-profit Health Association Nova Scotia, working to modernize healthcare in the province using artificial intelligence and advocating for policies to enable these changes. The innovations he and his volunteer group pioneered improved patient and healthcare worker experiences in hospitals and long-term care facilities across the province. Notably, during the onset of the COVID-19 pandemic, he developed AI techniques to optimize the routing of personal protective equipment to long-term care homes in the province.

Liam has also been a mentor to incoming students at Dalhousie University as part of the Women in Computer Science Society, guiding them to access support services and hosting one-on-one sessions to address their stresses and aspirations. At Waterloo, he mentored six undergraduate research assistants to enrich their academic pursuits and instill in them the values of perseverance, critical thinking, and translating ideas into applied research.
