Master’s Thesis Presentation: Novel Directions for Multiagent Trust Modeling in Online Social Networks

Friday, May 1, 2020 1:30 pm - 1:30 pm EDT (GMT -04:00)

Alexandre Parmentier, Master’s candidate
David R. Cheriton School of Computer Science

This thesis presents two works that share the goal of making multiagent trust modeling more applicable to online social networks.

The first demonstrates that analyzing the replies to content on a discussion forum can detect certain types of undesirable behaviour. This technique can be used to extract quantified representations of the impact agents have on their community, a critical component of trust modeling.
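The idea of scoring a comment purely from the replies it attracts can be sketched as follows. This is a hypothetical illustration, not the thesis's actual model: the marker lexicon, feature names, and threshold rule are all assumptions standing in for a learned classifier.

```python
# Hypothetical sketch: flagging a discussion-starting comment as undesirable
# based only on features of the replies it receives.
# The negative-marker lexicon and threshold are illustrative assumptions.

def reply_features(replies):
    """Turn a list of reply strings into a small feature vector."""
    negative_markers = {"troll", "reported", "rude", "spam"}  # assumed lexicon
    n = len(replies)
    neg = sum(any(w in r.lower() for w in negative_markers) for r in replies)
    return {
        "n_replies": n,
        "neg_fraction": neg / n if n else 0.0,
        "avg_len": sum(len(r) for r in replies) / n if n else 0.0,
    }

def predict_undesirable(replies, threshold=0.5):
    """Flag a comment when a large share of its replies contain negative markers."""
    return reply_features(replies)["neg_fraction"] >= threshold

replies = ["This is spam, reported.", "Please stop being rude.", "Interesting point!"]
print(predict_undesirable(replies))  # 2 of 3 replies are negative, so the comment is flagged
```

In the thesis, the threshold rule would be replaced by a trained machine learning model over richer reply features, but the input/output contract is the same: replies in, a judgment about the originating comment out.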

The second work expands on the technique of multi-faceted trust modeling by determining whether a clustering step that groups agents by similarity can improve the performance of trust link predictors. Specifically, we hypothesize that learning a distinct model for each cluster of similar users will result in more personalized, and therefore more accurate, predictions.
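The cluster-then-personalize idea can be sketched minimally as below. This is an assumption-laden toy: users are described by a single made-up feature (activity), clusters are fixed buckets rather than learned, and each per-cluster "model" is just the mean observed trust rate. A real multi-faceted trust model would use richer features, a learned clustering, and a proper predictor per cluster.

```python
# Toy sketch of clustering users and fitting one model per cluster.
# Features, boundaries, and the per-cluster model are illustrative assumptions.

def assign_cluster(user, boundaries=(10, 100)):
    """Bucket users into low/medium/high activity clusters."""
    low, high = boundaries
    if user["activity"] < low:
        return "low"
    return "medium" if user["activity"] < high else "high"

def fit_per_cluster(users):
    """Fit one trivial model per cluster: the mean observed trust rate."""
    sums, counts = {}, {}
    for u in users:
        c = assign_cluster(u)
        sums[c] = sums.get(c, 0.0) + u["trusted"]
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

users = [
    {"activity": 5, "trusted": 0}, {"activity": 7, "trusted": 1},
    {"activity": 50, "trusted": 1}, {"activity": 60, "trusted": 1},
    {"activity": 500, "trusted": 0},
]
models = fit_per_cluster(users)
print(models["low"])  # 0.5: low-activity users were trusted half the time
```

The hypothesis being tested is visible even in this toy: the three clusters yield very different trust rates, so a single global model would blur distinctions that per-cluster models preserve.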

Online social networks have exploded in popularity over the course of the last decade, becoming a central source of information and entertainment for millions of users. This radical democratization of the flow of information, while offering many benefits, also raises a raft of new issues. These networks have proven to be a potent medium for the spread of misinformation and rumors, may contribute to the radicalization of communities, and are vulnerable to deliberate manipulation by bad actors.

In this thesis, our primary aim is to examine content recommendation on social media through the lens of trust modeling. The central supposition is that the behaviors of content creators and the consumers of their content can be fit into the trust modeling framework, supporting recommendations of content from creators who are not only popular, but also have the support of trustworthy users and are trustworthy themselves. This research direction shows promise for tackling many of the issues we've mentioned.

Our works show that a machine learning model can predict certain types of anti-social behaviour in a discussion-starting comment, solely by analyzing the replies to that comment, with accuracy in the range of 70% to 80%. Further, we show that a clustering-based approach to personalizing multi-faceted trust models can increase accuracy on a downstream trust-aware item recommendation task, evaluated on a large dataset of Yelp users.