Can Robots Be Bullied? A Crowdsourced Feasibility Study for Using Social Robots in Anti-Bullying Interventions

Abstract

Bullying in schools is a serious issue with severe, long-term consequences. We explore using social robots in anti-bullying programs to encourage children to intervene when their peers are bullied. To that end, we conducted a crowdsourced study to explore the feasibility of using robots in the context of bullying (i.e., to investigate whether robots are perceived as entities that can be bullied). We present qualitative and quantitative results from a between-subjects video study comparing robot bullying (robots being bullied) to human bullying (humans being bullied). Our findings suggest that while the majority of participants describe both instances with connotations of wrongness and immorality, they use different cognitive mechanisms for moral disengagement with robot bullying vs. human bullying. We also found significant differences in participants' perceptions of each scenario: participants associated robot mistreatment with bullying less strongly and were less willing to intervene in it. This work contributes insights toward understanding how people perceive bullying of robots, designing intelligent behaviors to discourage bullying of robots, and our long-term goal of developing anti-bullying pedagogical programs that use social robots.

Year of Publication
2021
Conference Name
IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
Date Published
Aug
DOI
10.1109/RO-MAN50785.2021.9515450