Grad Seminar: Multi-resolution Display Utilization in HRI Studies: Observations and Trends
Abstract
Human-robot interaction (HRI) studies are costly and usually must be conducted meticulously in person. Because of the work and organization needed to prepare such in-person studies, the number of participants involved is usually limited. It would benefit the field of robotics as a whole to conduct HRI studies with larger numbers of participants while recording their attentional metrics, including cursor movements. Cursor movements reveal where participants' attention lies over time in a social interaction setting, which can help characterize participant engagement dynamics and the effectiveness of robot behaviour in these social settings. Online platforms such as Amazon Mechanical Turk (MTurk) make larger participant numbers achievable; however, the attentional metrics that could be recorded in relation to videos on such platforms have until now been limited in scope (e.g. questionnaires administered after watching a series of videos). Multi-resolution display research has been ongoing for the past few decades, resulting in a number of platforms suited to different needs. Building on this line of work, we designed a Human-Computer Interaction (HCI) multi-resolution display platform called FocalVid, which records participants' cursor movements while they interact with videos in online MTurk-formatted studies. After extensive user validation of the platform through an HCI study, indicating that cursor movements gathered through FocalVid show notable similarities to human gaze behaviour, we used FocalVid in an HRI study examining the effects of robot social role and personality on participants' visual attentional metrics while they observed recorded HRI interaction videos.
An initial study assessed participants' perception of the robot's personality dimensions while they watched a series of unobstructed HRI videos, analyzed with Linear Mixed Model (LMM) statistical testing. This validation study confirmed our robot personality manipulations and indicated strong perceiver effects between participants' self-assessments and their assessments of the robot's personality dimensions. Our final study, which applied the FocalVid platform to the same videos as the initial study, indicated that the presence of the platform did not skew participants' perception of the robot's personality. MANOVA analysis of the cursor data showed that both the robot's social role and its personality affected participants' overall visual attention toward the robot. ANOVA testing of the cursor data further showed that the robot's personality affected participants' attention toward communicative elements of the social interaction (i.e. the robot's pointing hand while it communicated using gesture).
Presenter
Sahand Shaghaghi, PhD candidate in Systems Design Engineering
Join in-person or online via Zoom