The project aims to study the role that human trust and expertise play in a setting where a robot is tasked with providing assistance to its human teammates in achieving their respective goals. During the interaction, the robot must decide not only whether its human teammates require assistance, but also what form of assistance to provide and when to provide it.
This research explores: 1) the different possible ways of modelling human expertise and trust dynamics, 2) how these affect human behaviour while interacting with a robot, and 3) how the robot can measure and make use of these parameters to improve its action policy.
Collaborative assembly, kitting and sorting tasks are among the initial target applications.
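As a purely illustrative sketch of point 3) above, the robot could maintain a belief over a latent human trust level and update it from observable signals, such as whether the human accepts an offered intervention. The trust levels, acceptance probabilities, and the acceptance signal itself are all assumptions for illustration, not part of the project description.

```python
# Illustrative sketch (assumed model): the robot keeps a belief over a
# latent "trust" level and updates it via Bayes' rule after observing
# whether the human accepted its offered assistance.

TRUST_LEVELS = ("low", "medium", "high")

# Assumed likelihood that the human accepts assistance at each trust level.
P_ACCEPT = {"low": 0.2, "medium": 0.5, "high": 0.8}

def update_belief(belief, accepted):
    """One Bayesian update of the trust belief after an interaction."""
    posterior = {}
    for level, prior in belief.items():
        likelihood = P_ACCEPT[level] if accepted else 1.0 - P_ACCEPT[level]
        posterior[level] = prior * likelihood
    total = sum(posterior.values())
    return {level: p / total for level, p in posterior.items()}

# Start from a uniform belief and observe two accepted offers of help.
belief = {level: 1.0 / len(TRUST_LEVELS) for level in TRUST_LEVELS}
for accepted in (True, True):
    belief = update_belief(belief, accepted)

print(belief)  # belief mass shifts toward "high" trust
```

The resulting belief could then condition the robot's action policy, e.g. offering less intrusive assistance while estimated trust is low.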