This work aims to ease the implementation and reproducibility of human-robot collaborative assembly user studies. A ROS-based software framework is being implemented with four key modules: perception, decision, action, and metrics.
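The four modules above can be pictured as a simple pipeline. The sketch below is purely illustrative: the class and method names are hypothetical placeholders, not the framework's actual API, and real implementations would be ROS nodes communicating over topics rather than plain Python objects.

```python
class Perception:
    """Collects observations (stubbed here with fixed values)."""
    def observe(self):
        return {"joint_positions": [0.1, 0.4], "human_detected": True}

class Decision:
    """Chooses a command based on the current observation."""
    def decide(self, obs):
        return "pause" if obs["human_detected"] else "continue"

class Action:
    """Executes the chosen command (stubbed as a string result)."""
    def execute(self, command):
        return f"robot: {command}"

class Metrics:
    """Logs observation/command pairs for later study analysis."""
    def __init__(self):
        self.log = []
    def record(self, obs, command):
        self.log.append((obs, command))

def run_step(perception, decision, action, metrics):
    """One perception -> decision -> action -> metrics cycle."""
    obs = perception.observe()
    command = decision.decide(obs)
    result = action.execute(command)
    metrics.record(obs, command)
    return result
```

In a study, this cycle would run continuously, with the metrics module accumulating the data needed to compare conditions across participants.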
This project aims to detect and classify physical interaction between a human and the robot, along with the human's intent behind that interaction. Examples include an accidental bump, a helpful adjustment, or a push moving the robot away from a dangerous situation.
By distinguishing between these cases, the robot can adjust its behaviour accordingly. The framework begins by collecting feedback from the robot's joint sensors, including position, speed, and torque. Additional features, such as model-based torque estimates and joint power, are then computed.
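The feature computation described above can be sketched as follows. This is a minimal illustration under two standard assumptions: joint power is torque times angular velocity, and the difference between measured torque and the torque predicted by the robot's dynamic model (the residual) indicates external contact. The function name and data layout are hypothetical.

```python
def compute_features(position, velocity, measured_torque, model_torque):
    """Derive per-joint features from raw joint feedback.

    position, velocity, measured_torque: per-joint sensor readings.
    model_torque: torque predicted by the robot's dynamic model.
    The residual (measured minus predicted) highlights external
    contact forces; joint power (torque * velocity) captures the
    energy exchanged at each joint.
    """
    residual = [m - p for m, p in zip(measured_torque, model_torque)]
    power = [t * v for t, v in zip(measured_torque, velocity)]
    return {
        "position": list(position),
        "velocity": list(velocity),
        "torque_residual": residual,
        "joint_power": power,
    }
```

A feature vector like this could then be fed to a classifier that separates accidental bumps from deliberate adjustments, though the classification stage itself is beyond this sketch.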
While autonomous robots are finding increasingly widespread application, specifying robot tasks usually requires a high level of expertise. In this work, the focus is on enabling a broader range of users to direct autonomous robots by designing human-robot interfaces that allow non-expert users to set up complex task specifications. To achieve this, we investigate how user preferences can be learned through human-robot interaction (HRI).