Current projects

Picture of the TALOS robot

This project aims to detect and classify physical interactions between a human and the robot, along with the human's intent behind each interaction. Examples include an accidental bump, a helpful adjustment, or a push to move the robot out of a dangerous situation.

By distinguishing between these cases, the robot can adjust its behaviour accordingly. The framework begins by collecting feedback from the robot's joint sensors, including position, speed, and torque. Additional features, such as model-based torque estimates and joint power, are then computed.
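As a rough illustration of this kind of feature extraction, the sketch below computes residual torque and joint power from per-joint sensor readings; the function and field names are hypothetical and do not reflect the project's actual interfaces.

```python
import numpy as np

def extract_interaction_features(velocity, measured_torque, model_torque):
    """Compute per-joint features for interaction classification.

    All arguments are 1-D arrays indexed by joint (hypothetical layout);
    model_torque is the torque predicted by the robot's dynamics model.
    """
    # External (residual) torque: what the sensors see beyond the model prediction.
    residual_torque = measured_torque - model_torque
    # Mechanical power delivered at each joint.
    joint_power = measured_torque * velocity
    # Simple summary features that a classifier could consume.
    return {
        "residual_torque": residual_torque,
        "joint_power": joint_power,
        "residual_norm": float(np.linalg.norm(residual_torque)),
        "total_power": float(np.sum(joint_power)),
    }
```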

Collaborative assembly using a Panda Powertool robot

This work aims to simplify the implementation and improve the reproducibility of human-robot collaborative assembly user studies. A ROS-based software framework is being implemented with four key modules: perception, decision, action, and metrics.
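To make the module decomposition concrete, here is a minimal rospy node skeleton for one such module; the topic names, message types, and decision logic are placeholders, not the framework's actual interfaces.

```python
#!/usr/bin/env python
# Illustrative ROS (rospy) skeleton for a single module of the framework.
import rospy
from std_msgs.msg import String

class DecisionModule(object):
    """Toy decision module: listens to perception output, publishes an action request."""

    def __init__(self):
        rospy.init_node("decision_module")
        self.action_pub = rospy.Publisher("/assembly/action_request", String, queue_size=10)
        rospy.Subscriber("/assembly/perception_state", String, self.on_perception)

    def on_perception(self, msg):
        # Decide on the next assembly step from the perceived state (placeholder logic).
        self.action_pub.publish(String(data="next_step_for:" + msg.data))

if __name__ == "__main__":
    DecisionModule()
    rospy.spin()
```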

Rendering of the Magnetic Levitation Floor

The goal of this project is to levitate a group of robots in 3D space using electromagnetic energy. MagLev (magnetically levitated) robots offer frictionless motion and precise motion control, giving them promising applications in many fields. Controlling magnetic levitation systems is difficult, so designing a robust controller is crucial for accurate manipulation in 3D space and for allowing the robots to reach any desired location smoothly.
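For intuition only, the sketch below shows a simplified single-axis PD controller with gravity feedforward for holding a levitation height; the project itself concerns robust multi-axis control, and the gains and mass here are arbitrary placeholders.

```python
class LevitationPDController:
    """Simplified single-axis PD controller with gravity feedforward (illustration only)."""

    def __init__(self, mass=0.1, kp=50.0, kd=10.0, g=9.81):
        self.mass, self.kp, self.kd, self.g = mass, kp, kd, g

    def force_command(self, z, z_dot, z_ref):
        # PD feedback on the height error plus feedforward to cancel gravity.
        error = z_ref - z
        return self.kp * error - self.kd * z_dot + self.mass * self.g

# Example: command needed to hold a robot at 5 cm while it sits at 3 cm.
ctrl = LevitationPDController()
print(ctrl.force_command(z=0.03, z_dot=0.0, z_ref=0.05))
```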

This project focuses on deploying a set of autonomous robots to efficiently service tasks that arrive sequentially in an environment over time. Each task is serviced when a robot visits the corresponding task location. Robots can then redeploy while waiting for the next task to arrive. The objective is to choose redeployment locations that account for the expected response time to tasks that will arrive in the future.
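One way to make this objective concrete is to estimate expected response time by sampling future task locations. The sketch below does this with a brute-force search over candidate robot sites; it only illustrates the objective function and is not the project's actual redeployment strategy.

```python
import numpy as np
from itertools import combinations

def expected_response_time(robot_positions, task_samples, speed=1.0):
    """Monte Carlo estimate of expected response time.

    robot_positions: (m, 2) array of robot locations.
    task_samples: (n, 2) array of samples from the task arrival distribution.
    Assumes the nearest robot services each task at constant speed (simplification).
    """
    dists = np.linalg.norm(task_samples[:, None, :] - robot_positions[None, :, :], axis=2)
    return float(np.mean(dists.min(axis=1)) / speed)

def best_redeployment(candidate_sites, num_robots, task_samples):
    """Brute-force the candidate configuration with the lowest expected response time."""
    best_cfg, best_cost = None, np.inf
    for cfg in combinations(range(len(candidate_sites)), num_robots):
        cost = expected_response_time(candidate_sites[list(cfg)], task_samples)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```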

Multicamera Cluster SLAM Visualization

This project brings together several novel components to address multi-camera SLAM with non-overlapping fields of view and to generate relative pose estimates. These include the Multi-Camera Parallel Tracking and Mapping (MCPTAM) algorithm, as well as novel approaches to scale recovery and to reducing degenerate motions in multi-camera SLAM.
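As a simplified view of how per-camera estimates relate in such a cluster (not MCPTAM itself), the sketch below expresses one camera's ego-motion estimate in the shared body frame using its fixed extrinsic calibration, which is the kind of transformation needed when the cameras' fields of view do not overlap.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def body_motion_from_camera(T_body_cam, T_cam_motion):
    """Express a camera's frame-to-frame motion estimate in the body frame.

    T_body_cam: fixed extrinsic transform of the camera in the body frame.
    T_cam_motion: the camera's estimated ego-motion in its own frame.
    """
    return T_body_cam @ T_cam_motion @ np.linalg.inv(T_body_cam)
```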