Projects

Current projects

Kinova Gen3 Robot Arm in Research Lab

We train robots to solve general tasks using only images. The robot is presented with an image of its desired goal configuration, and it learns to reach that goal using only camera images of the environment.
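
One common way to frame this is as a goal-conditioned policy that maps the current camera image and the goal image to an action. The sketch below illustrates that structure in PyTorch; the network sizes, the 7-DOF action space, and all names are illustrative assumptions, not the lab's actual model.

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """Maps (current image, goal image) to a robot action. Toy sketch."""
    def __init__(self, action_dim=7):
        super().__init__()
        # Shared convolutional encoder applied to both images.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head on the concatenated image embeddings.
        self.head = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs_img, goal_img):
        z = torch.cat([self.encoder(obs_img), self.encoder(goal_img)], dim=-1)
        return self.head(z)

policy = GoalConditionedPolicy()
obs = torch.rand(1, 3, 64, 64)   # current camera image
goal = torch.rand(1, 3, 64, 64)  # image of the desired goal configuration
action = policy(obs, goal)       # e.g. joint commands for a 7-DOF arm
print(action.shape)              # torch.Size([1, 7])
```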

Bearded man wearing a lower-body exoskeleton and crutches to walk

Wearable computer vision and deep learning are combined for real-time sensing and classification of human walking environments. Applications of this research include optimal path planning, obstacle avoidance, and environment-adaptive control of robotic exoskeletons. These powered biomechatronic devices provide assistance to individuals with mobility impairments resulting from ageing and/or physical disabilities.
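
As a rough illustration of the classification step, the sketch below runs a small convolutional network over a single camera frame. The terrain classes, network, and input size are hypothetical stand-ins for whatever taxonomy and model the project actually uses.

```python
import torch
import torch.nn as nn

# Hypothetical walking-environment classes; the project's taxonomy may differ.
CLASSES = ["level-ground", "incline-up", "incline-down", "stairs-up", "stairs-down"]

class TerrainClassifier(nn.Module):
    """Small CNN that labels a wearable-camera frame with a terrain class."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TerrainClassifier()
frame = torch.rand(1, 3, 128, 128)       # one frame from the wearable camera
probs = model(frame).softmax(dim=-1)
print(CLASSES[int(probs.argmax())])      # predicted walking environment
```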

Rendering of the Magnetic Levitation Floor

The goal of this project is to levitate a group of robots in 3D space using electromagnetic energy. MagLev (magnetically levitated) robots offer frictionless motion and precise motion control, with promising applications in many fields. Controlling magnetic levitation systems is not an easy task; a robust controller is therefore crucial for accurate manipulation in 3D space and for moving the robots smoothly to any desired location.
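
To make the control problem concrete, here is a toy 1-D levitation loop: a mass held at a reference height by a PD controller with gravity feedforward. This is a deliberately simplified sketch, not the robust controller the project develops, and all constants are invented.

```python
import numpy as np

# Simplified 1-D levitation model: a mass under gravity with a directly
# controllable vertical magnetic force.
m, g, dt = 0.1, 9.81, 0.001          # mass [kg], gravity [m/s^2], step [s]
kp, kd = 400.0, 40.0                 # PD gains, hand-tuned for this toy

z, v = 0.0, 0.0                      # height [m], velocity [m/s]
z_ref = 0.02                         # desired levitation height [m]

for step in range(5000):
    err = z_ref - z
    force = m * g + kp * err - kd * v   # gravity feedforward + PD feedback
    a = force / m - g                    # net acceleration on the mass
    v += a * dt                          # forward-Euler integration
    z += v * dt

print(f"height after 5 s: {z:.4f} m (target {z_ref} m)")
```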

This project focuses on deploying a set of autonomous robots to efficiently service tasks that arrive sequentially in an environment over time. A task is serviced when a robot visits the corresponding task location. Robots can then redeploy while waiting for the next task to arrive. The objective is to choose redeployment locations that account for the expected response time to tasks that will arrive in the future.
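
One standard way to formalize redeployment is to place the robots so as to reduce the mean travel distance to samples drawn from the estimated distribution of future task locations, i.e. a k-means/k-median style placement. A minimal sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples from the spatial distribution of future task
# arrivals (e.g. estimated from previously observed tasks).
tasks = rng.uniform(0, 10, size=(500, 2))

def redeploy(samples, n_robots, iters=50):
    """Lloyd's algorithm: place robots to reduce the mean distance
    (a proxy for expected response time) to future task locations."""
    positions = samples[rng.choice(len(samples), n_robots, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest robot...
        d = np.linalg.norm(samples[:, None] - positions[None], axis=-1)
        nearest = d.argmin(axis=1)
        # ...then move each robot to the centroid of its region.
        for r in range(n_robots):
            if (nearest == r).any():
                positions[r] = samples[nearest == r].mean(axis=0)
    return positions

print(redeploy(tasks, n_robots=3))   # candidate waiting locations
```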

Completed projects

Picture of the TALOS robot

This project aims to detect and classify physical interaction between a human and the robot, as well as the human's intent behind the interaction. Examples include an accidental bump, a helpful adjustment, or a push moving the robot out of a dangerous situation.

By distinguishing between these cases, the robot can adjust its behaviour accordingly. The framework begins with collecting feedback from the robot's joint sensors, including position, speed, and torque. Additional features, such as model-based torque calculations and joint power, are then calculated.
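
A rough sketch of that feature pipeline: estimate external torque as the residual between measured and model-predicted torque, compute joint power, and summarize a time window with simple statistics for a downstream classifier. The array shapes and statistics here are illustrative assumptions.

```python
import numpy as np

# Toy feature extraction for one time window of joint sensor data.
# Arrays are (T timesteps, N joints); values are random stand-ins.
T, N = 100, 7
rng = np.random.default_rng(1)
q, dq, tau_meas = rng.normal(size=(3, T, N))   # position, speed, torque

# Model-based torque from the robot's dynamic model (placeholder here;
# in practice computed via inverse dynamics from the joint states).
tau_model = rng.normal(size=(T, N))

residual = tau_meas - tau_model   # estimate of externally applied torque
power = tau_meas * dq             # joint power

# Simple per-joint window statistics as classifier features.
features = np.concatenate([
    residual.mean(axis=0), residual.std(axis=0),
    power.mean(axis=0), power.std(axis=0),
])
print(features.shape)   # (28,) -> input to e.g. an SVM or a small MLP
```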

Collaborative assembly using a Panda Powertool robot

This work aims to ease the implementation and reproducibility of human-robot collaborative assembly user studies. A software framework based on ROS is being implemented with four key modules: perception, decision, action and metrics.
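
As a structural illustration only, the skeleton below wires four such modules together in plain Python. In the actual framework each module would be a ROS node exchanging messages over topics; all class and method names here are invented.

```python
# Plain-Python skeleton of the four-module decomposition (hypothetical).
class Perception:
    def sense(self):
        return {"part_pose": (0.4, 0.1, 0.02)}   # placeholder observation

class Decision:
    def plan(self, observation):
        return "hand_over_part"                   # placeholder next step

class Action:
    def execute(self, step):
        print(f"executing: {step}")               # would command the robot

class Metrics:
    def __init__(self):
        self.log = []
    def record(self, observation, step):
        self.log.append((observation, step))      # e.g. task time, idle time

perception, decision, action, metrics = Perception(), Decision(), Action(), Metrics()
obs = perception.sense()
step = decision.plan(obs)
action.execute(step)
metrics.record(obs, step)
```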

Multicamera Cluster SLAM Visualization

This project brings together several novel components to address multi-camera SLAM with non-overlapping fields of view and to generate relative pose estimates. These include the Multi-Camera Parallel Tracking and Mapping (MCPTAM) algorithm, as well as novel approaches to scale recovery and to reducing degenerate motions in multi-camera SLAM.
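
A key enabler for non-overlapping cameras is the rigid, calibrated cluster: a pose tracked by one camera can be expressed in another camera's frame by composing with the fixed extrinsic transform. A minimal sketch with homogeneous transforms (all values are placeholders):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Fixed extrinsic calibration: camera B's frame expressed in camera A's
# frame (placeholder values; obtained offline for the rigid cluster).
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90-degree yaw
T_A_B = se3(Rz, np.array([0.2, 0.0, 0.0]))

# Pose of camera A in the world, estimated by A's own tracking thread.
T_W_A = se3(np.eye(3), np.array([1.0, 2.0, 0.5]))

# Camera B's world pose follows by composition, even though A and B
# share no overlapping field of view.
T_W_B = T_W_A @ T_A_B
print(T_W_B)
```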

Labeled Points in an Image Feed

Visual navigation algorithms face many difficult challenges that must be overcome before mass deployment is possible. State-of-the-art methods are susceptible to error when measurements are noisy or corrupted, and prone to failure when the camera undergoes degenerate motions.
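
A standard defence against corrupted measurements is robust estimation such as RANSAC, which fits a model to the largest consistent subset of the data. The toy example below recovers a line from measurements with 30% gross outliers; it stands in for the robust machinery a real visual navigation pipeline would use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic measurements: mostly a line, with 30% gross outliers,
# mimicking corrupted feature correspondences.
x = rng.uniform(0, 10, 200)
y = 0.7 * x + 1.0 + rng.normal(0, 0.05, 200)
bad = rng.random(200) < 0.3
y[bad] = rng.uniform(-10, 10, bad.sum())

def ransac_line(x, y, trials=200, tol=0.2):
    """Fit y = a*x + b while ignoring gross outliers."""
    best, best_inliers = None, 0
    for _ in range(trials):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol
        if inliers.sum() > best_inliers:
            best, best_inliers = (a, b), inliers.sum()
    return best

print(ransac_line(x, y))   # close to (0.7, 1.0) despite the outliers
```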

Example of Modified Motion Plan

While autonomous robots are finding increasingly widespread application, specifying robot tasks usually requires a high level of expertise. In this work, the focus is on enabling a broader range of users to direct autonomous robots by designing human-robot interfaces that allow non-expert users to set up complex task specifications. To achieve this, we investigate how user preferences can be learned through human-robot interaction (HRI).
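
One common model for learning preferences from interaction is pairwise comparison: the user picks the better of two candidate behaviours, and a Bradley-Terry-style update adjusts a weight vector over plan features. The sketch below simulates that loop; the features, user model, and learning rate are illustrative assumptions, not the interface studied in this work.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each candidate plan is summarized by a feature vector (e.g. path
# length, obstacle clearance, time spent near the user). The user's
# utility is modeled as w . features; w is what we learn.
w_true = np.array([-1.0, 2.0, -0.5])        # hidden "ground truth" user
w = np.zeros(3)                              # learned preference weights
lr = 0.5

for _ in range(2000):
    a, b = rng.normal(size=(2, 3))           # two candidate plans
    prefers_a = (w_true @ a > w_true @ b)    # simulated user choice
    # Bradley-Terry gradient step on P(a preferred) = sigmoid(w.(a-b)).
    d = a - b
    p = 1.0 / (1.0 + np.exp(-w @ d))
    w += lr * ((1.0 if prefers_a else 0.0) - p) * d

print(np.round(w / np.linalg.norm(w), 2))    # direction approaches w_true's
```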
