Many applications proposed for multi-robot teams require them to collect vast amounts of sensor data and convert that data into meaningful knowledge about their operating environment to enable timely decisions, either by the robots or by end users.
The focus of this research is to enable teams of autonomous vehicles to collect large-scale and multi-resolution information about dynamic and uncertain environments.
Most 3D mapping methods do not properly account for moving objects. Humans, vehicles, animals, other robots, and even swaying foliage all introduce unwanted artifacts into maps. Conflict resolution demands estimates of object location, velocity, and heading in real time, which will require significant advances over currently available detection methods. The aerial, ground, and humanoid robots selected for the RoboHub are all equipped to exploit these advances in robot localization, making them more capable of autonomous operation regardless of their surroundings.
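To illustrate what real-time estimation of object location, velocity, and heading involves, the sketch below implements a minimal linear Kalman filter with a constant-velocity motion model, tracking a moving object from noisy 2D position measurements. This is a generic textbook tracker, not the specific detection methods envisioned here, and all parameter values are illustrative assumptions.

```python
import math
import numpy as np

class ConstantVelocityKF:
    """Minimal Kalman filter tracking [x, y, vx, vy] of one moving object.

    Illustrative sketch only; dt, q, and r are assumed values, not from
    any particular system.
    """

    def __init__(self, dt=0.1, q=0.5, r=0.2):
        self.x = np.zeros(4)            # state estimate [x, y, vx, vy]
        self.P = np.eye(4) * 10.0       # start with high uncertainty
        # Constant-velocity transition: position advances by velocity * dt.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # We observe position only.
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q          # process noise (crude approximation)
        self.R = np.eye(2) * r          # measurement noise

    def step(self, z):
        # Predict with the constant-velocity model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the position measurement z = [x, y].
        innov = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    def heading(self):
        # Heading (rad) inferred from the velocity estimate.
        return math.atan2(self.x[3], self.x[2])
```

Feeding the filter positions of an object moving at a constant velocity, the velocity and heading estimates converge after a few dozen updates; dynamic-object handling in mapping would layer detection and data association on top of such per-object trackers.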
Within this research theme, we will focus primarily on three topics:
- Observers for systems on Lie groups, building on Professor Nielsen and Professor Kulić's work on Lie groups;
- Simultaneous localization and mapping (SLAM), building on Professor Waslander and Professor Melek's work on the application and development of state-of-the-art SLAM algorithms; and
- Perception in non-static environments, building on Professor Waslander's work on multi-camera cluster localization and mapping.
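As a concrete illustration of the first topic, the sketch below implements a simple nonlinear attitude observer on SO(3) in the style of a complementary filter: gyroscope measurements are integrated on the group via the exponential map, with a proportional correction from a measured rotation. This is a generic construction for illustration only, not the specific observers studied by Professors Nielsen and Kulić; the gain `k_p` is an assumed value.

```python
import numpy as np

def hat(w):
    # Map R^3 -> so(3): skew-symmetric matrix with hat(w) @ v = w x v.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    # Inverse of hat: extract the vector from a skew-symmetric matrix.
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def exp_so3(w):
    # Rodrigues formula: matrix exponential of hat(w), an element of SO(3).
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def observer_step(R_hat, omega, R_meas, dt, k_p=2.0):
    """One discrete step of a complementary-filter-style SO(3) observer.

    omega: body-frame angular velocity (gyro); R_meas: measured attitude
    (e.g. from vision). The innovation is the vee of the skew-symmetric
    part of the attitude error, pulling the estimate toward R_meas.
    """
    E = R_hat.T @ R_meas
    w_corr = vee(0.5 * (E - E.T))
    # Integrate gyro plus correction on the group via the exponential map,
    # so the estimate remains a valid rotation matrix at every step.
    return R_hat @ exp_so3((omega + k_p * w_corr) * dt)
```

Because the update multiplies by an exponential of a Lie algebra element, the estimate stays on SO(3) by construction, which is the key advantage of designing observers directly on the group rather than on a local parameterization such as Euler angles.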