Mapping & Localization without LIDAR: A robust camera SLAM solution

Background

Visual odometry is the ability of a moving platform to localize itself using only a camera. Simultaneous Localization and Mapping (SLAM) extends this by also building a map of the world at the same time, producing LIDAR-like results with only an inexpensive camera. Direct methods work on raw pixel intensities, while indirect methods work on detected visual features; hybrid methods aim to combine the advantages of both. However, existing hybrid methods generally require complicated systems with relatively high computational requirements that limit their real-time performance. They can also perform poorly in texture-deprived environments, may not deliver consistent performance, and may require a separate process to build and maintain several different map representations.
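To make the direct/indirect distinction concrete, the sketch below contrasts the two residuals each family minimizes: a photometric (intensity) error for direct methods and a feature reprojection error for indirect methods. All names, shapes, and the pinhole camera model are illustrative assumptions, not details of the Waterloo method.

```python
# Contrasting residuals for direct vs. indirect visual odometry.
# Everything here is an illustrative assumption, not the Waterloo method.
import numpy as np

def photometric_residual(ref_img, cur_img, ref_px, cur_px):
    """Direct: compare raw pixel intensities at corresponding locations."""
    return float(ref_img[ref_px[1], ref_px[0]]) - float(cur_img[cur_px[1], cur_px[0]])

def reprojection_residual(point_3d, pose, K, observed_px):
    """Indirect: project a 3D landmark with the current camera pose and
    compare against the detected feature location (error in pixels)."""
    R, t = pose                        # 3x3 rotation, 3-vector translation
    p_cam = R @ point_3d + t           # world frame -> camera frame
    p_img = K @ p_cam                  # pinhole projection
    u, v = p_img[0] / p_img[2], p_img[1] / p_img[2]
    return np.array([u, v]) - np.asarray(observed_px, dtype=float)

# Example: an identity pose and a landmark 2 m in front of the camera.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
pose = (np.eye(3), np.zeros(3))
print(reprojection_residual(np.array([0.1, -0.2, 2.0]), pose, K, [340.0, 200.0]))
```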

Description of the invention

Researchers at the University of Waterloo have developed a hybrid Visual SLAM method that tightly integrates existing direct and indirect methods, capitalizing on the advantages of each to address the above-mentioned limitations. Waterloo's Visual SLAM delivers the advantages of direct methods, such as sub-pixel accuracy, low computational cost, robustness under texture deprivation, and both sparse and semi-dense reconstruction density. It also delivers the advantages of indirect methods, such as robustness to large distances between views, lower sensitivity to optimization seeds, and robustness to lighting changes. Together, these advantages provide a global map query capability that supports map re-use. The method achieves state-of-the-art low-error performance: in visual odometry it outperforms all hybrid, direct, and indirect methods, while in full SLAM the accumulated error after loop closure is essentially zero.
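As a rough illustration of what "tightly integrates" can mean, the sketch below evaluates a single joint cost in which photometric (direct) and reprojection (indirect) residuals constrain the same pose estimate. The weighting scheme and robust kernel are placeholder assumptions, not the Waterloo formulation.

```python
# Illustrative joint cost for a tightly coupled hybrid system: both
# residual types constrain one pose. Weights and kernel are assumptions.
import numpy as np

def hybrid_cost(pose, direct_terms, indirect_terms, w_photo=1.0, w_geo=1.0):
    """Sum of Huber-weighted photometric and reprojection residuals,
    each term a callable that evaluates its residual at a candidate pose."""
    def huber(r, k=1.345):
        a = np.abs(r)
        return np.where(a <= k, 0.5 * r**2, k * (a - 0.5 * k))

    cost = 0.0
    for residual_fn in direct_terms:      # photometric (sub-pixel) terms
        cost += w_photo * huber(residual_fn(pose)).sum()
    for residual_fn in indirect_terms:    # feature reprojection terms
        cost += w_geo * huber(residual_fn(pose)).sum()
    return float(cost)
```

In a tightly coupled design like this, one optimizer sees both error types at once, rather than running a direct and an indirect pipeline side by side and fusing their outputs afterwards.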

Advantages

  1. Robust (to large, erratic motions) and reliable (in texture-deprived environments)
  2. Generates both a global and a local map
  3. The global map is reusable and allows for real-time photometric calibration and loop closure (illustrated in the sketch after this list)
  4. Map re-use (item 3) results in superior robustness and accuracy
  5. Computationally efficient (runs at 50 frames per second on a CPU; no special hardware needed)
  6. Agnostic to the type of sensor used, such as camera, IMU (e.g., accelerometer), GPS, LIDAR, or radar
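The loop-closure claim in item 3 refers to the standard effect in SLAM: recognizing a previously mapped place adds a constraint that lets the optimizer redistribute accumulated drift over the whole trajectory. The toy 1-D sketch below shows that correction in its simplest form; it is entirely illustrative, not the Waterloo optimizer.

```python
# Toy 1-D loop closure: odometry accumulates a small bias each step;
# the constraint "we returned to a known point" spreads that error
# back over the path. Illustrative only.
import numpy as np

steps = 100
odometry = np.full(steps, 1.0 + 0.02)        # true step is 1.0, plus 2% bias
poses = np.cumsum(odometry)                  # drifting dead-reckoned path
true_end = 100.0                             # loop closure: known endpoint
correction = (poses[-1] - true_end) / steps  # per-step share of the error
corrected = poses - correction * np.arange(1, steps + 1)
print(f"drift before closure: {poses[-1] - true_end:+.2f}")   # +2.00
print(f"drift after closure:  {corrected[-1] - true_end:+.2f}")  # +0.00
```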

Potential applications

  • Autonomous vehicles (including robots and underwater vehicles)
  • AR (Augmented Reality)
  • UAVs (Unmanned Aerial Vehicles)
  • HD maps
  • Medical Robotics (minimally invasive surgery)
  • Environmental Monitoring
Figure: 3D recovered map. B - the projected depth map of all active points; C - the occupancy grid used to ensure a homogeneously distributed map point sampling process; D - the inlier geometric features.
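Panel C's occupancy grid points to a common way of keeping map points spread evenly across the image: divide the frame into cells and keep at most one candidate per cell. The sketch below shows one such scheme; the cell size, scoring, and function names are assumptions, not the patented sampling process.

```python
# Sketch of occupancy-grid point sampling: accept at most the single
# best-scoring candidate per grid cell so that map points stay
# homogeneously distributed. Cell size and scoring are assumptions.
def sample_points(candidates, scores, img_w, img_h, cell=32):
    """candidates: iterable of (u, v) pixel coords; scores: matching
    quality values. Returns the best candidate found in each cell."""
    grid = {}
    for (u, v), s in zip(candidates, scores):
        if not (0 <= u < img_w and 0 <= v < img_h):
            continue                          # skip out-of-frame candidates
        key = (int(u) // cell, int(v) // cell)
        if key not in grid or s > grid[key][1]:
            grid[key] = ((u, v), s)           # keep the best per cell
    return [pt for pt, _ in grid.values()]
```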

Reference

10191

Patent status

Patent pending

Stage of development

Software prototype

Ongoing research

Contact

Scott Inwood

Director of Commercialization

Waterloo Commercialization Office

519-888-4567, ext. 43728

sinwood@uwaterloo.ca

uwaterloo.ca/research