Candidate: Vladimir Joukov
Title: Fast approximate multi-output Gaussian processes for human objective function learning and control
Date: March 1, 2021
Time: 4:00 PM
Place: REMOTE ATTENDANCE
Supervisor(s): Kulic, Dana (Adjunct) - Melek, William W. (Mechanical & Mechatronics Engineering)
Teaching robots by demonstration offers a promising way to eliminate the time-intensive, tedious manual programming required for even simple movement tasks. It allows robots to learn motions from humans who are experts at a specific task, without requiring those experts to have robotics knowledge, and can lead to better, more human-like motion. Gaussian process (GP) regression models are an appealing machine learning method for capturing human movement: they learn expressive non-linear models from exemplar data with minimal parameter tuning, and they estimate both the mean and covariance of expert motions. However, exact GP training scales cubically with the number of training samples, and evaluating the predictive covariance scales quadratically. This long-standing computational bottleneck has prevented GPs from being widely utilized in real-time control applications.
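For concreteness, the cost of exact GP regression can be sketched as follows. This is a minimal NumPy illustration, not code from the thesis: it assumes a squared-exponential kernel, and the function name and hyperparameter defaults are invented for the sketch. The Cholesky factorization of the n-by-n kernel matrix is the cubic-cost step referred to above.

```python
import numpy as np

def gp_posterior_mean(x, y, xs, ell=1.0, sf=1.0, sn=0.1):
    """Exact GP regression with a squared-exponential kernel.

    The Cholesky factorization of the n x n kernel matrix costs O(n^3),
    which is what limits exact GPs in real-time control loops.
    """
    def k(a, b):
        # Squared-exponential kernel with length scale ell, signal std sf.
        return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

    K = k(x, x) + sn**2 * np.eye(len(x))  # noisy kernel matrix, n x n
    L = np.linalg.cholesky(K)             # O(n^3): the bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k(xs, x) @ alpha               # posterior mean at test points
```

Every new demonstration sample enlarges the matrix that must be refactorized, which is why the exact formulation does not scale to control-rate updates.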
Approximating a GP using the eigenvalues and eigenfunctions of its covariance operator leads to a significant reduction in training and regression complexity. Training complexity then grows only linearly with the number of training points, while regression depends only on the chosen number of eigenvalues; furthermore, in a special case, training becomes completely independent of the number of samples. The proposed method can regress over multiple outputs, learn the correlations between them, and estimate derivatives of the regressor of any order. Next, the approximate GP is used to learn a time-varying cost function from demonstrations. The learned cost function is then optimized in a model predictive control framework to reproduce the task on a robot while satisfying constraints and minimizing joint torques. The proposed approach accurately encodes the variability of the expert motion as well as correlations in task or joint space, allowing the robot to understand which parts of the task to focus on and to react to disturbances in a human-like way. The approach is compared in simulation to two other popular trajectory-distribution modeling methods and is extensively tested on the Franka Emika robot, demonstrating its ability to handle disturbances, constraints, and obstacles, and to reproduce human-demonstrated motions.
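A reduced-rank GP of this kind can be sketched in a few lines. The following is a minimal one-dimensional NumPy illustration assuming the standard Hilbert-space construction: Laplacian eigenfunctions on a bounded interval [-L, L] paired with the spectral density of a squared-exponential kernel. The function names, the domain half-width L, and the hyperparameter defaults are assumptions made for this sketch, not details from the thesis.

```python
import numpy as np

def hilbert_gp_fit(x, y, m=20, L=5.0, ell=1.0, sf=1.0, sn=0.1):
    """Reduced-rank GP fit via m Laplacian eigenfunctions on [-L, L].

    Training costs O(n * m^2): linear in the number of samples n,
    since only an m x m matrix must be assembled and solved.
    """
    j = np.arange(1, m + 1)
    # Square roots of the Laplacian eigenvalues on [-L, L] (Dirichlet BCs).
    sqrt_lam = np.pi * j / (2.0 * L)
    # Eigenfunction basis evaluated at the training inputs, shape (n, m).
    Phi = np.sqrt(1.0 / L) * np.sin(sqrt_lam * (x[:, None] + L))
    # Spectral density of the squared-exponential kernel at those frequencies.
    S = sf**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * sqrt_lam) ** 2)
    # Solve the m x m system instead of an n x n one.
    A = Phi.T @ Phi + sn**2 * np.diag(1.0 / S)
    w = np.linalg.solve(A, Phi.T @ y)
    return w, sqrt_lam, L

def hilbert_gp_predict(xs, w, sqrt_lam, L):
    """Posterior mean at test points; cost O(m) per point."""
    Phi_s = np.sqrt(1.0 / L) * np.sin(sqrt_lam * (xs[:, None] + L))
    return Phi_s @ w
```

Because the basis matrix Phi.T @ Phi can be accumulated one sample at a time, the expensive factorization stays a fixed m-by-m problem regardless of how many demonstrations are collected, which is what makes real-time use feasible.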