We train robots to solve general tasks using only images. Given an image of the desired goal configuration, the robot learns to reach that goal using only image observations of the environment.
This project focuses on deploying a team of autonomous robots to efficiently service tasks that arrive sequentially in an environment over time. A task is serviced when a robot visits the corresponding task location. Robots can then redeploy while waiting for the next task to arrive. The objective is to redeploy the robots so as to minimize the expected response time to tasks that will arrive in the future.
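The redeployment objective above can be sketched in a few lines. This is a minimal illustrative model, not the project's actual formulation: it assumes Manhattan-distance travel on a grid, a known discrete distribution over future task locations, and a small set of candidate robot configurations; all names and parameters here are hypothetical.

```python
def response_time(robots, task, speed=1.0):
    """Time for the nearest robot to reach the task location (Manhattan distance)."""
    tx, ty = task
    return min(abs(rx - tx) + abs(ry - ty) for rx, ry in robots) / speed

def expected_response_time(robots, task_dist):
    """Expected response time under a discrete distribution of future task locations.

    task_dist: list of ((x, y), probability) pairs.
    """
    return sum(p * response_time(robots, loc) for loc, p in task_dist)

def best_redeployment(candidate_configs, task_dist):
    """Pick the candidate robot configuration minimizing expected response time."""
    return min(candidate_configs, key=lambda c: expected_response_time(c, task_dist))

# Example: tasks arrive uniformly at the four corners of a 4x4 grid.
corners = [((0, 0), 0.25), ((0, 3), 0.25), ((3, 0), 0.25), ((3, 3), 0.25)]
center_pair = [(1, 1), (2, 2)]
corner_pair = [(0, 0), (3, 3)]
print(best_redeployment([center_pair, corner_pair], corners))  # the corner pair wins
```

In a richer model the task distribution would itself be estimated from arrival history, and redeployment cost (travel while no task is pending) could be traded off against the expected response time.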
While autonomous robots are finding increasingly widespread application, specifying robot tasks usually requires a high level of expertise. In this work, we focus on enabling a broader range of users to direct autonomous robots by designing human-robot interfaces that allow non-expert users to set up complex task specifications. To achieve this, we investigate how user preferences can be learned through human-robot interaction (HRI).