Research

Our research develops computational models of visual and motor areas of the brain, to test and improve our understanding of how these systems function.

We embed these models in robots so that they must confront the practical complications (e.g., nonlinear physics and specular surfaces) that humans face while seeing and moving.


Deep Networks and the Brain

Figure: surrounds in MT layer of convolutional model


Deep convolutional networks were originally inspired by the primate visual system, but a great deal has been learned about that system since. We are incorporating some of these findings to develop deep networks that are somewhat more brain-like, and working to understand which differences between deep networks and the brain have the greatest functional significance. Our work in this area includes constraining convolutional network architectures with primate tract-tracing data, experimenting with correlated Poisson variability as a regularizer, forcing deep layers to take on representations that reflect electrophysiology data, and integrating feedback mechanisms related to Gestalt laws. The goal is a deep network with primate-like representations and perceptual abilities.
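As one illustration of the kind of constraint involved, a model layer can be compared to recorded neural data through representational dissimilarity matrices, a standard analysis in systems neuroscience. The sketch below is illustrative only, not our exact training loss:

```python
import numpy as np

def rdm(activity):
    """Representational dissimilarity matrix: 1 minus the Pearson correlation
    between responses to each pair of stimuli (rows = stimuli, cols = units)."""
    return 1.0 - np.corrcoef(activity)

def representation_mismatch(model_activity, neural_activity):
    """Mean squared difference between the model's and the data's RDMs,
    computed over the upper triangle (each stimulus pair counted once).
    Minimizing this as an auxiliary loss pushes a network layer toward
    the representational geometry seen in electrophysiology recordings."""
    a, b = rdm(model_activity), rdm(neural_activity)
    iu = np.triu_indices_from(a, k=1)
    return float(np.mean((a[iu] - b[iu]) ** 2))
```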

Planning

Figure: action potentials from planning model


Moment-to-moment movement decisions are influenced by environmental affordances and habits, and also by a rough plan of action that we are constantly forming and updating. We developed the first physiologically plausible model of this rapid planning process.

OREO: An Open-Hardware Robotic Head

Figure: OREO robot


We have developed a robot that can perform eye movements similar to those of humans, including saccades, vergence, and rapid changes of focus distance. The stereo baseline is also similar to that of humans. This robot will allow us to address a major difference between deep vision networks and the visual cortex. Specifically, in contrast with deep networks, which typically process the whole image at uniform resolution, humans have rich visual perception only at the centre of the visual field, and we move our eyes rapidly, several times per second, to focus on the parts of a scene that are most important to whatever we are doing. This is essential in human vision, because processing the whole scene in detail would require a brain several times larger. Deep neural networks require much more energy and physical space than biological neural networks with similar processing power, so this foveated, active approach to vision will probably become increasingly important for robots as their visual capabilities expand. We have released the design files for the robot under an open-hardware license. Future work includes refining the robot's controllers and integrating foveated deep networks.
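The sketch below illustrates one simple way a movable gaze can be exploited: sample the scene at full resolution only near the point of fixation and progressively coarser toward the periphery. This is a minimal multi-resolution scheme for illustration; the function name and parameters are assumptions, not our pipeline:

```python
import numpy as np

def foveate(image, fixation, scales=(1, 2, 4), patch=64):
    """Return a crude 'foveated' pyramid: fixed-size patches centred on the
    fixation point, each covering a wider field of view at coarser resolution,
    so detail is concentrated where the eyes are pointed."""
    cy, cx = fixation
    levels = []
    for s in scales:
        half = (patch * s) // 2
        y0, y1 = max(cy - half, 0), min(cy + half, image.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half, image.shape[1])
        # Subsample by striding so each level is roughly `patch` pixels across
        levels.append(image[y0:y1:s, x0:x1:s])
    return levels
```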

Teaching Robots how to Grasp

Figure: hand-held gripper


A moment before you pick something up, networks in your parietal and frontal cortex convert information from your eyes into control signals that shape and orient your hand appropriately. We are developing models of these networks for control of robotic grasping. To train these networks, we have developed a unique approach to rapid human grasp demonstration. Specifically, one of us holds a robotic gripper and controls it with a joystick, while cameras take images of the object and a motion tracker senses the position and orientation of the gripper.
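Conceptually, each demonstration reduces to logging synchronized samples of image, gripper pose, and gripper command. The sketch below shows the idea with stand-in interfaces; `camera`, `tracker`, and `joystick` are hypothetical placeholders for the real hardware drivers, not part of our software:

```python
import time

def record_demonstration(camera, tracker, joystick, duration=10.0, rate=30.0):
    """Log synchronized (image, gripper pose, gripper command) samples while a
    person demonstrates a grasp. The three device objects are stand-ins for
    whatever drivers the real setup uses."""
    samples = []
    period = 1.0 / rate
    t_end = time.time() + duration
    while time.time() < t_end:
        samples.append({
            "time": time.time(),
            "image": camera.read(),          # RGB frame of the object
            "pose": tracker.pose(),          # gripper position + orientation
            "command": joystick.command(),   # open/close and wrist commands
        })
        time.sleep(period)
    return samples
```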

Adaptive Neural Control

Figure: JACO with hammer


We have written a robotic control library built around force control. Force control allows safer, compliant control of robotic arms, but requires a very accurate model of the robot's dynamics. To account for model imperfections and unexpected forces (e.g. those encountered when picking up a heavy object), we have developed an adaptive neural controller based on the functioning of the cerebellum. The adaptive controller is implemented with the Nengo neural modelling software's Python API, and it lets robots work with unknown and unmodelled tools, work safely alongside people, and compensate for the wear and tear of extended use, prolonging robot lifetime. Using Nengo, we can run our control algorithms on CPUs, GPUs, FPGAs, and specialized neuromorphic hardware, which operates at greatly reduced power. In addition to the ABR control library, we have written the ABR Jaco2 library for interfacing our control algorithms with the 6-DOF Kinova Jaco2 arm. Both libraries are available on GitHub and are free for non-commercial use.
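A minimal sketch of the adaptation idea in Nengo is shown below: an ensemble that represents the arm's state learns, via the PES rule, an extra torque term that counteracts unmodelled dynamics. This is a toy 2-joint example with stand-in state and training signals, assuming standard Nengo, and is not the ABR control implementation itself:

```python
import numpy as np
import nengo

# Stand-ins for the real robot interface: joint state (q, dq) and the
# training signal (conventionally the negative of the base controller's
# output, so the adaptive term learns to take over that effort).
def joint_state(t):
    return np.zeros(4)          # [q1, q2, dq1, dq2] for a 2-joint arm

def training_signal(t):
    return np.zeros(2)          # -u_base, one value per joint

model = nengo.Network(label="cerebellum-like adaptation")
with model:
    state = nengo.Node(joint_state)
    error = nengo.Node(training_signal)
    u_adapt = nengo.Node(size_in=2)          # adaptive torque, added to u_base

    adapt = nengo.Ensemble(n_neurons=1000, dimensions=4)
    nengo.Connection(state, adapt)

    # Decoders start at zero and are adjusted online by the PES learning rule
    conn = nengo.Connection(adapt, u_adapt,
                            function=lambda x: np.zeros(2),
                            learning_rule_type=nengo.PES(learning_rate=1e-4))
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```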

Spatial Cognition

Figure: spatial cognition model in Nengo simulator


An ongoing project is a functional neural model of spatial cognition. The goal is a system that can form an internal representation of its environment (akin to a mental map) and use this representation to perform spatial tasks such as navigation. The design is inspired by the hippocampus and entorhinal cortex and will use key spatially selective cell types such as head-direction cells, place cells, grid cells, boundary cells, and object-vector cells. These cells are driven by sensory input and self-motion cues and form a basis for more complex representations. Associations between objects and locations are learned using a combination of two biologically plausible learning rules, Prescribed Error Sensitivity and Vector-Oja. Future work includes developing reinforcement learning algorithms that can leverage this biologically inspired representation of space to produce goal-directed behaviour, as well as running the system on a physical robotic platform.
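As a small illustration of one of the spatially selective cell types involved, an idealized grid cell's firing can be written as the sum of three plane waves whose directions differ by 60 degrees, producing a hexagonal lattice of firing fields over space. This is a textbook simplification for illustration, not the cells in our model:

```python
import numpy as np

def grid_cell_rate(pos, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at 2D position `pos` (metres):
    the normalized sum of three cosines with directions 60 degrees apart,
    which tiles the plane with a hexagonal pattern of firing fields."""
    p = np.asarray(pos, dtype=float) - np.asarray(phase, dtype=float)
    rate = 0.0
    for k in range(3):
        theta = orientation + k * np.pi / 3.0
        direction = np.array([np.cos(theta), np.sin(theta)])
        rate += np.cos(2.0 * np.pi / spacing * (direction @ p))
    return (rate + 1.5) / 4.5   # rescale the sum (range [-1.5, 3]) to [0, 1]
```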