Micro Adventure robots cooperative multi-agent system

Design team members: David Ruzyski

Supervisor: Dr. K. Kamel

Background

The Pattern Analysis and Machine Intelligence (PAMI) Agents Group at the University of Waterloo has been researching the cooperative behaviour of autonomous agents for some time. For the most part, this work has been performed in simulation, because the group has not had access to a physical platform on which to perform its research. However, in the summer of 1999, the Systems Design department acquired a set of six small "soccer playing" robots from a company in Asia called Micro Adventure.

Shaped like cubes approximately 7.5 cm on a side, each robot has two wheels driven by independent motors, along with a small amount of communication hardware that allows it to be controlled remotely. Sensing is provided by a video camera mounted directly above the playing field, facing downward. The camera is attached to a computer that, through a hardware image capture card, performs image processing on individual video frames to determine the locations of the robots and of any other items on the field (i.e. a golf ball, which acts as the soccer ball for the robots to manipulate). Positional information from the image processor is then used to determine a course of action for each individual robot, and instructions are issued to the actuators in the robots through a simple transmitter attached to the computer's serial port.
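The overall control flow is thus a sense-plan-act cycle driven by the video feed. The sketch below illustrates that cycle only; every type and function name in it (Frame, grabFrame, locateObjects, planMoves, sendCommands) is a hypothetical placeholder, not part of the vendor software.

// Minimal sketch of the sense-plan-act cycle described above. Every name
// here is a hypothetical placeholder, not the vendor software's API.
#include <vector>

struct Frame { /* raw pixel data from the capture card */ };

struct ObjectPosition {
    int id;        // robot id, or -1 for the ball
    double x, y;   // field coordinates
    double theta;  // heading (robots only)
};

struct WheelCommand {
    int robotId;
    int leftSpeed;   // signed motor speeds
    int rightSpeed;
};

// Stubs standing in for the capture, vision, planning and serial layers.
Frame grabFrame() { return Frame{}; }
std::vector<ObjectPosition> locateObjects(const Frame&) { return {}; }
std::vector<WheelCommand> planMoves(const std::vector<ObjectPosition>&) { return {}; }
void sendCommands(const std::vector<WheelCommand>&) { /* write to the serial port */ }

int main() {
    // One pass of the loop per captured video frame (bounded here so the
    // sketch terminates).
    for (int frame = 0; frame < 100; ++frame) {
        Frame f = grabFrame();                 // image capture
        auto positions = locateObjects(f);     // image processing and recognition
        auto commands = planMoves(positions);  // agent decision making
        sendCommands(commands);                // transmit to the robots
    }
    return 0;
}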

Project description

Control of the robots is performed through pre-existing software running on a personal computer. In addition to providing the basic intelligent agent algorithm that coordinates the robots' activities in playing a game of soccer, this software is responsible for image capture and processing, as well as for the communication link to the robots themselves via the computer's serial port.

The objective of this design project is to redesign, upgrade and improve the existing software architecture for the PAMI robots in a number of key areas. The purpose is to develop the robots into a viable physical platform for researching multi-agent systems, which they currently are not. The overall effort can be summarized in two core areas:

Image processing and recognition improvements: the original image processing algorithm is inefficient, and the recognition of objects within the image often produces erroneous results. Robots may go unidentified or may be identified incorrectly. Without rectifying this to provide accurate positional data, the agents cannot be expected to exhibit good behaviour and performance.

Redesign of the monolithic architecture: the robot kit came with a single pre-existing application. This application contains all aspects of the robot software system, from image processing and recognition, to robot communications, to the agent behaviour algorithms themselves. It is not extensible in any way, and there is no support for researchers to plug in and test their agents using the software. Thus, in order to make this platform a useful tool, the robot source code must be rearchitected to give researchers an easy, extensible method for integrating their agents into the system (one possible shape for such an interface is sketched below).
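To make the plug-in requirement concrete, the sketch below shows one possible researcher-facing extension point: an abstract Agent interface that the rearchitected system would call once per processed frame. The Agent, WorldState and RobotCommand names are hypothetical illustrations, not an existing API in the current software.

// One possible shape for the researcher-facing plug-in point: an abstract
// Agent interface called once per processed frame. Agent, WorldState and
// RobotCommand are hypothetical sketches, not an existing API.
#include <cstdio>
#include <vector>

struct WorldState {
    struct Object { int id; double x, y, theta; };
    std::vector<Object> ownTeam;
    std::vector<Object> opponents;
    double ballX, ballY;
};

struct RobotCommand {
    int robotId;
    int leftSpeed;
    int rightSpeed;
};

class Agent {
public:
    virtual ~Agent() = default;
    // Called with the latest positions; returns wheel commands for the
    // robots this agent controls.
    virtual std::vector<RobotCommand> decide(const WorldState& state) = 0;
};

// A trivial example agent: drive every robot straight ahead.
class DriveForwardAgent : public Agent {
public:
    std::vector<RobotCommand> decide(const WorldState& state) override {
        std::vector<RobotCommand> out;
        for (const auto& robot : state.ownTeam)
            out.push_back({robot.id, 50, 50});
        return out;
    }
};

int main() {
    DriveForwardAgent agent;
    WorldState state{};
    state.ownTeam = {{0, 10.0, 20.0, 0.0}, {1, 30.0, 40.0, 1.57}};
    std::printf("commands issued: %zu\n", agent.decide(state).size());  // 2
    return 0;
}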

Design methodology

Improvements to the accuracy of the image recognition algorithm will come from two strategies, both illustrated in the sketch below. The first is to divide the colour spectrum into discrete regions that are maximally separated from one another in colour space. Since the robots are identified by two colour swatches on their tops (one for the team, one for the player within the team), it is hoped that giving each robot a colour scheme maximally different from every other robot's will minimize misidentification. Work in this regard has already begun, but there appears to be room for improvement. The second strategy deals with more robust handling of robots in close proximity. Robots that are near one another often have their colour swatches misattributed to the neighbouring robot. A simple strategy for handling this deficiency involves more intelligent matching of a colour swatch with its counterpart. Currently, this matching is done rather blindly, which produces the erroneous results.
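The sketch below illustrates both ideas: classifying a pixel by its nearest reference colour in colour space, and pairing each team swatch with the spatially nearest player swatch within a robot-sized radius instead of matching blindly. The reference colours, thresholds and structure names are illustrative assumptions, not values or code from the existing system.

// Sketch of both strategies above: classify a pixel by its nearest
// reference colour, and pair a team swatch with the spatially nearest
// player swatch. Reference colours and thresholds are illustrative only.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Yuv { double y, u, v; };
struct Swatch { int colourClass; double x, y; };  // centre of a detected blob

double colourDistance(const Yuv& a, const Yuv& b) {
    return std::sqrt((a.y - b.y) * (a.y - b.y) +
                     (a.u - b.u) * (a.u - b.u) +
                     (a.v - b.v) * (a.v - b.v));
}

// Index of the closest reference colour, or -1 if nothing is within
// maxDistance (rejecting ambiguous pixels reduces misidentification).
int classifyPixel(const Yuv& pixel, const std::vector<Yuv>& references,
                  double maxDistance) {
    int best = -1;
    double bestDist = maxDistance;
    for (std::size_t i = 0; i < references.size(); ++i) {
        double d = colourDistance(pixel, references[i]);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}

// Pair a team swatch with the nearest player swatch within a radius about
// the size of a robot top, so a neighbouring robot's swatch is not
// mistakenly attached to this one.
int matchPlayerSwatch(const Swatch& teamSwatch,
                      const std::vector<Swatch>& playerSwatches,
                      double maxPixelRadius) {
    int best = -1;
    double bestDist = maxPixelRadius;
    for (std::size_t i = 0; i < playerSwatches.size(); ++i) {
        double dx = playerSwatches[i].x - teamSwatch.x;
        double dy = playerSwatches[i].y - teamSwatch.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}

int main() {
    std::vector<Yuv> references = {{180, 100, 200}, {120, 200, 80}, {90, 150, 220}};
    int cls = classifyPixel({178, 104, 196}, references, 40.0);  // -> 0

    Swatch team{0, 120.0, 80.0};
    std::vector<Swatch> players = {{1, 124.0, 83.0}, {2, 300.0, 310.0}};
    int partner = matchPlayerSwatch(team, players, 15.0);        // -> 0

    std::printf("pixel class %d, matched player swatch %d\n", cls, partner);
    return 0;
}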

With respect to image processing efficiency, alongside an overall examination of the algorithm for coding inefficiencies, principal component analysis will be performed on the video frames to determine whether the working set that the recognition code operates on can be reduced. For example, instead of looking at the entire YUV triplet for each pixel, can the robots be recognized accurately using only two of the components? This shows particular promise if the step outlined previously, maximizing the differences between colours in colour space, is taken.
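Short of a full principal component analysis, the concrete example above can be checked quickly: if the reference colours remain well separated when one component (say the luminance Y, which is most sensitive to lighting) is ignored, the recognition code could plausibly work on two components per pixel. The sketch below performs that simpler check on made-up placeholder colours, not the actual calibration values.

// Quick check of the example above: how well separated are the reference
// colours with and without the Y (luminance) component? The colours here
// are made-up placeholders, not the actual calibration values.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Yuv { double y, u, v; };

// Smallest pairwise distance between the reference colours, optionally
// ignoring the Y component.
double minPairwiseDistance(const std::vector<Yuv>& refs, bool useY) {
    double best = 1e9;
    for (std::size_t i = 0; i < refs.size(); ++i) {
        for (std::size_t j = i + 1; j < refs.size(); ++j) {
            double dy = useY ? refs[i].y - refs[j].y : 0.0;
            double du = refs[i].u - refs[j].u;
            double dv = refs[i].v - refs[j].v;
            double d = std::sqrt(dy * dy + du * du + dv * dv);
            if (d < best) best = d;
        }
    }
    return best;
}

int main() {
    std::vector<Yuv> refs = {
        {180, 100, 200}, {120, 200, 80}, {200, 60, 120}, {90, 150, 220}
    };
    std::printf("min separation, full YUV: %.1f\n", minPairwiseDistance(refs, true));
    std::printf("min separation, UV only : %.1f\n", minPairwiseDistance(refs, false));
    // If the separation using only U and V remains large, the recognition
    // code could plausibly ignore Y and reduce its per-pixel work.
    return 0;
}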

Lastly, for the architecture redesign, the current monolithic C application will be broken down into several smaller, modular C++ applications: one responsible for image processing, one for agent behaviour, and another for robot communications. Communication between the separate applications will be achieved via TCP/IP connections, allowing a more distributed architecture and permitting easier integration of components written on Unix systems or in languages other than C++, such as Java.
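As an illustration of how the modules might exchange data over those TCP/IP connections, the sketch below encodes one frame's worth of positions as a plain text line that the vision process would write to a socket and the agent process would parse. The message layout and the encode/decode helpers are assumptions made for illustration; a simple text format is used here because it would also be trivial to parse from Java or on a Unix system.

// Sketch of one way the modules could exchange data over TCP/IP: positions
// are encoded as a newline-terminated text line written to a socket by the
// vision process and parsed by the agent process. The message layout and
// helper names are assumptions for illustration only.
#include <cstddef>
#include <cstdio>
#include <sstream>
#include <string>
#include <vector>

struct ObjectPosition { int id; double x, y, theta; };

// Vision side: encode one frame's positions as
// "POS <count> <id> <x> <y> <theta> <id> <x> <y> <theta> ...\n"
std::string encodePositions(const std::vector<ObjectPosition>& objects) {
    std::ostringstream out;
    out << "POS " << objects.size();
    for (const auto& o : objects)
        out << ' ' << o.id << ' ' << o.x << ' ' << o.y << ' ' << o.theta;
    out << '\n';
    return out.str();
}

// Agent side: decode the same line back into positions.
std::vector<ObjectPosition> decodePositions(const std::string& line) {
    std::istringstream in(line);
    std::string tag;
    std::size_t count = 0;
    std::vector<ObjectPosition> objects;
    if (!(in >> tag >> count) || tag != "POS") return objects;
    for (std::size_t i = 0; i < count; ++i) {
        ObjectPosition o{};
        if (!(in >> o.id >> o.x >> o.y >> o.theta)) break;
        objects.push_back(o);
    }
    return objects;
}

int main() {
    std::vector<ObjectPosition> frame = {{0, 10.0, 20.0, 1.57}, {1, 30.0, 40.0, 0.0}};
    std::string msg = encodePositions(frame);                   // would be written to the socket
    std::vector<ObjectPosition> parsed = decodePositions(msg);  // read on the other end
    std::printf("decoded %zu objects\n", parsed.size());
    return 0;
}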