University of Waterloo
Engineering 5 (E5), 6th Floor
Phone: 519-888-4567 ext. 32600
Design team members: Nadim Jamal, Rishi Jobanputra
Supervisor: Dr. Hamid Tizhoosh
Many real-world situations would benefit from higher-quality filming of moving targets (i.e., people). In most filming situations, an individual must manually adjust the video camera to follow the moving person. This requires extra effort and is subject to imprecision due to human error. The need therefore arises for an automated system capable of filming a moving target.
A system that automatically tracks moving targets with a video camera can be used for applications such as videoconferencing, e-learning, videotaping lectures, and many other similar situations.
Some automated tracking systems are available on the market; however, most are designed solely for videoconferencing, which limits them to that application. Moreover, most of these systems cost tens of thousands of dollars.
It is our goal to design and implement a real-time automated panning camera (APC) system that is capable of tracking a moving target. A high-level design overview can be found below in Figure 1.
Figure 1: High-level design overview
As illustrated in the high-level diagram, the target itself could be a moving person. What distinguishes the individual from the environment is the active infrared (IR) emitter that he or she wears.
To determine the position of the individual, a series of IR sensors (which make up the sensing unit) are needed to detect the IR light emitted. The sensor inputs are sent to the computer (PC) via the data acquisition (DAQ) system. The PC will contain the necessary algorithms to accurately determine the location of the target based on the sensor inputs.
Once the location of the individual has been determined, a control signal is sent to the stepper motor via the DAQ system. The stepper motor is responsible for panning the camera to the location of the individual.
The APC system was designed using a decomposition methodology: the overall system is separated into subsystems that are solved concurrently and then integrated to form the final system. The two major subsystems identified are sensing and actuation. Sensing involves determining the spatial location of the target, whereas actuation involves moving the camera to track it.
Given the IR emitter and sensor specifications, it is necessary to develop a scheme to locate the IR emitter. Locating the point light source can be done through a triangulation approach. Triangulation location-sensing methods use the geometric properties of triangles to determine an object's location. As illustrated in Figure 2, this approach requires two reference points (i.e., IR sensors) a known distance, ds, apart. Each reference point can determine its distance from the target (i.e., d1, d2), but has no knowledge of the target's angular position. Through the Law of Cosines, it is possible to determine the angular position of the target relative to each sensor. This result can then be used to determine the target's angular position with respect to the camera.
Figure 2: Triangulation
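The triangulation step above can be sketched in code. This is a minimal illustration only, assuming sensor 1 sits at the origin, the baseline lies along the x-axis, and the camera is mounted at the midpoint of the baseline; none of these placements are specified in the design.

```python
import math

def target_angle(ds, d1, d2):
    """Triangulate the IR emitter and return its angular position
    relative to a camera assumed to sit midway between the sensors.

    ds -- baseline distance between the two IR sensors
    d1 -- measured distance from sensor 1 (at the origin) to the target
    d2 -- measured distance from sensor 2 to the target
    """
    # Law of Cosines: interior angle at sensor 1 between the baseline and d1
    cos_a = (ds**2 + d1**2 - d2**2) / (2 * ds * d1)
    alpha = math.acos(max(-1.0, min(1.0, cos_a)))  # clamp against noise

    # Target coordinates with sensor 1 at the origin, baseline on the x-axis
    x = d1 * math.cos(alpha)
    y = d1 * math.sin(alpha)

    # Angular position seen from the camera at the baseline midpoint
    return math.atan2(y, x - ds / 2)
```

For a target directly above the midpoint (e.g. at (0.5, 1) with ds = 1), both measured distances are equal and the function returns pi/2, i.e. the camera looks straight ahead.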
Once the angular position of the target has been determined, we must pan the camera towards it. This is done by sending a control signal from the computer to the stepper motor via the DAQ system. The stepper motor, which will be interfaced with the camera mount, will rotate the camera towards the position of the target.
Integrating the two subsystems should be a fairly straightforward process. The DAQ and the PC act as the link between the two subsystems. The DAQ receives measurements from the sensors and outputs control signals to the stepper motor, while the PC runs the sensing and control algorithms. In a sense, the APC system can be considered a closed-loop control system.
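The closed-loop behaviour can be sketched as a sense-compute-actuate cycle. The callbacks read_distances and send_steps are hypothetical stand-ins for the DAQ interface, and the geometry and step resolution follow the same illustrative assumptions as above.

```python
import math

def pan_loop(read_distances, send_steps, ds, step_angle_deg=1.8, cycles=100):
    """Closed-loop sketch: read sensor distances, triangulate the target,
    and step the motor to close the angular error.  read_distances and
    send_steps are hypothetical hardware callbacks, not real DAQ calls."""
    camera_angle = 0.0  # current pan angle of the camera, radians
    for _ in range(cycles):
        d1, d2 = read_distances()
        # Triangulation via the Law of Cosines (see Figure 2)
        cos_a = (ds**2 + d1**2 - d2**2) / (2 * ds * d1)
        alpha = math.acos(max(-1.0, min(1.0, cos_a)))
        target = math.atan2(d1 * math.sin(alpha),
                            d1 * math.cos(alpha) - ds / 2)
        # Command the stepper to reduce the error, then track its motion
        steps = round(math.degrees(target - camera_angle) / step_angle_deg)
        send_steps(steps)
        camera_angle += math.radians(steps * step_angle_deg)
    return camera_angle
```

Because the loop recomputes the error every cycle, sensor noise or a moving target is corrected on the next iteration, which is what makes the APC system behave as a closed-loop controller.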