University of Waterloo
Engineering 5 (E5), 6th Floor
Phone: 519-888-4567 ext. 32600
Design team members: Stuart Doherty, Luke Windisch
Supervisor: Paul Fieguth
Motivation and background
The world of amateur stargazing can be a challenging one. For those with relatively little experience, initial interest and curiosity can quickly dissolve into frustration and annoyance: the distinction between a visible planet and a particularly bright star, or between an arbitrary cluster of stars and an important constellation, can be impossibly subtle to an inexperienced eye. For more experienced gazers, moving beyond the realm of casual observation to develop a deeper understanding of the heavens often requires the use of star charts and paper references, which can be clumsy and frustrating in their own right. Technological advancement and further exploration of the heavens have long been closely coupled activities; the enhanced reality project will use technology to address the difficulties faced by amateur astronomers.
The ultimate goal of the enhanced reality astronomy project, then, is to make stargazing an accessible, enjoyable, and interactive experience for amateur observers. Astronomy has been part of the human species’ shared culture and advancement for thousands of years, and will continue to be vitally important as technology expands the species’ boundaries beyond planetary confines. By highlighting interesting celestial information within a user’s field of view, and even by guiding the user’s gaze to fascinating objects outside of his or her current field of view, the enhanced reality astronomy project will enhance the reality of stargazing, helping develop a richer appreciation and understanding of the celestial frontier.
The completed enhanced reality astronomy project will be implemented as a wearable device, and will comprise three basic functional units: the input unit, the processing unit, and the display unit. The input unit will consist of a head-mounted camera and three-axis encoder. Visual information from the camera (the field of view of the user) will be supplemented with head movement and orientation information from the encoder. This information will then be passed on to the processing unit contained within a laptop computer. The input information will be used to search a celestial database, identifying where in the sky the user is looking, and returning relevant celestial information. Finally, celestial information returned from the processing unit and of interest to the user will be displayed to the user via the display unit. The display unit will consist of a translucent monocular screen that will overlay the user’s field of view with relevant astronomical information. For example, the lines of a particular constellation may be drawn between the constituent stars, or a particular planet may be circled and flagged.
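The data flow between the three functional units described above can be sketched in code. The following is a minimal illustrative sketch only, not the project's actual implementation: all names (`InputSample`, `Annotation`, `process`, `display`), the 20-degree field of view, and the flat screen projection are assumptions introduced for illustration.

```python
# Hypothetical sketch of the three functional units: input (camera +
# three-axis encoder), processing (celestial database lookup), and
# display (overlay on the monocular screen). All names and parameters
# here are illustrative assumptions, not the project's actual API.
from dataclasses import dataclass

@dataclass
class InputSample:
    """Output of the input unit: a camera frame plus head orientation."""
    frame: bytes          # raw camera image of the user's field of view
    azimuth: float        # head orientation from the encoder (degrees)
    altitude: float
    roll: float

@dataclass
class Annotation:
    """One overlay element for the display unit."""
    label: str            # e.g. "Jupiter" or a constellation name
    x: float              # normalized screen coordinates (0..1)
    y: float

def process(sample: InputSample, database: dict) -> list[Annotation]:
    """Processing unit: use head orientation to identify where in the
    sky the user is looking, and return objects inside the view."""
    annotations = []
    for name, (az, alt) in database.items():
        # Keep only objects within an assumed 20-degree field of view.
        if abs(az - sample.azimuth) < 10 and abs(alt - sample.altitude) < 10:
            # Project angular offsets onto normalized screen coordinates.
            annotations.append(Annotation(name,
                                          0.5 + (az - sample.azimuth) / 20,
                                          0.5 + (alt - sample.altitude) / 20))
    return annotations

def display(annotations: list[Annotation]) -> None:
    """Display unit stand-in: print what would be overlaid on screen."""
    for a in annotations:
        print(f"{a.label} at ({a.x:.2f}, {a.y:.2f})")

# Example: the user's gaze falls near two catalogued objects.
sky = {"Jupiter": (120.0, 45.0), "Betelgeuse": (125.0, 40.0)}
display(process(InputSample(b"", 122.0, 43.0, 0.0), sky))
```

The sketch deliberately keeps the units decoupled behind simple data types, mirroring the modular structure the design methodology below relies on: any unit can be developed or replaced independently as long as the interfaces hold.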
To achieve the project goals, numerous design and research efforts must come together to form the enhanced reality system. Many of these efforts are independent in nature, impacting one another minimally until they are integrated into the final system. This independence suggests that a parallel process design methodology is well suited to this project, providing numerous avenues for research and design; as delays occur with one activity, or priorities change, attention can be quickly switched to other activities without sacrificing overall project progress.
The parallel design methodology employed comprises two main branches: software/algorithms, and hardware. Both branches have clearly defined and separate objectives, and the activities under each branch are themselves highly modular. Development of the branches and their associated modules therefore adapts well to a parallel design methodology. In fact, the only activity of the design process that must occur sequentially is the final integration of the various hardware components with the software algorithms, which cannot happen until all software and hardware design effort is complete: software to work with the output of the hardware cannot be finalized until the form of the hardware output is known.