Please note: This master’s thesis presentation will be given in person in DC 3317 and online.
Graeme Zinck, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Daniel Vogel
In voice-based interfaces, non-verbal features represent a simple and underutilized design space for hands-free, language-agnostic interactions. This work evaluates the performance of three fundamental types of voice-based musical interactions: pitch, interval, and melody. These interactions involve singing or humming a sequence of one or more notes.
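As a rough illustration (not the method used in the thesis), the note sequence behind such interactions could be estimated from a hummed recording with an off-the-shelf pitch tracker. The sketch below assumes Python with librosa and a hypothetical mono clip named hum.wav; real systems would need proper note segmentation and noise handling.

```python
import numpy as np
import librosa

def hummed_notes(path):
    """Estimate the sequence of sung/hummed notes in an audio clip."""
    y, sr = librosa.load(path, sr=None, mono=True)
    # Track the fundamental frequency with probabilistic YIN (pYIN).
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    # Keep voiced frames only and convert Hz to rounded MIDI note numbers.
    midi = np.round(librosa.hz_to_midi(f0[voiced])).astype(int)
    # Collapse runs of identical frame-level estimates into a note sequence.
    return [int(n) for i, n in enumerate(midi) if i == 0 or n != midi[i - 1]]

notes = hummed_notes("hum.wav")            # hypothetical input file
pitch = notes[0] if notes else None        # single-note "pitch" interaction
intervals = np.diff(notes)                 # "interval"/"melody" interactions
```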
A 21-person study evaluates the feasibility and enjoyability of these interactions. After training, the top-performing participants completed all interactions reasonably quickly (under 5 s) with average error rates between 1.3% and 8.6%. Others improved with training but still had error rates as high as 46% for pitch and melody interactions. The majority of participants found all tasks enjoyable. Based on these results, we propose design considerations for singing interactions as well as potential use cases for both standard computers and augmented reality glasses.