Alexandra Vtyurina, PhD candidate
David R. Cheriton School of Computer Science
People with visual impairments often rely on screen readers when interacting with computer systems. Increasingly, these individuals also make extensive use of voice-based virtual assistants (VAs). We conducted a survey of 53 people who are legally blind to identify the strengths and weaknesses of both technologies, as well as the unmet opportunities at their intersection. We learned that virtual assistants are convenient and accessible, but lack both the ability to engage deeply with content (e.g., read beyond the first few sentences of an article) and the ability to get a quick overview of the landscape (e.g., list alternative search results and suggestions). In contrast, screen readers allow for deep engagement with content (when the content is accessible) and provide fine-grained navigation and control, but at the cost of reduced walk-up-and-use convenience.
Based on these findings, we implemented VERSE (Voice Exploration, Retrieval, and SEarch), a prototype that extends a VA with screen-reader-inspired capabilities and allows other devices (e.g., smartwatches) to serve as optional input accelerators. In a usability study with 12 blind screen reader users, we found that VERSE meaningfully extended VA functionality.
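To make that interaction model concrete, the minimal sketch below shows one way voice commands and smartwatch gestures could be routed onto a shared set of screen-reader-style navigation actions over a parsed page. All names, commands, and gesture bindings here are illustrative assumptions, not the actual VERSE implementation.

```python
# Hypothetical sketch of a VERSE-like dispatcher (assumed names throughout):
# voice commands and smartwatch gestures map onto the same action space, so
# the watch acts purely as an input accelerator for the voice interface.
from dataclasses import dataclass


@dataclass
class Document:
    """A web page reduced to a flat list of readable elements."""
    elements: list[str]
    headings: list[int]  # indices into `elements` that are headings
    cursor: int = 0

    def read_current(self) -> str:
        return self.elements[self.cursor]

    def next_element(self) -> str:
        # Continue reading past the short summary a VA would normally give.
        self.cursor = min(self.cursor + 1, len(self.elements) - 1)
        return self.read_current()

    def next_heading(self) -> str:
        # Jump between headings to skim structure, as a screen reader would.
        for i in self.headings:
            if i > self.cursor:
                self.cursor = i
                break
        return self.read_current()


# Both input modalities resolve to the same navigation actions.
ACTIONS = {
    "read more": Document.next_element,     # voice command
    "next heading": Document.next_heading,  # voice command
    "swipe_right": Document.next_element,   # assumed smartwatch gesture binding
}


def dispatch(doc: Document, command: str) -> str:
    action = ACTIONS.get(command)
    return action(doc) if action else "Sorry, I don't know that command."


if __name__ == "__main__":
    doc = Document(
        elements=["Intro paragraph.", "Section 1", "Body text.", "Section 2"],
        headings=[1, 3],
    )
    print(dispatch(doc, "next heading"))  # -> "Section 1"
    print(dispatch(doc, "read more"))     # -> "Body text."
```

Keeping one action table for every input device is what lets a gesture stand in for a spoken command without duplicating navigation logic.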
This work was published at ASSETS 2019.