Anastasia Kuzminykh, PhD candidate
David R. Cheriton School of Computer Science
Video-mediated communication has long struggled with asymmetrical constraints on situational awareness, especially in hybrid work meetings between collocated and remote participants. Advances in computer vision offer exciting opportunities to augment mediated situational awareness, but we must first understand what is meaningful to capture and present.
In this talk, I will present a functional-structural model of attention for video-mediated meetings that I developed during my internship with MSR Cambridge. The model was informed by a study of sense-making and the selectiveness of attention in hybrid work meetings. I will discuss the rationale behind the model's dimensions, their empirical support, and how the model relates to other conceptual frameworks. I will conclude with the model's design implications: the functional dimension represents what a potential technological feature might support, while the structural dimension outlines the architecture of feature implementation. By focusing on these aspects, I will show how purposeful attention can be decomposed into features conducive to machine analysis.