Speaker: Bowen Hui, University of Toronto
As software grows increasingly complex, there is a need to adapt it in order to maximize the end-user experience. In this talk, I will describe my PhD work on developing and learning a user model in the domain of intelligent assistance. The problem of intelligent assistance is cast as a decision-theoretic planning problem, so the intelligent agent's reasoning process is formalized as a partially observable Markov decision process (POMDP). In contrast to other user modelling approaches, the first part of my work focuses on modelling "user features" -- such as frustration, independence, and mental model state -- which play a role in defining interaction preferences. To illustrate, I will devote the first half of the talk to the development of a probabilistic mental model used to estimate the disruption induced by adaptive systems. Results show that this approach is competitive with alternative adaptive systems with respect to task performance, while providing the ability to reduce disruption and adapt to user preferences.
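For context, the standard POMDP formulation is a tuple $\langle S, A, T, R, \Omega, O \rangle$: $S$ is the state space (here including user features such as frustration and independence), $A$ the agent's assistance actions, $T(s, a, s') = \Pr(s' \mid s, a)$ the transition model, $R(s, a)$ the reward, $\Omega$ the set of observations (e.g., observable user behaviour), and $O(s', a, o) = \Pr(o \mid s', a)$ the observation model. The mapping to this domain is as suggested by the abstract; the notation itself is the textbook definition rather than a detail of the talk. Because the user's state is not directly observable, the agent maintains a belief $b$ over states, updated after taking action $a$ and observing $o$ by

$$b'(s') \propto O(s', a, o) \sum_{s \in S} T(s, a, s') \, b(s).$$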
The second part of my work focuses on developing the POMDP reward model, where the agent's reward is modeled as the user's utility of the interaction. Since different people have varying interaction preferences, there is a need to learn user-specific utility functions that reflect their subjective preferences. As such, I will devote the second half of the talk to the development of an experiential procedure that makes use of incremental preference elicitation techniques. Results indicate that an experiential approach helps people understand stochastic outcomes and better appreciate the sequential utility of intelligent assistance. Overall, my work combines modelling techniques from artificial intelligence with empirical methodology from human-computer interaction.
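As a concrete illustration of incremental preference elicitation, below is a minimal sketch of one standard technique (standard-gamble queries refined by bisection), not the specific procedure developed in this work; the function names are illustrative only.

    def elicit_utility(outcome, ask_user, tol=0.05):
        """Find p such that the user is indifferent between `outcome`
        for sure and a lottery giving the best outcome with probability p
        and the worst with probability 1 - p. That indifference point p
        is the normalized utility u(outcome) on [0, 1]."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            p = (lo + hi) / 2.0
            # ask_user(outcome, p) returns True if the user prefers the
            # lottery (best w.p. p, worst w.p. 1 - p) over the sure outcome
            if ask_user(outcome, p):
                hi = p   # lottery preferred: u(outcome) < p
            else:
                lo = p   # sure outcome preferred: u(outcome) > p
        return (lo + hi) / 2.0

Each query narrows the interval containing the user's utility for the outcome, so a handful of comparisons suffices to pin it down to the desired tolerance.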
Friday, February 6, 2009, 11:30 am EST (GMT -05:00)