Master’s Thesis Presentation • Human-Computer Interaction • TacTalk: Personalizing Haptics Through Conversation

Thursday, September 5, 2024 12:00 pm - 1:00 pm EDT (GMT -04:00)

Please note: This master’s thesis presentation will take place in DC 2310.

Anchit Mishra, Master’s candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Oliver Schneider, Daniel Vogel

Haptic experiences are highly personal: perception and preferences can vary drastically between people. While various personalization interfaces have been proposed, we lack a detailed understanding of how people personalize a haptic experience. In this work, we explore the vocabularies and mental models that users bring to personalizing haptic experiences. To study this, we introduce TacTalk, a conversational system based on the GPT-4o large language model (LLM) that enables users to describe and fine-tune virtual haptic experiences on the fly, without navigating settings menus or relying on expert intervention.

Inspired by how professional racing drivers communicate with their engineers to keep their car performing well, we present an application pairing TacTalk with a popular racing video game, Forza Horizon 5, along with evaluations of the system's usability and parameter-tuning consistency. We show that LLMs can manipulate haptic feedback parameters in a predictable and valid manner. We also conduct a user study and synthesize key themes in user preferences for customizing haptic feedback from interview and questionnaire responses. Responses to the NASA-TLX and SUS questionnaires show that users prefer a conversational mode of interaction over a visual interface offering the same capabilities. Additionally, we collect users' responses to the Big Five Inventory and Player Traits questionnaires and probe their correlation with personalization preferences. Our qualitative findings show that, when starting from scratch, users initially draw on real-world experiences and metaphors, and then refine haptic parameters using other metaphors or by focusing on specific aspects of the experience, such as in-game events and the game controller being used. An average conversation with TacTalk involves fewer than five back-and-forth interactions. Owing to the generality of LLMs and their ability to incorporate additional context without retraining, system architectures such as TacTalk can be used in a plug-and-play manner with existing virtual experiences.