VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time

Abstract

Detecting emotions while driving remains a challenge in Human-Computer Interaction. Current methods to estimate the driver’s experienced emotions use physiological sensing (e.g., skin conductance, electroencephalography), speech, or facial expressions. However, these approaches require drivers to wear devices, perform explicit voice interaction, or show robust facial expressiveness. We present VEmotion (Virtual Emotion Sensor), a novel method to predict driver emotions unobtrusively using contextual smartphone data. VEmotion analyzes information including traffic dynamics, environmental factors, in-vehicle context, and road characteristics to implicitly classify driver emotions. We demonstrate its applicability in a real-world driving study (N = 12) that evaluates the emotion prediction performance. Our results show that VEmotion outperforms facial expressions by 29% in a person-dependent classification and by 8.5% in a person-independent classification. We discuss how VEmotion enables empathic car interfaces to sense the driver’s emotions and provide in-situ interface adaptations on-the-go.
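The core idea above, mapping contextual driving features to discrete emotion labels, can be illustrated with a minimal sketch. The feature names (normalized speed, traffic density, rain flag) and the nearest-centroid classifier are illustrative assumptions for this sketch, not the model or feature set actually used in the paper:

```python
# Hypothetical sketch of indirect, context-based emotion classification:
# contextual features are mapped to emotion labels via nearest centroids.
from collections import defaultdict
import math

def train_centroids(samples):
    """samples: list of (feature_vector, emotion_label); returns per-label mean vectors."""
    sums = defaultdict(list)
    counts = defaultdict(int)
    for features, label in samples:
        if not sums[label]:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in vec] for label, vec in sums.items()}

def predict(centroids, features):
    """Return the emotion label whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(centroids[label], features))

# Toy contextual samples: [normalized speed, traffic density, rain flag]
training = [
    ([0.9, 0.1, 0.0], "happy"),    # free-flowing traffic, dry road
    ([0.2, 0.9, 1.0], "angry"),    # congestion in the rain
    ([0.5, 0.5, 0.0], "neutral"),  # moderate conditions
]
centroids = train_centroids(training)
print(predict(centroids, [0.85, 0.15, 0.0]))  # nearest to the "happy" centroid
```

In this toy setup, a new context vector is assigned the label of the closest class centroid; a person-dependent classifier would train one such model per driver, while a person-independent one pools samples across drivers.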

Publication
In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST ’21), October 10–14, 2021, Virtual Event, USA.