Visual saliency predicts gaze during real-world driving task

Abstract

Models of bottom-up visual attention such as the "saliency map" predict overt gaze under laboratory conditions, in which seated subjects view static images or videos. Here, we show that the saliency map model predicts gaze at similar rates even when applied to head-camera video from a wearable eye-tracking system (Tobii Pro Glasses 2) while subjects drive an automobile or ride passively in the front passenger seat. The ability of saliency to predict gaze varies with the driving task (saliency better predicts passenger than driver gaze) and with external conditions (saliency better predicts gaze at night). We further demonstrate that predictive performance improves when the head-camera video is transformed to retinal coordinates before it is fed to the saliency model.
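The retinal-coordinate transform mentioned above amounts to re-centering each head-camera frame on the measured gaze point before computing saliency. The sketch below illustrates that idea under stated assumptions: it uses OpenCV's spectral-residual saliency purely as a stand-in for the saliency-map model evaluated in the paper, and the file name, gaze coordinates, and `recenter_on_gaze` helper are hypothetical, not the authors' implementation.

```python
# Minimal sketch: shift a head-camera frame so the gaze point sits at the
# image center ("retinal coordinates"), then compute a saliency map.
# OpenCV's spectral-residual saliency (opencv-contrib) is a stand-in here,
# not the specific saliency-map model used in the paper.
import cv2
import numpy as np

def recenter_on_gaze(frame, gaze_x, gaze_y):
    """Translate the frame so the gaze point lands at the image center,
    filling the vacated border with zeros (a crude retinal transform)."""
    h, w = frame.shape[:2]
    dx = w // 2 - int(gaze_x)
    dy = h // 2 - int(gaze_y)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(frame, M, (w, h))

def saliency_map(frame):
    """Compute a normalized (0-255) saliency map for a single frame."""
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, smap = sal.computeSaliency(frame)
    return (smap * 255).astype(np.uint8) if ok else None

# Example with one frame and one gaze sample (hypothetical inputs).
frame = cv2.imread("headcam_frame.png")
retinal = recenter_on_gaze(frame, gaze_x=812, gaze_y=430)
smap = saliency_map(retinal)
```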
