Searching through a series of environments is a pervasive everyday experience, and although visual search and sequence learning are long-researched fields, little is known about people's ability to learn a visual search sequence. Environments that require more visual search may disrupt sequence learning for several reasons: they require more effort and time, and, if people were learning a sequence of eye movements, they would increase the noise-to-signal ratio. However, some visual search environments may permit sequence learning. In particular, when people search a familiar context of distractors, they find the target more quickly than when searching a novel context. This dissertation investigated the impact of visual search demands (i.e., pop-out vs. non-pop-out targets) and distractor environment (i.e., static vs. consistently changing contexts vs. random distractors) on sequence learning. The results show that sequence learning occurred in all conditions, suggesting that random noise in the environment and the need to perform visual search do not interfere with sequence learning. This finding has implications for understanding the mechanisms of sequence learning as well as for the everyday world. When people interact with user interfaces, they often engage in the same sequence of actions. These findings show that sequence learning occurs in a variety of cases and suggest that when a user interface updates and items are no longer where and when users expect them, people will likely struggle to complete their tasks.