Alicia Weisener, Roger Johansson; How optimal strategies evolve in memory-guided visual search: evidence from eye movement patterns. Journal of Vision 2018;18(10):649. doi: 10.1167/18.10.649.
© ARVO (1962-2015); The Authors (2016-present)
In everyday life, we search for visual objects in complex environments and often encounter the same search problem multiple times. Previous research has shown that repeated exposure to the same scene increases search efficiency, yet little is known about the role of the eye movements underlying memory-guided search. In the present study, we investigated how eye-movement patterns evolve over repeated viewings, with the aim of understanding how different components of such scanpaths develop and influence search performance. Eye movements were recorded from 25 participants who had to find a hidden target in real-world scene images. Eight scenes were repeated throughout the task and randomly intermixed with eight novel scenes over six repetition blocks. The location of the target in each repeated scene was fixed. As expected, search efficiency systematically increased with repeated viewings, with a corresponding reduction in the number of fixations. Critically, participants' eye-movement patterns were compared using the MultiMatch method (Dewhurst et al., 2012), which quantifies scanpath similarity along five dimensions: Shape, Length, Direction, Position and Duration. Results revealed that similarity in shape, length and position systematically increased over repetitions, and scanpath similarity on all of these dimensions was higher within participants (i.e., comparing the same participant across repetitions of the same scene image) than between participants (i.e., comparing different participants across repetitions of the same scene image). By relating these similarity measures to task performance, the present study demonstrates how different components of a scanpath adapt over repetitions in a memory-guided search task and specifies how those components contribute to efficient search strategies. Additionally, the findings suggest that individual top-down strategies are more important than bottom-up guidance provided by contextual features of the scene images.
Taken together, these results shed new light on how different scanpath components unfold with experience and thereby contribute to the development of an increasingly optimal search strategy.
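To illustrate the kind of comparison the MultiMatch method performs, the sketch below computes a crude position-dimension similarity between two scanpaths. This is a simplification for illustration only: the actual MultiMatch algorithm (Dewhurst et al., 2012) first simplifies each scanpath and aligns the two vector sequences before scoring each of the five dimensions, steps this sketch skips. All function and variable names here are hypothetical, not from the study.

```python
import math

def position_similarity(scanpath_a, scanpath_b, screen_diag):
    """Crude position-dimension similarity between two scanpaths.

    Each scanpath is a list of (x, y) fixation coordinates in pixels.
    Fixations are paired in order up to the shorter length; pairwise
    distances are normalised by the screen diagonal and inverted, so
    1.0 means the paired fixations land in identical positions.
    (Unlike MultiMatch proper, no scanpath simplification or alignment
    is performed first.)
    """
    n = min(len(scanpath_a), len(scanpath_b))
    if n == 0:
        return 0.0
    total = 0.0
    for (ax, ay), (bx, by) in zip(scanpath_a[:n], scanpath_b[:n]):
        total += math.hypot(ax - bx, ay - by) / screen_diag
    return 1.0 - total / n

# Hypothetical example: two similar three-fixation scanpaths on a
# 1920x1080 display, e.g. the same participant viewing the same
# repeated scene in two different blocks.
diag = math.hypot(1920, 1080)
a = [(100, 100), (800, 400), (1500, 900)]
b = [(120, 110), (790, 420), (1480, 880)]
```

With these inputs, `position_similarity(a, b, diag)` returns a value close to 1.0, consistent with the within-participant comparisons in the abstract, where repeated viewings of the same scene yield increasingly similar fixation positions.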
Meeting abstract presented at VSS 2018