Jason Droll, Miguel Eckstein; Expected object position of two hundred fifty observers predicts first fixations of seventy seven separate observers during search. Journal of Vision 2008;8(6):320. doi: https://doi.org/10.1167/8.6.320.
Saccadic eye movements can be directed using ultra-rapid extraction of low-level information in images of natural scenes (Kirchner & Thorpe, 2005), and are also biased towards the expected location of objects (Torralba et al., 2006), allowing improved search performance in statistically structured scenes (Eckstein et al., 2006). Expected object locations can be objectively defined by the statistical properties of low-level features across the scene (Torralba et al., 2006), but these methods have not captured more complex relationships such as the relative configuration of objects. To bypass this problem, we quantified expected object locations in scenes by asking two hundred fifty observers to report the position where they would most expect a particular object to be located within each of twenty-four real-world scenes (e.g., a cup in a kitchen). These distributions included a wide range of variances (1.36–7.92 deg) and multiple foci (1–3). We compared these distributions to first fixations of seventy-seven different observers in two separate tasks: (1) detection, reporting the presence or absence of an object (N=48), and (2) localization, reporting either the position or expected location of an object (N=29). In target-absent trials for each task, endpoints of first fixations were significantly closer to the average expected position than to an equidistant control location (detect: 6.67 deg vs. 11.93 deg; localize: 6.39 deg vs. 11.62 deg). Expectation of object location also exerted influence on trials in which the target appeared at an unexpected position. Our results suggest that statistical knowledge of the relative configuration of objects is rapidly extracted from natural scenes, and that this knowledge is used to direct gaze in both detection and localization tasks. This pattern of behavior provides additional evidence for attentional mechanisms using sensory weighting based on expectations to guide eye movement behavior.
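The target-absent comparison above reduces to a simple distance computation: for each scene, measure the Euclidean distance (in degrees of visual angle) from each observer's first-fixation endpoint to the crowd-derived expected position, and compare against the distance to the equidistant control location. A minimal sketch of that comparison, using entirely hypothetical fixation coordinates (the actual data and coordinate frame are not given in the abstract):

```python
import math

def mean_distance(fixations, point):
    """Mean Euclidean distance (deg of visual angle) from
    fixation endpoints to a reference point."""
    return sum(math.dist(f, point) for f in fixations) / len(fixations)

# Hypothetical first-fixation endpoints for one scene (deg), illustration only.
fixations = [(3.0, 1.5), (4.2, 2.0), (2.8, 0.9)]
expected = (4.0, 1.0)    # average expected object position (crowd-sourced)
control = (-4.0, 1.0)    # equidistant control location (mirrored about center)

d_expected = mean_distance(fixations, expected)
d_control = mean_distance(fixations, control)
assert d_expected < d_control  # fixations land nearer the expected position
```

In the study this comparison was made per task (detection vs. localization); the sketch shows only the per-scene distance logic, not the statistical test.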