September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
Independent Contributions of Multiple Types of Scene Context on Eye Movement Guidance and Visual Search Performance
Author Affiliations
  • Kathryn Koehler
    Department of Psychological and Brain Sciences, University of California, Santa Barbara
  • Miguel Eckstein
    Department of Psychological and Brain Sciences, University of California, Santa Barbara
Journal of Vision September 2015, Vol.15, 756. doi:10.1167/15.12.756
© ARVO (1962-2015); The Authors (2016-present)
Abstract

We search for objects in environments that provide multiple cues to guide our attention. Contextual information facilitates object recognition and guides eye movement behavior (Biederman, 1972; Loftus & Mackworth, 1978; but see Henderson & Hollingworth, 1999). Current research either does not precisely define different types of contextual information (Greene, 2013) or focuses on a single type (e.g., object co-occurrence: Mack & Eckstein, 2011; scene gist: Torralba et al., 2006). In this work we define three types of contextual information (object co-occurrence, multiple object configuration, and background category) and assess their independent contributions to eye movements and behavioral performance during visual search. Eye-tracked participants (n = 160) completed a yes/no task after searching for a target object in 48 realistic, computer-rendered scenes that contained all, none, or any combination of the three types of contextual information. Retinal eccentricity and local contrast of the target and background were controlled across conditions. The type of contextual information had a significant effect on the detectability of the target (F = 9.1, p < .001) and on the distance of the closest fixation to the target location (F = 46.24, p < .001). Object co-occurrence and multiple object configuration each contributed to performance (each p < .05 relative to the no-context condition), whereas background category failed to affect performance in all comparisons (all p > .05). Analysis of the difference in sensitivity (d') between conditions with individual versus combined contextual information failed to reject the hypothesis that the independent contributions of each type of context were additive. Our results suggest that multiple types of contextual information contribute independently to visual search performance, and that they may do so in an additive way.
This suggests that future research would benefit from a taxonomy of scene context types and more precise definitions of contextual information.
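The sensitivity measure and additivity test above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis code: the hit and false-alarm rates below are hypothetical, and the standard signal-detection formula d' = z(H) - z(F) is assumed for the yes/no task.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity for a yes/no task: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for a no-context baseline and two single-context conditions.
d_none = d_prime(0.60, 0.40)    # no contextual information
d_cooc = d_prime(0.80, 0.30)    # object co-occurrence only
d_config = d_prime(0.75, 0.35)  # multiple object configuration only

# Under additivity, the gain from combining contexts should equal
# the sum of the gains each context provides on its own.
predicted_combined = d_none + (d_cooc - d_none) + (d_config - d_none)
```

Comparing `predicted_combined` against the d' measured in the combined-context condition is one way to frame the additivity hypothesis the abstract tests.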

Meeting abstract presented at VSS 2015
