Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Peripheral vision contributions to contextual cueing
Author Affiliations
  • Stefan Pollmann
    Department of Experimental Psychology, Otto-von-Guericke-University, Magdeburg, Germany
  • Jonathan Napp
    Department of Experimental Psychology, Otto-von-Guericke-University, Magdeburg, Germany
  • Klaus Toennies
    Department of Simulation and Graphics, Otto-von-Guericke-University, Magdeburg, Germany
  • Franziska Geringswald
    Department of Experimental Psychology, Otto-von-Guericke-University, Magdeburg, Germany
Journal of Vision September 2016, Vol.16, 987. doi:
      © ARVO (1962-2015); The Authors (2016-present)


Contextual cueing leads to faster search times in repeated displays: the global layout of a search display facilitates search when displays are repeated (Brady & Chun, 2007). However, peripheral vision can convey only a limited amount of information about the environment. We used a model of visual summary statistics (Portilla & Simoncelli, 2000; Balas, 2006) to investigate the contribution of peripheral vision to contextual cueing. Summary statistics were calculated over areas that grew in size with eccentricity (Freeman & Simoncelli, 2011), simulating receptive field sizes in visual area V2. The contribution of peripheral vision was tested with brief (150 ms) previews of either the viewpoint-dependent, synthesized full-field metameric model displays or the original displays, prior to a search for a target T among L shapes. The search itself was restricted to a gaze-contingent tunnel that abolished peripheral vision and thereby prevented context learning (Geringswald & Pollmann, 2015). Preview benefits were measured in a subsequent test phase with full-field search in the same repeated and in novel original displays. In Experiment 1, we investigated whether previewing 20 different model displays, each representing the summary statistics of the same original display, leads to contextual cueing of the same size as previewing the same original display 20 times. Both original and model previews led to significant contextual cueing of comparable size. In Experiment 2, we examined the contribution of form and location information represented in the summary statistics by comparing the full model displays from Experiment 1 to model displays in which the L and T shapes were replaced by identical crosses, eliminating form information. Both the full models and the cross-replaced spatial models led to significant contextual cueing, with no significant difference between conditions.

Thus, visual summary statistics modeling receptive field sizes in V2 are sufficient to generate contextual cueing in visual search, even if they contain only the spatial layout of a display.

Meeting abstract presented at VSS 2016

