August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Searching near and far: The attentional template incorporates viewing distance
Author Affiliations & Notes
  • Surya Gayet
    Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
    Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, The Netherlands
  • Elisa Battistoni
    Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
  • Sushrut Thorat
    Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
    Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
  • Marius Peelen
    Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
  • Footnotes
    Acknowledgements  Funding: VENI grant (191G.085) from the Dutch Research Council (NWO) to Surya Gayet, and Consolidator grant (grant agreement No 725970) from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme to Marius Peelen.
Journal of Vision August 2023, Vol.23, 4686. doi:https://doi.org/10.1167/jov.23.9.4686

      Surya Gayet, Elisa Battistoni, Sushrut Thorat, Marius Peelen; Searching near and far: The attentional template incorporates viewing distance. Journal of Vision 2023;23(9):4686. https://doi.org/10.1167/jov.23.9.4686.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Humans are remarkably proficient at finding objects in cluttered environments. A widespread explanation is that observers generate a representation of the search target (an ‘attentional template’), which guides spatial attention towards target-like visual input. Any object, however, can produce vastly different visual input depending on its exact location; your car produces a retinal image that is ten times smaller when it is parked fifty meters away than when it is parked five meters away. Across four behavioral experiments, we investigated whether observers take viewing distance into account when searching for familiar object categories. On each trial, participants were pre-cued to search for a car or a person in the near or far plane of an outdoor scene. In ‘search trials’, the scene reappeared and participants indicated whether the search target was present or absent. In intermixed ‘catch trials’, two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe stimulus. Participants were more accurate at reporting the location (Exps. 1 & 2) and orientation (Exp. 4) of probe stimuli presented at the location of size-matching silhouettes. Thus, attentional templates incorporate contextual predictions about object appearance (e.g., size as inferred from viewing distance). This was only the case, however, when the silhouettes also matched the shape of the search target (Exps. 1 & 3). We conclude that canonical attributes of an object (shape) are necessary for contextual attributes (size) to guide the allocation of attention.
