Vision Sciences Society Annual Meeting Abstract  |   August 2023
Open Access
Learning to direct attention in space and time
Author Affiliations & Notes
  • Zhenzhen Xu
    Vrije Universiteit Amsterdam
  • Sander A. Los
    Institute Brain and Behavior Amsterdam (iBBA)
  • Jan Theeuwes
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
  • Footnotes
    Acknowledgements  J.T. and Z.X. were supported by a European Research Council (ERC) advanced grant 833029 – [LEARNATTEND]. Z.X. was also supported by a CSC scholarship (No.201906990033).
Journal of Vision August 2023, Vol.23, 5266. doi:https://doi.org/10.1167/jov.23.9.5266
Zhenzhen Xu, Sander A. Los, Jan Theeuwes; Learning to direct attention in space and time. Journal of Vision 2023;23(9):5266. https://doi.org/10.1167/jov.23.9.5266.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Our ability to attend selectively to the world around us - attending to things that matter and ignoring things that may distract us - is crucial for interacting with our environment. Attending to a bicyclist on a collision course while ignoring bicyclists that will never hit us is a crucial attentional selection process. Recently, it was pointed out that through experience we learn the regularities present in the environment, which in turn biases the way we select information. Learning to extract these regularities is known as visual statistical learning (VSL), which is claimed to be largely unconscious, implicit, and unintentional. The current study focused on how we learn to expect particular events to occur at particular moments in time. Specifically, we investigated how the learned spatiotemporal distribution of events guides attention to particular locations at particular moments in time. The results showed that participants can learn to suppress, at particular moments in time, those locations that were likely to contain a distractor and to enhance, at particular moments in time, those locations that were likely to contain a target. Overall, these results indicate that implicitly learned spatiotemporal regularities dynamically guide visual attention towards probable target locations and away from probable distractor locations. To reveal how attentional suppression and enhancement unfold over time, we measured the occurrence of micro-saccades, as the propensity of these small fixational gaze shifts represents a proxy for covert attentional shifts. The change in direction of these micro-saccades across time provided an index of how attentional orienting changed over time, revealing how the attentional system adapts to the temporally changing contingencies in the environment. The current research provides fundamental knowledge about how we learn to deal so effectively with the many objects that appear at different locations and moments in time.
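Below is a minimal, illustrative analysis sketch, not the authors' actual pipeline, of the micro-saccade measure described above: micro-saccade onset times are binned, and per time bin the proportion of micro-saccades directed toward the probable target location versus the probable distractor location is computed. All column names, the bin width, and the +/- 45 degree direction criterion are assumptions made purely for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per detected micro-saccade.
# 't' is onset time relative to trial start (ms); 'direction' is the saccade
# direction in degrees; 'target_angle' and 'distractor_angle' are the angular
# positions of the high-probability target and distractor locations.
rng = np.random.default_rng(0)
n = 500
saccades = pd.DataFrame({
    "t": rng.uniform(0, 1200, n),
    "direction": rng.uniform(0, 360, n),
    "target_angle": rng.choice([45, 225], n),
    "distractor_angle": rng.choice([135, 315], n),
})

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = np.abs(a - b) % 360
    return np.minimum(d, 360 - d)

# Classify each micro-saccade as directed toward the probable target or the
# probable distractor location (within +/- 45 degrees, an arbitrary criterion).
toward_target = angular_distance(saccades["direction"], saccades["target_angle"]) < 45
toward_distractor = angular_distance(saccades["direction"], saccades["distractor_angle"]) < 45

# Bin onset times (200-ms bins) and compute the proportion of micro-saccades
# directed toward each location per bin; the time course of this directional
# bias is the proxy for covert attentional orienting described in the abstract.
bins = np.arange(0, 1400, 200)
saccades["bin"] = pd.cut(saccades["t"], bins)
time_course = pd.DataFrame({
    "p_toward_target": toward_target.groupby(saccades["bin"], observed=True).mean(),
    "p_toward_distractor": toward_distractor.groupby(saccades["bin"], observed=True).mean(),
})
print(time_course)
```

With real eye-tracking data, the same binning logic would be applied to detected micro-saccades aligned to the learned temporal structure of the trials, so that suppression and enhancement can be read off as opposite shifts in these two proportions over time.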
