September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
A computational model of the perisaccadic updating of spatial attention
Author Affiliations
  • Michael Teichmann
    Department of Computer Science, Chemnitz University of Technology
  • Julia Schuster
    Department of Computer Science, Chemnitz University of Technology
  • Fred Hamker
    Department of Computer Science, Chemnitz University of Technology
Journal of Vision September 2015, Vol.15, 69. doi:10.1167/15.12.69
© ARVO (1962-2015); The Authors (2016-present)
Abstract

During natural vision, scene perception depends on accurate targeting of attention, anticipation of the physical consequences of motor actions, and the ability to continuously integrate visual inputs with stored representations. For example, when an eye movement is impending, the visual system anticipates where the target will appear next, and attention updates to that new location. Recently, two different types of perisaccadic spatial attention shifts were discovered. One study shows that attention lingers after the saccade at the (now irrelevant) retinotopic position; that is, the focus of attention shifts with the eyes and is updated back to its original spatiotopic position only after the eyes land (Golomb et al., 2008, J Neurosci.; Golomb et al., 2010, J Vis.). Another study shows that shortly before saccade onset, spatial attention is remapped to a position opposite to the saccade direction, thus anticipating the eye movement (Rolfs et al., 2011, Nat Neurosci.). Recently, we proposed a model of perisaccadic perception based on predictive remapping and corollary discharge signals that explains several phenomena of vision (Ziesche & Hamker, 2011, J Neurosci.; Ziesche & Hamker, 2014, Front. Comput. Neurosci.). The model simulates perceived stimulus position across eye movements using a discrete eye position signal and a corollary discharge. We extended the model with an additional feedback loop and a tonic spatial attention signal, and show that the observations of Golomb et al. and Rolfs et al. are not contradictory but emerge jointly from the model dynamics: the former is explained by the proprioceptive eye position signal, the latter by the corollary discharge signal. Notably, both eye-related signals are core ingredients of the model and are also required to explain data from mislocalization and displacement-detection experiments. Thus, our model provides a comprehensive framework within which to discuss multiple experimental observations around saccades.
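As an illustration only (this is a caricature of the timing logic, not the authors' published neural model), the two eye-related signals can be sketched in a few lines: a corollary discharge that arrives before saccade onset and predicts the landing position, and a proprioceptive eye position signal that updates only after the eyes land. Together they reproduce both a presaccadic attention shift opposite to the saccade direction (Rolfs et al.) and a postsaccadic retinotopic lingering (Golomb et al.). All parameter values below are arbitrary assumptions for the sketch.

```python
# Toy timing sketch (not the authors' network model): how a predictive
# corollary discharge and a lagging proprioceptive eye position signal
# jointly produce both perisaccadic attention effects.
# All numbers are arbitrary illustrative assumptions.

CD_LEAD = 0.05          # s before saccade onset at which corollary discharge arrives
PEP_LAG = 0.10          # s after saccade landing at which proprioception updates
SACCADE_ONSET = 0.0     # s
SACCADE_LANDING = 0.05  # s (i.e. a 50-ms saccade)
SACCADE_VEC = 10.0      # rightward saccade amplitude (deg)
TARGET_WORLD = 5.0      # attended location in world coordinates (deg)

def eye_position_estimate(t):
    """Internal estimate of eye position, combining a fast, transient
    corollary discharge with a slow proprioceptive signal."""
    if t < SACCADE_ONSET - CD_LEAD:
        return 0.0              # pre-saccade: eye still at origin
    if t < SACCADE_LANDING:
        return SACCADE_VEC      # corollary discharge predicts the landing point
    if t < SACCADE_LANDING + PEP_LAG:
        return 0.0              # CD has decayed, proprioception has not caught up
    return SACCADE_VEC          # proprioceptive signal finally updated

def attended_retinal_position(t):
    """Retinal (eye-centered) locus of attention for a target that is
    maintained in world coordinates via the internal eye position estimate."""
    return TARGET_WORLD - eye_position_estimate(t)
```

Reading off the sketch: long before the saccade, attention sits at +5 deg on the retina; within the corollary discharge window it jumps to -5 deg, opposite to the saccade direction (the Rolfs et al. effect); just after landing it snaps back to +5 deg, the now-irrelevant retinotopic position (the Golomb et al. lingering); and once the proprioceptive signal catches up it settles at the correct -5 deg.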

Meeting abstract presented at VSS 2015
