Vision Sciences Society Annual Meeting Abstract  |  July 2013, Volume 13, Issue 9
Role of attention, eye-movements, and landmarks in tracking an occluded object.
Author Affiliations
  • Deborah Aks
    Center for Cognitive Science, Rutgers University
  • Meriam Naqvi
    Department of Biological Sciences, Rutgers University
  • Ronald Planer
    Department of Philosophy, Rutgers University
    Center for Cognitive Science, Rutgers University
  • Kevin Zish
    Psychology Department, George Mason University
  • Zenon Pylyshyn
    Center for Cognitive Science, Rutgers University
    Psychology Department, Rutgers University
Journal of Vision July 2013, Vol.13, 1278. doi:https://doi.org/10.1167/13.9.1278

We extend our 2012 VSS study of tracking a single object that moves behind an occluding surface. Observers tapped the screen when cued by a sound to indicate where the moving object was at the time of the sound. We measured gaze and touch localization when the object was either occluded or visible, with or without landmarks. We found that gaze and touch undershot the target position on occlusion trials, especially later in tracking (>1.9 s), and that localization was not affected by landmarks. Here we test whether this lag bias reflects coding of object position by the eye-movement system, by comparing tracking with attention alone (gaze constrained) vs. free eye movements, and whether landmarks help in tracking hidden objects when gaze is constrained. Method: Seventy-five participants tracked a 1° square that moved at 4°/s across marked or unmarked 5-s trials. Marks were a row of 10 asterisks along the object's path. A sound probe (SP) occurred at a random time between 1.3 and 3.8 s after the square started moving. Participants indicated the object's position by tapping it with their forefinger; the object was hidden on half of the trials. Thirty participants tracked the square with their gaze fixated on the center of the screen, and a second group of 45 participants tracked the object with their eyes free to move.

Results: Localization was best when the tracked object was visible and early in the tracking sequence (0.3° overshoot). On occlusion trials, taps lagged behind the object (4.5°), and the lag increased with tracking distance. Landmarks had different effects depending on fixation and SP delay: with central fixation, landmarks helped localize the hidden object (lag was reduced by 0.8°), but they hindered localization of the hidden object when gaze moved freely (lag reaching 8.0° at 3.8 s). We discuss how eye movements help localize hidden objects and how, when gaze is constrained, attention uses landmarks to help guide tracking.
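The signed localization error underlying these lag figures follows directly from the quantities stated in the Method: the object's true position at the sound probe is its speed (4°/s) times the probe delay (1.3–3.8 s), and the error is the tap position minus that true position, with negative values indicating the undershoot (lag) reported above. A minimal illustrative sketch of this arithmetic, assuming a straight path starting at 0° and made-up tap positions (this is not the authors' analysis code):

```python
SPEED_DEG_PER_S = 4.0  # object speed stated in the Method


def object_position(start_deg: float, probe_delay_s: float) -> float:
    """True position (deg along the path) of the object at the sound probe."""
    return start_deg + SPEED_DEG_PER_S * probe_delay_s


def localization_error(tap_deg: float, start_deg: float, probe_delay_s: float) -> float:
    """Signed tap error in deg; negative = undershoot (tap lags the object)."""
    return tap_deg - object_position(start_deg, probe_delay_s)


# Hypothetical occlusion trial: probe at 3.8 s, tap lagging the target by 4.5 deg.
true_pos = object_position(0.0, 3.8)   # about 15.2 deg along the path
err = localization_error(true_pos - 4.5, 0.0, 3.8)
print(true_pos, err)
```

The sign convention (negative for lag) matches the abstract's description of taps falling behind the occluded object.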

Meeting abstract presented at VSS 2013

