Vision Sciences Society Annual Meeting Abstract | August 2012
Effect of Occlusion and Landmarks on Single Object Tracking During Disrupted Viewing
Author Affiliations
  • Meriam Naqvi
    Center for Cognitive Sciences, Rutgers University
  • Kevin Zish
    Center for Cognitive Sciences, Rutgers University
  • Ronald Planer
    Center for Cognitive Sciences, Rutgers University
  • Deborah Aks
    Center for Cognitive Sciences, Rutgers University
  • Zenon Pylyshyn
    Center for Cognitive Sciences, Rutgers University
Journal of Vision August 2012, Vol. 12, 550. https://doi.org/10.1167/12.9.550
Abstract

The ability to extrapolate the motion of objects moving along straight paths at a fixed velocity when they pass behind an occluding surface has been shown to be poor, but to improve when the surface has landmarks (Pylyshyn & Cohen, ARVO 1999) [i]. Here we extend this study by having observers tap on the screen to indicate where they believed the moving object was when it was occluded vs. when visible, and when there were landmarks along the route. We also recorded eye movements to investigate whether gaze may play a role in imagined motion-tracking. Method. Forty participants tracked a square moving from left-to-right on a display screen, and twenty tracked the square moving right-to-left (during ~20 - 5 second trials for 2-occlusion and 2-landmark conditions). Subjects selected the position of the tracked (but hidden) object with their finger when signaled by a randomly presented sound probe. Results. Localization is most accurate when the object is always visible. But when it is occluded, gaze and touch localization undershoot the actual target position regardless of movement direction or landmark presence. Response lag is greater for gaze, except when the probe occurs early (<1.9 s) or when subjects are familiar with the motion path (e.g., when a block of occluding trials precedes non-occluding trials). We will discuss how this lag bias may reflect coding of object position in the eye-movement system and guide imagined localization and tracking accuracy. Furthermore, we will describe an unexpected finding [ii], suggesting that our eyes may serve as a "place-holder" to maintain the position of tracked, non-visible objects.
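To make the dependent measure concrete, the sketch below shows one way the undershoot ("lag bias") of a touch or gaze response could be quantified relative to the target's true position at probe onset. This is a minimal illustration, not the authors' analysis code; all function names, coordinates, and numbers are hypothetical, and constant horizontal velocity is assumed.

```python
# Illustrative sketch only: quantify signed localization error along the motion path.
# All names and values are hypothetical, not taken from the study.

def target_x_at(t_probe, start_x, velocity_x):
    """Target's true horizontal position at probe onset, assuming constant velocity."""
    return start_x + velocity_x * t_probe

def signed_error_along_path(true_x, response_x, direction):
    """Signed error along the motion path.

    direction: +1 for left-to-right motion, -1 for right-to-left.
    Negative values indicate an undershoot (the response trails the target).
    """
    return direction * (response_x - true_x)

# Hypothetical trial: square starts at x = 100 px, moves rightward at 200 px/s,
# the sound probe occurs 1.5 s into the trial, and the participant taps at x = 360 px.
true_x = target_x_at(t_probe=1.5, start_x=100.0, velocity_x=200.0)            # 400 px
touch_error = signed_error_along_path(true_x, response_x=360.0, direction=+1)  # -40 px
print(f"true position: {true_x:.0f} px, touch error: {touch_error:+.0f} px (undershoot)")
```

The same error function could be applied to the gaze position sampled at the response time, allowing touch and gaze lag to be compared on a common axis.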

[i] Pylyshyn, Z. W., & Cohen, J. (1999, May). Imagined extrapolation of uniform motion is not continuous. Paper presented at the Annual Conference of the Association for Research in Vision and Ophthalmology, Ft. Lauderdale, FL.

[ii] Immediately following the sound probe, the eyes tend to stay in approximately the same position until the response is made.

Meeting abstract presented at VSS 2012
