Vision Sciences Society Annual Meeting Abstract  |   September 2011
Eye-movement dynamics of object-tracking
Author Affiliations
  • Omar Elfanagely
    Rutgers Department of Biological Sciences
    Rutgers Department of Psychology
    Rutgers Center for Cognitive Sciences (RuCCs)
  • Harry Haladjian
    Rutgers Department of Biological Sciences
    Rutgers Center for Cognitive Sciences (RuCCs)
  • Deborah Aks
    Rutgers Center for Cognitive Sciences (RuCCs)
  • Hristiyan Kourtev
    Rutgers Center for Cognitive Sciences (RuCCs)
  • Zenon Pylyshyn
    Rutgers Department of Biological Sciences
    Rutgers Center for Cognitive Sciences (RuCCs)
Journal of Vision September 2011, Vol.11, 280. doi:https://doi.org/10.1167/11.11.280
Omar Elfanagely, Harry Haladjian, Deborah Aks, Hristiyan Kourtev, Zenon Pylyshyn; Eye-movement dynamics of object-tracking. Journal of Vision 2011;11(11):280. https://doi.org/10.1167/11.11.280.
Abstract

Tracking requires maintaining a link to individual objects as they move around. There is no need to maintain a record of object position over time; all that is needed is a connection, or index, to target items as they move (Pylyshyn, 2004). Yet how well we maintain these links is undoubtedly reflected in tracking behaviors. Both the time course and the pattern of eye-scanning used in multiple object tracking (MOT) may help us understand how humans track objects. By analyzing MOT dynamics, we explore why tracking improves when objects halt during their disappearance (Keane & Pylyshyn, 2006), and how the visual system maintains a memory of prior object positions. We use the MOT task described in Alley et al. (2011) and a “gaze-to-item” analysis measuring the relative distance between eye positions and each of 8 changing item positions (4 of which are tracked targets). We also use Recurrence Quantification Analysis (RQA) to determine whether recurring eye-movement patterns play a role (Webber & Zbilut, 1994). How smooth and repetitive are gaze paths? Fehd and Seiffert (2008) report that gaze follows the center of a group of targets, and that this “centroid” strategy reflects tracking a global object formed by grouping. This predicts that such a center-looking strategy should be smooth, since the centroid moves with the average instantaneous position of the independently moving objects. However, among the gaze dynamics we found, one surprising result is the pervasiveness of switching gaze across items. Such frequent switching occurs spontaneously and under crowding conditions, and is consistent with the alternative indexing account, in which individuated objects are tracked separately. By focusing only on aggregated positions, we may be masking important dynamics. Perhaps most significant are recursive scan paths, of which switching behavior is a critical component; this may reflect iterative coding of sequences of prior object positions.
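
The gaze-to-item measure and the use of RQA can be illustrated with a short sketch. The Python code below is a minimal illustration only, not the authors' analysis pipeline: the function names, the synthetic trajectories, and the fixed 50-pixel recurrence radius are assumptions, and full RQA (Webber & Zbilut, 1994) computes further measures (e.g., determinism, laminarity) beyond the simple recurrence matrix shown here.

```python
import numpy as np

# Hypothetical helper names; the actual analysis code is not part of the abstract.

def gaze_to_item_distances(gaze_xy, item_xy):
    """Euclidean distance from each gaze sample to each item.

    gaze_xy : (T, 2) gaze coordinates per time sample.
    item_xy : (T, n_items, 2) item coordinates per time sample.
    Returns a (T, n_items) distance array.
    """
    return np.linalg.norm(item_xy - gaze_xy[:, None, :], axis=-1)

def recurrence_matrix(gaze_xy, radius):
    """Binary recurrence matrix: True where the gaze positions at
    samples i and j lie within `radius` pixels of each other."""
    pairwise = np.linalg.norm(gaze_xy[:, None, :] - gaze_xy[None, :, :], axis=-1)
    return pairwise < radius

# Synthetic demo: 8 items (4 would be targets) and a drifting gaze trace.
rng = np.random.default_rng(0)
T, n_items = 600, 8
gaze = np.cumsum(rng.normal(0, 3, size=(T, 2)), axis=0) + 400
items = rng.uniform(0, 800, size=(T, n_items, 2))

dists = gaze_to_item_distances(gaze, items)          # (600, 8)
nearest = dists.argmin(axis=1)                       # item currently closest to gaze
n_switches = np.count_nonzero(np.diff(nearest))      # gaze "switching" events

R = recurrence_matrix(gaze, radius=50.0)
recurrence_rate = R[np.triu_indices(T, k=1)].mean()  # fraction of recurrent pairs

print(n_switches, recurrence_rate)
```

In this toy setup, the count of changes in the nearest item is a crude proxy for the gaze-switching behavior described above, and the recurrence rate is the most basic RQA measure of how often gaze revisits earlier positions.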
