September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Dynamic visual localization with moving dot clouds
Author Affiliations
  • Shannon Locke
    Dept. of Psychology, New York University, New York
  • Michael Landy
    Dept. of Psychology, New York University, New York
    Center for Neural Science, New York University, New York
  • Pascal Mamassian
    Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France
    Département d'Études Cognitives, École Normale Supérieure, Paris, France
  • Eero Simoncelli
    Dept. of Psychology, New York University, New York
    Center for Neural Science, New York University, New York
Journal of Vision August 2017, Vol.17, 1166.

Introduction: Humans and animals must often make sensory estimates in dynamic environments, where the feature or object of interest changes over time. This form of estimation is not well captured by traditional experiments based on binary-choice, reaction-time tasks and stimuli with fixed properties. Inspired by Bonnen et al. (2015), we explored dynamic perceptual estimation using a visuo-motor tracking task.

Methods: Participants used a computer mouse to track a target, the mean of a 2D circular Gaussian distribution, as it followed a horizontal random-walk trajectory. High-contrast white dots were drawn stochastically from the distribution every 17 ms. In the main task, the mean of the distribution was not displayed and had to be inferred from the dot cloud. A control task presented a red dot at the target location. After each 20-sec trial, points were awarded in inverse proportion to the RMSE between target and cursor positions.

Model: A Bayesian ideal-observer model was constructed for this task. The estimation component was a Kalman filter that determined the ideal temporal weighting function, assuming knowledge of the true variances of the sensory information and the random-walk trajectory. Plausible sensorimotor perturbations were added to the inputs/outputs of the Kalman filter, including temporal lags and additional noise.

Results/Discussion: Across participants, the average estimated tracking lag was 400 ms in the main task. When this temporal lag was included in the model, we found that participants tracked the trajectory of the invisible target in a manner indistinguishable from the ideal observer. In the control task, the estimated temporal lag was 330 ms, suggesting that spatial integration and decisional factors require around 70 ms. These results demonstrate that normal, healthy adults are capable of optimal estimation in complex, dynamic contexts that require spatial and temporal integration as well as learning statistical information about the environment.
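The core computation of the ideal observer described above can be illustrated with a minimal 1D sketch: a Kalman filter tracking a random-walk target from noisy observations, with RMSE scoring as in the trials. The variances `Q` and `R`, the step count, and the random seed below are illustrative choices, not the experiment's actual parameters, and the sketch omits the sensorimotor perturbations (lags, added noise) mentioned in the abstract.

```python
import random

random.seed(1)

Q = 1.0        # random-walk step variance (assumed known to the observer)
R = 4.0        # observation (dot-cloud) noise variance
n_steps = 1000

# Simulate the target's random-walk trajectory and noisy observations of it.
target = [0.0]
obs = [random.gauss(target[0], R ** 0.5)]
for _ in range(n_steps - 1):
    target.append(target[-1] + random.gauss(0.0, Q ** 0.5))
    obs.append(random.gauss(target[-1], R ** 0.5))

# Kalman filter: predict (uncertainty grows by Q), then update with gain K.
x_hat, P = 0.0, 1.0
estimates = []
for z in obs:
    P += Q                      # predict step: random walk adds Q to variance
    K = P / (P + R)             # Kalman gain
    x_hat += K * (z - x_hat)    # update estimate toward the observation
    P *= 1.0 - K                # posterior variance
    estimates.append(x_hat)

# RMSE between estimate and true target, as in the trial scoring.
rmse_est = (sum((e - t) ** 2 for e, t in zip(estimates, target)) / n_steps) ** 0.5
rmse_raw = (sum((z - t) ** 2 for z, t in zip(obs, target)) / n_steps) ** 0.5
print(f"filter RMSE: {rmse_est:.2f}  raw-observation RMSE: {rmse_raw:.2f}")
```

Because the gain converges to a steady-state value, the filter effectively applies an exponentially decaying weighting to past observations; this is the "ideal temporal weighting function" that participants' tracking was compared against.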

Meeting abstract presented at VSS 2017

