Vision Sciences Society Annual Meeting Abstract | August 2017
Temporal dynamics of perceiving scene-relative object motion during self-motion from optic flow
Author Affiliations
  • Long Ni
    Center for Neural Science, New York University, New York, USA
  • Li Li
    Neural Science Program, New York University Shanghai, Shanghai, PR China
    Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR
Journal of Vision August 2017, Vol.17, 426. https://doi.org/10.1167/17.10.426
Abstract

The visual system can recover scene-relative object motion by removing the self-motion component from the object's retinal motion, a process called flow parsing. Little is known, however, about the temporal dynamics of flow parsing. Here we addressed this question by examining how flow parsing gain (i.e., the proportion of the self-motion component subtracted from the object's retinal motion) is modulated by exposure time to optic flow. A stereo display simulated forward self-motion at 0.30 m/s through a cloud of 58 red wireframe objects (depth range: 0.34 m) with a yellow dot probe (diameter: 0.0025°) moving vertically in the scene. In the full-field condition, objects appeared across the entire image plane (56° × 33°), providing both global and local motion information around the probe. In the hemi-field condition, objects were placed on the side of the image plane opposite the probe, removing local motion information around the probe. Five display durations (100 ms, 200 ms, 400 ms, 700 ms, and 1000 ms) were tested. For each display duration, the midpoint of the interval during which the probe was visible corresponded to the same time point (900 ms) of a 1000-ms display, ensuring the same depth, eccentricity, and self-motion component of the probe in the cloud. A self-motion component was added to the probe's retinal motion and varied with an adaptive staircase to determine the point at which the probe was perceived to move vertically in the scene. Across 11 participants, flow parsing gain remained unchanged across the five display durations in the full-field condition but decreased with duration in the hemi-field condition; our control experiment showed that this decrease was not due to bias in perceived heading. We conclude that both global and local motion affect the temporal dynamics of flow parsing. Flow parsing is a fast process (≤100 ms), and its accuracy appears to decrease with increased exposure to global flow.
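
The gain measurement lends itself to a short worked sketch. The Python snippet below (a minimal sketch, not the authors' code) illustrates the nulling logic: if flow parsing subtracts a proportion g of the self-motion component, then a probe that moves vertically in the scene and has a fraction a of that component added to its retinal motion looks vertical exactly when a = g, so an adaptive staircase on a converges on the gain. The 1-up/1-down rule, the simulated observer (TRUE_GAIN, perceived_residual), and all parameter values are illustrative assumptions; the abstract specifies only that an adaptive staircase was used.

```python
import numpy as np

# Hypothetical illustration: flow parsing is modelled as
# perceived scene motion = retinal motion - g * self-motion component,
# where g is the flow parsing gain. Adding a fraction a of the
# self-motion component to a vertically moving probe leaves a perceived
# residual proportional to (a - g), so the probe looks vertical when
# a = g and a nulling staircase on a estimates g.

rng = np.random.default_rng(0)
TRUE_GAIN = 0.8  # assumed gain of the simulated observer


def perceived_residual(a, noise_sd=0.05):
    """Signed residual of the probe's perceived non-vertical motion:
    (a - g) plus trial-to-trial judgement noise."""
    return a - TRUE_GAIN + rng.normal(0.0, noise_sd)


# Simple 1-up/1-down staircase (one of many possible adaptive rules;
# the abstract does not say which staircase the authors used).
a, step = 0.5, 0.1
history = []
for trial in range(60):
    if perceived_residual(a) > 0:
        a -= step  # probe drifts with the added component: add less
    else:
        a += step  # probe drifts against it: add more
    if trial in (19, 39):
        step /= 2  # halve the step as the staircase settles
    history.append(a)

gain_estimate = float(np.mean(history[-20:]))  # mean of late trials
print(f"estimated flow parsing gain ~ {gain_estimate:.2f}")
```

With these assumed settings the staircase oscillates around the point of subjective verticality, so the mean of the late trial levels recovers the simulated gain of 0.8.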

Meeting abstract presented at VSS 2017
