Vision Sciences Society Annual Meeting Abstract  |  September 2019  |  Volume 19, Issue 10  |  Open Access
Temporal dynamics of heading perception and identification of scene-relative object motion from optic flow
Author Affiliations & Notes
  • Li Li
    Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
  • Mingyang Xie
    Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
    Institute of Cognitive Neuroscience, East China Normal University, Shanghai, PRC
Journal of Vision September 2019, Vol.19, 236c. doi:https://doi.org/10.1167/19.10.236c
Abstract

During self-motion, the visual system can perceive the direction of self-motion (heading) and identify scene-relative object motion from optic flow (flow parsing). However, little is known about the temporal dynamics of heading perception and flow parsing. Here we addressed this question by examining how the accuracy of heading perception and flow parsing changes with exposure time to optic flow. A stereo display simulated forward translation at 0.3 m/s through a cloud of 58 red wireframe objects (depth: 0.69–1.03 m) placed on one side of the image plane (56°×33°). Five display durations (100 ms, 200 ms, 400 ms, 700 ms, & 1000 ms) were tested. For heading perception, on each trial, heading was randomly chosen between −10° (left) and 10° (right). Participants indicated their perceived heading with a mouse-controlled probe at the end of the trial. For flow parsing, on each trial, heading was fixed at 0° and a yellow dot probe (diameter: 0.25°; depth: 0.86 m) moved vertically in the scene for 100 ms. Objects were placed on the opposite side of the image plane from the probe to remove local motion cues around the probe. The speed (2°/s) and eccentricity (4°) at the midpoint of the probe’s motion were equated across display durations. A nulling motion component was added to the probe’s motion, and an adaptive staircase determined when participants perceived the probe to move vertically in the scene. The magnitude of this added component was used to compute the accuracy of flow parsing. Across 12 participants, the accuracy of heading perception increased with exposure time to optic flow, whereas the accuracy of flow parsing decreased. These opposite temporal trends suggest that although heading perception and flow parsing both rely on optic flow, they involve separate neural substrates that compete for the same limited attentional resources.
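As a concrete illustration of the nulling procedure described above, the sketch below shows one way a 1-up/1-down adaptive staircase could adjust the nulling component until the probe appears to move vertically in the scene. The function name, parameter values, step size, and stopping rule are illustrative assumptions, not the authors’ implementation.

```python
# A minimal sketch (assumed, not the authors' code) of a 1-up/1-down adaptive
# staircase that adjusts the magnitude of a nulling motion component added to
# the probe until the observer perceives the probe as moving vertically in
# the scene. All parameter values and the stopping rule are assumptions.

def estimate_nulling_component(get_response, start=1.0, step=0.1, n_reversals=8):
    """Estimate the nulling component (deg/s) at which the probe appears
    to move vertically.

    get_response(null) -> True if the observer still perceives residual drift
    in the direction being nulled (i.e., `null` is too small), False otherwise.
    """
    null = start
    prev_direction = 0
    reversals = []
    while len(reversals) < n_reversals:
        direction = +1 if get_response(null) else -1   # raise or lower the null
        if prev_direction and direction != prev_direction:
            reversals.append(null)                      # record a reversal point
        prev_direction = direction
        null = max(0.0, null + direction * step)
    # The mean of the reversal values estimates the point of subjective
    # vertical motion; this added component indexes flow-parsing accuracy.
    return sum(reversals) / len(reversals)


# Example with a simulated observer whose true nulling point is 0.6 deg/s:
# the staircase oscillates around that value and the estimate converges.
if __name__ == "__main__":
    true_null = 0.6
    print(estimate_nulling_component(lambda null: null < true_null))
```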

Acknowledgement: Supported by research grants from the National Natural Science Foundation of China (31741061), Shanghai Science and Technology Committee (15DZ2270400, 17ZR1420100), and NYU-ECNU Joint Research Institute at NYU Shanghai 