August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Single-trial fMRI decoding of 3D motion based on stereoscopic and perspective cues
Author Affiliations & Notes
  • Puti Wen
    New York University Abu Dhabi
  • Michael Landy
    New York University
  • Bas Rokers
    New York University
    New York University Abu Dhabi
  • Footnotes
    Acknowledgements  Aspire Precision Medicine Virtual Research Institute (BR), EY08266 (MSL)
Journal of Vision August 2023, Vol.23, 5740. doi:

      Puti Wen, Michael Landy, Bas Rokers; Single-trial fMRI decoding of 3D motion based on stereoscopic and perspective cues. Journal of Vision 2023;23(9):5740.

      © ARVO (1962-2015); The Authors (2016-present)

The visual system exploits multiple cues to estimate 3D motion, including binocular motion cues (changing disparity and inter-ocular velocity differences) as well as monocular perspective cues (looming and optic flow). We sought to understand where in cortex these cues are represented and integrated. To do this, we measured brain activity using fMRI and attempted to decode 3D motion direction (toward vs. away) in multiple brain areas using stimuli that isolated individual cues. The stimuli were random dots moving directly toward or away from the observer. There were four conditions: binocular, monocular left eye, monocular right eye, and combined. In the binocular condition, dots moved in opposite horizontal directions in the two eyes. In the monocular conditions, dot fields expanded and contracted to signal motion-in-depth in either the left eye or the right eye alone. The combined condition contained both the binocular and monocular cues. We estimated BOLD response amplitudes using the GLMsingle toolbox and found three major clusters in which the BOLD response was well predicted by the stimulus sequence – in early visual areas V1-3, the motion complex MT+ (V4t, MT, MST, FST), and in the intraparietal sulcus (IPS0, VIP, and LIP). We then decoded 3D motion direction on a trial-by-trial basis using Random Forest and Support Vector Machine classifiers. Monocular cues produced the best decoding accuracy in V1, followed by IPS0 and MST. In the binocular condition, performance was worse in V1 than in V3A and FST. The combined cues yielded the best performance in V3A and the worst in V1. These results suggest that both perspective and stereoscopic cues contribute independently to 3D motion perception and highlight a network of regions in visual cortex involved in its computation.
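The decoding step described above (single-trial response amplitudes classified as toward vs. away with Random Forest and SVM) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the betas are simulated here in place of actual GLMsingle output, and all array shapes, classifier settings, and the cross-validation scheme are assumptions.

```python
# Hypothetical sketch of trial-wise 3D-motion decoding (toward vs. away).
# Single-trial response amplitudes (simulated here in place of GLMsingle
# betas for one ROI) are classified with cross-validated linear SVM and
# Random Forest, as in the abstract. Shapes and parameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50            # trials and voxels per ROI (assumed)
labels = rng.integers(0, 2, n_trials)   # 0 = away, 1 = toward

# Simulated single-trial betas: a weak direction-dependent pattern plus noise
pattern = rng.normal(0.0, 1.0, n_voxels)
betas = np.outer(labels - 0.5, pattern) + rng.normal(0.0, 2.0, (n_trials, n_voxels))

for name, clf in [("linear SVM", SVC(kernel="linear")),
                  ("Random Forest", RandomForestClassifier(n_estimators=100,
                                                           random_state=0))]:
    # 5-fold cross-validated decoding accuracy for this ROI
    acc = cross_val_score(clf, betas, labels, cv=5).mean()
    print(f"{name}: mean decoding accuracy = {acc:.2f}")
```

In an actual analysis, `betas` would be the GLMsingle amplitude estimates for the voxels of one ROI (e.g., V1, MT+, or IPS0), and the loop would repeat across ROIs and cue conditions to produce the accuracy comparisons reported in the abstract.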

