September 2018, Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
3D motion direction estimation – Model predictions and data
Author Affiliations
  • Kathryn Bonnen
    Institute for Neuroscience, University of Texas at Austin; Center for Perceptual Systems, University of Texas at Austin
  • Thaddeus Czuba
    Department of Psychology, University of Texas at Austin; Center for Perceptual Systems, University of Texas at Austin
  • Jake Whritner
    Department of Psychology, University of Texas at Austin; Center for Perceptual Systems, University of Texas at Austin
  • Austin Kuo
    Center for Perceptual Systems, University of Texas at Austin
  • Alexander Huk
    Institute for Neuroscience, University of Texas at Austin; Department of Psychology, University of Texas at Austin
  • Lawrence Cormack
    Institute for Neuroscience, University of Texas at Austin; Department of Psychology, University of Texas at Austin
Journal of Vision September 2018, Vol.18, 130. doi:https://doi.org/10.1167/18.10.130
Abstract

We have recently developed a neural model for coding 3D motion direction in primate area MT. By incorporating the geometry of retinal projection, it encodes motion direction with a bank of strikingly non-Gaussian tuning functions. The model makes surprising predictions about how performance should change as a function of stimulus location (i.e., across viewing distance and eccentricity). In this work, we used a motion direction estimation task to test these predictions. We manipulated viewing distance (20 cm, 31 cm, or 67 cm) across blocks of trials. In order to manipulate viewing distance precisely at such short distances, we built a rear-projection system mounted on rails (ProPixx 3D projector; Screen Tech ST-PRO-DCF) that can be easily adjusted for viewing distances from 20 cm to 270 cm with a head-fixed subject. During each trial (1 s), a spherical volume of low-contrast light and dark dot stimuli was rendered with full stereoscopic cues (disparity, expansion, and size-change), moving at one of three speeds (5 cm/s, 7.75 cm/s, or 16.75 cm/s). The stimulus volume was three-dimensionally scaled for each viewing distance to maintain a consistent 5° visual angle (1.75 cm, 2.70 cm, and 5.85 cm diameter, respectively). Subjects reported the perceived 3D direction of motion using a physical knob to adjust the angle of a stereoscopic response arrow also rendered in the virtual 3D space. Direction estimation error varied sinusoidally as a function of motion direction, consistent with a frontoparallel motion bias. Crucially, and as predicted by the model, subjects often confused the sign of the z-axis (depth) component of the 3D motion direction, and this effect increased with viewing distance. Taken together, these results support the notion that 3D motion perception performance depends on motion direction, viewing distance, and environmental speed, as predicted by our model of encoding in MT.
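
For readers who want to check the stimulus scaling, a minimal sketch of the visual-angle geometry is below. It assumes only the standard relation diameter = 2 · d · tan(θ/2); the function name and the script itself are illustrative and are not the authors' stimulus code. The reported diameters (1.75 cm, 2.70 cm, 5.85 cm) agree with this geometry to within rounding.

```python
import math

def diameter_for_visual_angle(viewing_distance_cm, angle_deg=5.0):
    """Physical diameter (cm) that subtends angle_deg at viewing_distance_cm.

    Uses the standard relation: diameter = 2 * d * tan(theta / 2).
    (Hypothetical helper for illustration; not from the authors' code.)
    """
    return 2.0 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2.0)

# Viewing distances used in the study
for d in (20.0, 31.0, 67.0):
    print(f"{d:5.1f} cm -> {diameter_for_visual_angle(d):.2f} cm diameter")
# Prints approximately 1.75, 2.71, and 5.85 cm, matching the abstract's
# reported stimulus diameters to within rounding.
```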

Meeting abstract presented at VSS 2018
