September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
Cortical representations of object motion trajectories in 3D space
Author Affiliations
  • Hiroshi Ban
Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan; Graduate School of Frontier Biosciences, Osaka University, Osaka, Japan
  • Yuji Ikegaya
Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan; Laboratory of Chemical Pharmacology, Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo, Japan
Journal of Vision September 2015, Vol.15, 1391. doi:10.1167/15.12.1391
Abstract

One of the fundamental challenges our visual system faces in tracking moving objects is identifying their dynamically changing locations in three-dimensional (3D) space reconstructed from two-dimensional (2D) retinal inputs. Binocular disparities, the differences between the retinal inputs of the left and right eyes, are known to play a key role in reconstructing 3D structure. However, even without binocular disparities, humans can perceive 3D from monocular (pictorial) depth cues alone. For instance, we can identify an object's 3D location from the shadow it casts on the floor. Despite recent advances in neuroscience, the cortical processing of such pictorial cues has remained unclear. Here, we investigated how and where in the brain pictorial cast-shadow cues are represented and converted to depth. To this end, we measured and compared fMRI responses (Siemens 3T MR, TR: 2000 ms, voxel size: 3x3x3 mm³, n=10 healthy adults) evoked by object motion trajectories in 3D defined by shadows cast on the floor (SHADOW) and by binocular disparities superimposed on the target motions (DISP). The object motions evoked strong retinotopic responses across the occipital cortex, but detailed analyses showed that only a few specific regions represented the trajectories in 3D. More specifically, a multi-voxel pattern transfer classification analysis (i.e., training the classifier on the SHADOW dataset while testing decoder performance on the DISP dataset) revealed that the 3D trajectories defined by SHADOW were translated into those defined by DISP in V1, especially in its retinotopic sub-region corresponding to the trajectories, as well as in the middle-temporal motion-sensitive area MT. These results indicate that V1 can represent a visual target's 3D location irrespective of depth cue type, contrary to the general assumption that V1 represents 2D space retinotopically.
The fine-scale 3D representations in V1 may help us interact with moving objects in dynamic visual environments.
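The transfer classification analysis can be sketched in miniature on synthetic data. This is a toy illustration, not the authors' pipeline: the abstract does not specify the classifier, so a simple nearest-centroid decoder is assumed, and the "voxel patterns" are fabricated so that a shared class signal survives a cue-specific baseline shift, mimicking training on SHADOW trials and testing on DISP trials.

```python
import random

random.seed(42)
DIM = 20  # hypothetical number of voxels per pattern

def pattern(cls, cue_offset):
    """One synthetic 'voxel pattern': a class-specific signal on half the
    voxels, plus a cue-specific baseline shift, plus Gaussian noise."""
    base = [2.0 if (d < DIM // 2) == (cls == 0) else 0.0 for d in range(DIM)]
    return [b + cue_offset + random.gauss(0, 1) for b in base]

def centroid(vecs):
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(DIM)]

def nearest_centroid(x, cents):
    dists = [sum((x[d] - c[d]) ** 2 for d in range(DIM)) for c in cents]
    return dists.index(min(dists))

# Train the decoder on SHADOW-cue patterns (cue baseline +1.0)...
train = {cls: [pattern(cls, cue_offset=1.0) for _ in range(40)] for cls in (0, 1)}
cents = [centroid(train[0]), centroid(train[1])]

# ...and test it on DISP-cue patterns (different baseline, -1.0).
test = [(pattern(cls, cue_offset=-1.0), cls) for cls in (0, 1) for _ in range(40)]
acc = sum(nearest_centroid(x, cents) == cls for x, cls in test) / len(test)
print(f"cross-cue decoding accuracy: {acc:.2f}")
```

Above-chance accuracy here (chance = 0.5 for two trajectory classes) plays the role of the abstract's finding: if a decoder trained on one cue generalizes to the other, the underlying representation is cue-invariant.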

Meeting abstract presented at VSS 2015
