September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
"Depth-otopic" mapping of human visual cortex
Author Affiliations
  • Julie Golomb
    Department of Psychology, The Ohio State University
  • Daniel Berman
    Department of Psychology, The Ohio State University
  • Nonie Finlayson
    Department of Psychology, The Ohio State University
    Department of Experimental Psychology, University College London
Journal of Vision August 2017, Vol.17, 586.
Julie Golomb, Daniel Berman, Nonie Finlayson; "Depth-otopic" mapping of human visual cortex. Journal of Vision 2017;17(10):586.

© ARVO (1962-2015); The Authors (2016-present)

We live in a three-dimensional world, but most studies of human visual cortex focus on 2D visual representations. In a recent study, we revealed that visual cortex gradually transitions from 2D-dominant representations to balanced 3D (2D plus depth) representations along the visual hierarchy, with position-in-depth information decoded along with 2D spatial information in a number of intermediate- to higher-level visual areas, including V3A, V3B, V7, MT, and LOC (Finlayson, Zhang, & Golomb, forthcoming). But what is the nature of these position-in-depth representations? Do these regions contain topographic maps of depth in addition to 2D retinotopic maps? To explore this question, we developed two novel "depth-otopic" mapping paradigms, modifying traditional 2D phase-encoded (ring/wedge: Engel et al., 1994; Sereno et al., 1995) and population receptive field modeling (pRF: Dumoulin & Wandell, 2008) techniques. Subjects viewed 3D stimuli in the scanner while wearing red/green anaglyph glasses. Full-field random-dot motion stimuli were presented at positions-in-depth that varied in gradually shifting cycles (phase-encoded experiment) or in sequences (pRF experiment). We estimated each voxel's preferred position-in-depth and modeled its tuning function. Within regions sensitive to depth information, voxels exhibiting similar position-in-depth preferences and tuning functions clustered together. Interestingly, the most strongly tuned voxels tended to prefer near (front) depths. Tuning was broader in early visual cortex, where position-in-depth could be decoded less reliably. Depth-preference patterns were highly reliable within individuals but varied substantially across subjects. These results suggest that depth-selective voxels are not randomly distributed, yet do not form a strict map-like organization akin to 2D retinotopic maps.
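As an aside for readers unfamiliar with phase-encoded mapping, the sketch below illustrates the general idea behind recovering a voxel's preferred position along a cyclically swept stimulus dimension (here depth, by analogy with the angle/eccentricity sweeps of Engel et al., 1994): the preferred position corresponds to the phase of the voxel's response at the stimulus cycling frequency. This is a minimal, hypothetical simulation, not the authors' actual analysis pipeline; all parameter values are assumptions for illustration.

```python
import numpy as np

# Hypothetical phase-encoded analysis sketch: a stimulus sweeps
# cyclically through the depth range, and a voxel responds most when
# the sweep passes its preferred depth. The preferred depth maps onto
# the response phase at the stimulus cycling frequency.

n_vols = 240          # number of fMRI volumes (assumed)
n_cycles = 8          # full sweeps through the depth range (assumed)
true_phase = 1.2      # voxel's preferred phase in radians (assumed)

t = np.arange(n_vols)
stim_freq = n_cycles / n_vols  # cycles per volume

# Simulated voxel time course: peaks once per sweep, at the preferred
# depth, plus Gaussian noise.
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * stim_freq * t - true_phase)
ts = signal + 0.5 * rng.standard_normal(n_vols)

# Recover the preferred phase from the Fourier component at the
# stimulus frequency; its angle estimates the voxel's preferred depth
# position within the sweep cycle.
component = np.sum(ts * np.exp(2j * np.pi * stim_freq * t))
est_phase = np.angle(component) % (2 * np.pi)
```

In a pRF-style analysis, one would instead fit an explicit tuning function (e.g., a Gaussian over depth) to each voxel's responses, yielding both a preferred depth and a tuning width.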

Meeting abstract presented at VSS 2017
