Abstract
We live in a three-dimensional world, but most studies of human visual cortex focus on 2D visual representations. In a recent study, we revealed that visual cortex gradually transitions from 2D-dominant representations to balanced 3D (2D plus depth) representations along the visual hierarchy, with position-in-depth information decoded along with 2D spatial information in a number of intermediate to higher-level visual areas, including V3A, V3B, V7, MT, and LOC (Finlayson, Zhang, & Golomb, forthcoming). But what is the nature of these position-in-depth representations? Do these regions contain topographic maps of depth in addition to 2D retinotopic maps? To explore this question, we developed two novel "depth-otopic" mapping paradigms, modifying traditional 2D phase-encoded (ring/wedge: Engel et al., 1994; Sereno et al., 1995) and population receptive field modeling (pRF: Dumoulin & Wandell, 2008) techniques. Subjects viewed 3D stimuli in the scanner while wearing red/green anaglyph glasses. Full-field random dot motion stimuli were presented in gradually shifting cycles through depth (phase-encoded experiment) and in sequences of discrete depth positions (pRF experiment). We estimated each voxel's preferred position-in-depth and modeled its tuning function. Within regions sensitive to depth information, voxels exhibiting similar position-in-depth preferences and tuning functions were clustered together. Interestingly, the most strongly tuned voxels tended to prefer near (front) depths. Broader tuning was found in early visual cortex, where position-in-depth could be decoded less reliably. Depth preference patterns were highly reliable within individuals, but exhibited substantial variability across subjects. The results suggest that depth-selective voxels are not randomly distributed, yet do not appear to form a strict map-like organization akin to 2D retinotopic maps.
Meeting abstract presented at VSS 2017
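The pRF-style estimation described in the abstract (fitting each voxel's preferred position-in-depth and tuning width) can be sketched as follows. This is a minimal illustration only, assuming a Gaussian tuning curve over depth and a simple grid search; the function names, grids, and simulated data are hypothetical and do not reproduce the study's actual analysis.

```python
# Hypothetical sketch of depth-pRF estimation (illustrative, not the authors' code).
# Assumption: each voxel's response to a stimulus at a given position-in-depth
# follows a Gaussian tuning curve with preferred depth mu and width sigma.
import numpy as np

def gaussian_tuning(depths, mu, sigma):
    """Predicted response of a voxel with preferred depth mu and tuning width sigma."""
    return np.exp(-0.5 * ((depths - mu) / sigma) ** 2)

def fit_depth_prf(depths, responses, mu_grid, sigma_grid):
    """Grid-search the (mu, sigma) pair whose predicted responses best
    correlate with the measured voxel responses."""
    best_mu, best_sigma, best_r = None, None, -np.inf
    for mu in mu_grid:
        for sigma in sigma_grid:
            pred = gaussian_tuning(depths, mu, sigma)
            r = np.corrcoef(pred, responses)[0, 1]
            if r > best_r:
                best_mu, best_sigma, best_r = mu, sigma, r
    return best_mu, best_sigma, best_r

# Simulate one voxel preferring a near depth (-0.5, arbitrary units) with noise.
depths = np.linspace(-1.0, 1.0, 40)      # stimulus positions-in-depth (near to far)
rng = np.random.default_rng(0)
responses = gaussian_tuning(depths, -0.5, 0.3) + 0.05 * rng.standard_normal(40)

mu_hat, sigma_hat, r = fit_depth_prf(depths, responses,
                                     mu_grid=np.linspace(-1.0, 1.0, 41),
                                     sigma_grid=np.linspace(0.1, 1.0, 10))
```

In this toy example, a voxel's depth preference (`mu_hat`) and tuning width (`sigma_hat`) are recovered from its noisy responses; applied voxel-wise, such estimates could then be examined for spatial clustering, as the abstract reports.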