Abstract
The ability to recognize the location of objects in three-dimensional space is a key component of the human visual system, supporting complex behaviors such as navigating the environment and guiding eye and limb movements. The distance of an object from an observer, or its depth, is determined at least in part from binocular disparity information, which is represented within several visual areas including V3A (Goncalves, 2015). However, the role of these areas in representing spatial locations in 3D space has not yet been determined. Here, we analyzed BOLD fMRI activation patterns to determine how various retinotopically defined visual areas represent the location of a stimulus in three-dimensional space. During imaging, participants viewed 3D spheres composed of multicolored flickering dots positioned at various locations in a horizontal plane, with binocular disparity goggles generating an illusion of depth. Based on multivariate voxel activation patterns in areas V3A and IPS0, a linear classifier categorized the depth position of a stimulus with above-chance accuracy, whereas activation patterns in V1 reliably supported classification of only the horizontal position. Furthermore, using an image reconstruction technique (inverted encoding model; see Sprague & Serences, 2013), we successfully reconstructed an image of the viewed stimulus from region-wide voxel activation patterns in V3A and IPS0. In contrast, early visual areas did not appear to represent information about the depth position of a stimulus but did carry information about horizontal position. These findings demonstrate that several visual areas contain representations of 3D spatial locations, including depth, and may provide insight into the hierarchy of spatial location encoding in the human visual system.
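The classification analysis described above can be sketched roughly as follows. This is an illustrative toy example using simulated data and scikit-learn, not the study's actual pipeline: the voxel counts, noise levels, and classifier choice are assumptions made for demonstration.

```python
# Hypothetical sketch: decoding stimulus position from simulated voxel
# activation patterns with a linear classifier. All data here are
# synthetic stand-ins for BOLD responses, not real fMRI data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_positions = 120, 50, 3  # illustrative sizes

# Each stimulus position (e.g., a depth plane) evokes a distinct mean
# voxel pattern; trials add Gaussian noise around that mean.
means = rng.normal(0.0, 1.0, (n_positions, n_voxels))
labels = np.repeat(np.arange(n_positions), n_trials // n_positions)
patterns = means[labels] + rng.normal(0.0, 0.5, (n_trials, n_voxels))

# Cross-validated decoding accuracy; chance level is 1/3 here.
# Above-chance accuracy indicates the patterns carry position information.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(scores.mean())
```

In the study itself, the trial labels would correspond to depth or horizontal positions and the patterns to voxel responses within a retinotopically defined region such as V3A or V1; comparing decoding accuracy across regions is what distinguishes their representational content.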
Meeting abstract presented at VSS 2016