August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Reconstructing 3D stimuli using BOLD activation patterns recovers hierarchical depth processing in human visual and parietal cortex
Author Affiliations
  • Margaret Henderson
    University of California, San Diego
  • Chaipat Chunharas
    University of California, San Diego
  • Vy Vo
    University of California, San Diego
  • Thomas Sprague
    University of California, San Diego
  • John Serences
    University of California, San Diego
Journal of Vision September 2016, Vol.16, 298. doi:10.1167/16.12.298
© 2017 Association for Research in Vision and Ophthalmology.
Abstract

The ability to recognize the location of objects in three-dimensional space is a key component of human vision, supporting complex behaviors such as navigating the environment and guiding eye and limb movements. The distance of an object from an observer, or object depth, is determined at least in part by binocular disparity information, which is represented within several visual areas, including V3A (Goncalves, 2015). However, the role of these areas in representing spatial locations in 3D space has not yet been determined. Here, we analyzed BOLD fMRI activation patterns to determine how various retinotopically defined visual areas represent the location of a stimulus in three-dimensional space. During imaging, participants viewed 3D spheres composed of multicolored flickering dots positioned at various locations in a horizontal plane, using binocular disparity goggles to generate an illusion of depth. Based on multivariate voxel activation patterns in areas V3A and IPS0, a linear classifier was able to categorize the depth position of a stimulus with above-chance accuracy, whereas activation patterns in V1 reliably supported classification of horizontal position only. Furthermore, using an image reconstruction technique (the inverted encoding model; see Sprague & Serences, 2013), we were able to reconstruct an image of the viewed stimulus from region-wide voxel activation patterns in V3A and IPS0. In contrast, early visual areas did not appear to represent the depth position of a stimulus but did carry information about horizontal position. These findings demonstrate that several visual areas contain representations of 3D spatial locations, including depth, and may provide insight into the hierarchy of spatial location encoding in the human visual system.
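The inverted encoding model referenced above (Sprague & Serences, 2013) has two stages: first, estimate each voxel's weights on a set of basis "channels" tiling stimulus space from training data; second, invert those weights to reconstruct channel responses, and hence the stimulus, from held-out activation patterns. The following is a minimal sketch of that general approach, not the authors' actual analysis code; all function names, the Gaussian channel shape, and the parameter values are illustrative assumptions.

```python
import numpy as np

def make_channels(positions, centers, fwhm=2.0):
    """Gaussian basis functions tiling stimulus space.

    positions : (n_trials,) stimulus positions
    centers   : (n_channels,) channel centers
    Returns a (n_trials, n_channels) design matrix of channel responses.
    (Gaussian tuning and fwhm=2.0 are illustrative choices.)
    """
    sigma = fwhm / 2.355  # convert FWHM to standard deviation
    d = positions[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2)

def train_iem(B_train, C_train):
    """Stage 1: least-squares channel weights per voxel.

    Solves B_train ≈ C_train @ W.T for W (n_voxels, n_channels),
    where B_train is (n_trials, n_voxels) BOLD data.
    """
    W_T, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)
    return W_T.T

def invert_iem(B_test, W):
    """Stage 2: reconstruct channel responses from new voxel patterns.

    Solves W @ C_hat.T ≈ B_test.T; returns (n_trials, n_channels)
    estimated channel responses, which can be rendered as an image
    of the stimulus by weighting each channel's basis function.
    """
    C_hat_T, *_ = np.linalg.lstsq(W, B_test.T, rcond=None)
    return C_hat_T.T
```

On simulated data, a reconstruction from `invert_iem` peaks at the channel nearest the true stimulus position; in the study, the analogous reconstructions from V3A and IPS0 patterns recovered stimulus position in depth as well as in the horizontal plane.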

Meeting abstract presented at VSS 2016
