Vision Sciences Society Annual Meeting Abstract  |   July 2013
Investigating the spatial precision of cortical feedback using fMRI
Author Affiliations
  • Lucy S. Petro, Fraser W. Smith, Victoria Shellia, Lars Muckli
    Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow
Journal of Vision July 2013, Vol.13, 1065. doi:10.1167/13.9.1065
Citation: Lucy S. Petro, Fraser W. Smith, Victoria Shellia, Lars Muckli; Investigating the spatial precision of cortical feedback using fMRI. Journal of Vision 2013;13(9):1065. doi: 10.1167/13.9.1065.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

The spatial precision of cortical feedback to V1 was explored using functional magnetic resonance imaging. Feedback may be local, predicting the image point by point, or global, predicting the gist. Images were presented to 12 subjects with the lower right quadrant occluded (Smith & Muckli, 2010), each at three levels of spatial shift (0, 2, or 8 degrees). Multivoxel patterns were extracted from the non-stimulated cortex (i.e., the region receiving feedback) and entered into a classifier analysis. We first classified between two images presented at the same level of spatial shift (e.g., between image 1 and image 2, both presented at 0 degrees), and second cross-classified between images presented at different degrees of spatial shift (e.g., training the classifier on images presented at 0 degrees and testing on images presented at 2 degrees). The first analysis revealed that the region of V1 representing the occluded quarter-field carries information that can discriminate the surrounding context: we were able to classify between two different images at 76% (p = 0.0001), 68% (p = 0.0060), and 86% (p = 0.0001) for presentations at 0, 2, and 8 degrees respectively (chance equals 50%). The second analysis revealed that this holds only up to a certain spatial shift. The "occluded" portion of V1 was able to discriminate the surrounding visual context across shifts of up to 2 degrees, as we were able to cross-classify between images at 0 and 2 degrees (64%, p = 0.0177), but not between 0 and 8 degrees (53%, p = 0.1883) or between 2 and 8 degrees (52%, p = 0.3190). Cross-classification performance dropped to chance level once the spatial shifts became too large. This relatively high level of precision is somewhat surprising given that feedback projections fan out anatomically, and may indicate a precise interaction between lateral connections and feedback projections.
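The cross-classification logic described above (train a classifier on multivoxel patterns from one shift condition, test it on patterns from another) can be sketched in code. The following is a minimal illustrative sketch on synthetic data using a simple nearest-centroid classifier; it is not the authors' actual MVPA pipeline, and the voxel count, trial counts, and noise level are all assumed for illustration.

```python
import random

random.seed(0)
N_VOXELS = 50   # assumed pattern dimensionality
N_TRIALS = 20   # assumed trials per image per condition

def make_pattern(base, noise=0.3):
    # One simulated multivoxel pattern: a base signature plus Gaussian noise
    return [b + random.gauss(0, noise) for b in base]

# Hypothetical voxel signatures for image 1 and image 2
sig1 = [random.gauss(0, 1) for _ in range(N_VOXELS)]
sig2 = [random.gauss(0, 1) for _ in range(N_VOXELS)]

# Training set: patterns from the 0-degree condition
train = [(make_pattern(sig1), 1) for _ in range(N_TRIALS)] + \
        [(make_pattern(sig2), 2) for _ in range(N_TRIALS)]

# Test set: patterns from the 2-degree condition (reusing the same
# signatures here stands in for feedback that generalises across a
# small spatial shift)
test = [(make_pattern(sig1), 1) for _ in range(N_TRIALS)] + \
       [(make_pattern(sig2), 2) for _ in range(N_TRIALS)]

def centroid(patterns):
    # Voxel-wise mean of a set of patterns
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def sq_dist(a, b):
    # Squared Euclidean distance between two patterns
    return sum((x - y) ** 2 for x, y in zip(a, b))

c1 = centroid([p for p, lbl in train if lbl == 1])
c2 = centroid([p for p, lbl in train if lbl == 2])

# Cross-classify: label each test pattern by its nearest training centroid
correct = sum((1 if sq_dist(p, c1) < sq_dist(p, c2) else 2) == lbl
              for p, lbl in test)
accuracy = correct / len(test)
print(f"cross-classification accuracy: {accuracy:.2f}")  # well above 0.5 chance
```

In the study's negative result (0 vs. 8 degrees), the feedback patterns would no longer share structure across conditions, and this same procedure would land near the 0.5 chance level.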

Meeting abstract presented at VSS 2013
