Abstract
The spatial precision of cortical feedback to V1 was explored using functional magnetic resonance imaging. Feedback may be local, predicting the image point by point, or global, predicting the gist. Images were presented to 12 subjects with the lower right quadrant occluded (Smith & Muckli 2010) and in three spatially shifted versions (0, 2, or 8 degrees). Multivoxel patterns were extracted from the non-stimulated cortex (i.e., cortex receiving only feedback) and entered into a classifier analysis. We first classified between two images presented at the same spatial shift, e.g., between image 1 and image 2 both presented at 0 degrees, and second cross-classified between images presented at different spatial shifts, e.g., training the classifier on images presented at 0 degrees and testing on images presented at 2 degrees. The first analysis revealed that the region of V1 representing the occluded quarter-field carries information that discriminates the surrounding context: we classified between two different images at 76% (p = 0.0001), 68% (p = 0.0060) and 86% (p = 0.0001) accuracy at 0, 2 and 8 degrees of shift respectively (chance = 50%). The second analysis revealed that this holds only up to a certain limit of spatial shift. The "occluded" portion of V1 discriminated the surrounding visual context up to a 2-degree shift, as we could cross-classify between images at 0 and 2 degrees (64%, p = 0.0177), but not between 0 and 8 degrees (53%, p = 0.1883) or between 2 and 8 degrees (52%, p = 0.3190). Cross-classification performance dropped to chance once the spatial shift became too large. This relatively high precision is somewhat surprising given that feedback projections fan out anatomically, and may indicate a precise interaction between lateral connections and feedback projections.
Meeting abstract presented at VSS 2013
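The two-step analysis above (within-shift classification, then cross-classification across shifts) can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the data are synthetic, the classifier choice (logistic regression; the abstract does not name the algorithm) is an assumption, and the way patterns change with spatial shift is a toy model in which the class-specific voxel pattern rotates toward an unrelated pattern as the shift grows.

```python
# Toy sketch of within-shift classification vs. cross-classification.
# Synthetic "voxel patterns" only; classifier and generative model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Fixed class signatures for image 1 vs image 2 at 0 degrees, plus
# unrelated signatures that the patterns rotate toward at large shifts.
sig1, sig2 = rng.standard_normal(n_voxels), rng.standard_normal(n_voxels)
alt1, alt2 = rng.standard_normal(n_voxels), rng.standard_normal(n_voxels)

def signatures(shift_deg):
    """Class mean patterns at a given spatial shift (toy rotation model)."""
    t = (shift_deg / 8.0) * (np.pi / 2)  # 0 deg -> no rotation, 8 deg -> full
    return (np.cos(t) * sig1 + np.sin(t) * alt1,
            np.cos(t) * sig2 + np.sin(t) * alt2)

def dataset(shift_deg):
    """Noisy multivoxel trials for both images at one shift level."""
    s1, s2 = signatures(shift_deg)
    X = np.vstack([s1 + rng.standard_normal((n_trials, n_voxels)),
                   s2 + rng.standard_normal((n_trials, n_voxels))])
    y = np.array([0] * n_trials + [1] * n_trials)
    return X, y

# Step 1: within-shift classification (train and test at the same shift).
X0, y0 = dataset(0)
clf = LogisticRegression(max_iter=1000).fit(X0[::2], y0[::2])
acc_within_0 = clf.score(X0[1::2], y0[1::2])

# Step 2: cross-classification (train at 0 degrees, test at 2 and 8 degrees).
clf = LogisticRegression(max_iter=1000).fit(*dataset(0))
acc_cross_2 = clf.score(*dataset(2))
acc_cross_8 = clf.score(*dataset(8))

print(f"within 0 deg: {acc_within_0:.2f}")
print(f"train 0 deg -> test 2 deg: {acc_cross_2:.2f}")
print(f"train 0 deg -> test 8 deg: {acc_cross_8:.2f}")
```

In this toy model, as in the abstract's results, within-shift classification and the 0-to-2-degree transfer succeed, while transfer to an 8-degree shift falls toward chance because the discriminative pattern at 8 degrees no longer aligns with the one the classifier learned.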