Abstract
The receptive fields of V1 neurons are driven by sensory input. V1 neurons are also modulated by cortical feedback, which can be conceptualised as neuronal “feedback fields”. Feedback fields can be investigated in human V1 by performing multivoxel pattern analyses on voxels that respond to an occluded portion of a visual scene. We investigated the spatial precision of feedback to V1 using 3T and 7T functional magnetic resonance imaging. Images were presented to 27 (3T) and 4 (7T) subjects with the lower right quadrant occluded (Smith & Muckli, 2010), both in their original position and in three spatially shifted versions (shifts selected from 2, 3, 4, 6, 7, and 8 degrees). Multivoxel patterns were extracted from the non-stimulated cortex (i.e. cortex receiving feedback) and entered into a classifier analysis. We tested the precision of feedback by training the classifier on images presented at 0 degrees and testing on the same two scenes shifted by, for example, 2 degrees. The 3T data revealed that the non-stimulated portion of V1 could discriminate the surrounding visual context even when the images were shifted by 4 degrees. The 7T data corroborated that we are measuring the precision of feedback, as cross-classification was only possible in layers of V1 that receive feedback, i.e. supragranular layers. To conclude, cross-classification across spatially shifted images was significant only in the outer layers of cortex, where feedback enters V1. Furthermore, classifier performance decreased to chance level as the spatial shift increased beyond 4 degrees. This result is suggestive of contextual feedback to V1 from a mid-level visual area such as V4, whose receptive fields are large enough to capture similar image features despite their shifting location in the visual field.
Meeting abstract presented at VSS 2015
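The cross-classification logic described above can be sketched as follows: a classifier is trained on multivoxel patterns evoked by images at 0 degrees and tested on patterns evoked by the same scenes at a shifted position. The snippet below is a minimal illustration of that scheme using a linear SVM on synthetic data; the array names, shapes, and the use of scikit-learn are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch of the cross-classification ("cross-decoding") scheme, assuming
# multivoxel patterns have already been extracted from the non-stimulated
# (occluded) portion of V1. All names and shapes below are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_trials, n_voxels = 40, 200  # hypothetical: trials per scene, voxels in occluded ROI

# Patterns for the two scenes presented at 0 degrees (training set) ...
patterns_0deg = rng.standard_normal((2 * n_trials, n_voxels))
labels = np.repeat([0, 1], n_trials)  # scene identity (scene A vs. scene B)

# ... and for the same two scenes shifted by, e.g., 2 degrees (test set).
patterns_shifted = rng.standard_normal((2 * n_trials, n_voxels))

# Train on 0-degree patterns, test on shifted patterns: above-chance accuracy
# would indicate that feedback carries scene information tolerant to the shift.
clf = LinearSVC(C=1.0)
clf.fit(patterns_0deg, labels)
accuracy = clf.score(patterns_shifted, labels)
print(f"Cross-classification accuracy (0 deg -> shifted): {accuracy:.2f} (chance = 0.50)")
```

With real data, this procedure would be repeated for each shift magnitude (2 to 8 degrees) and, for the 7T data, separately for each cortical depth, so that accuracy can be compared across shifts and layers.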