Abstract
Previous studies have demonstrated that the perceived direction of motion of a visual stimulus can be decoded from the pattern of fMRI responses in occipital cortex (Kamitani and Tong, 2006). One possible mechanism is that voxels differ in how they sample direction-selective columns, implying that sub-voxel information may be accessible with fMRI. To assess the possible sources of this direction selectivity, we tested how accuracy on an 8-way direction-classification task varied across visual areas and across subsets of voxels. Functional imaging data were collected using 3D gradient-echo EPI at 7 T (Achieva, Philips; SPMMR Centre, Nottingham) with 1.5 mm isotropic voxels (volume TR = 2 s). In the first set of analyses, we tested how classification accuracy varied with the number of voxels used. Using a ‘searchlight’ technique, which performs classification on spherically defined subsets of voxels (Kriegeskorte et al., 2006), we found above-chance classification performance across several visual areas (V1–V4, V5/hMT+) and in areas of the intraparietal sulcus, over a range of searchlight sizes (radius 7.5–10.5 mm). In the second set of analyses, we examined classification performance after averaging data across voxels with similar visual-angle preferences (defined by retinotopic mapping) within each visual area, prior to classifier training. That classification accuracy was preserved under this averaging, relative to random averaging of voxels, suggests that large-scale biases at the level of retinotopic maps may underlie some part of our results (see also Freeman et al., 2010).
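The core of the searchlight analysis is defining, for each centre voxel, the spherical neighbourhood of voxels whose multivariate pattern is fed to the classifier. A minimal sketch of that neighbourhood selection is below, assuming the 1.5 mm isotropic voxels reported above and a radius given in millimetres; the function name and grid representation are illustrative, not the authors' implementation.

```python
from itertools import product

def searchlight_indices(center, radius_mm, voxel_size_mm=1.5):
    """Return voxel coordinates whose centres lie within radius_mm of
    `center`, assuming an isotropic voxel grid (illustrative sketch)."""
    r_vox = radius_mm / voxel_size_mm          # radius in voxel units
    max_off = int(r_vox)                       # bounding cube half-width
    coords = []
    for dx, dy, dz in product(range(-max_off, max_off + 1), repeat=3):
        # Keep offsets inside the Euclidean sphere, not just the cube.
        if (dx * dx + dy * dy + dz * dz) ** 0.5 <= r_vox:
            coords.append((center[0] + dx, center[1] + dy, center[2] + dz))
    return coords

# With 1.5 mm voxels, a 7.5 mm radius reaches 5 voxels in each direction.
sphere = searchlight_indices((0, 0, 0), radius_mm=7.5)
print(len(sphere))  # 515 voxel coordinates in the sphere
```

In a full analysis this neighbourhood would be intersected with a brain mask, the patterns extracted at those coordinates, and a classifier cross-validated per sphere; only the neighbourhood step is shown here.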