Vision Sciences Society Annual Meeting Abstract  |  September 2011, Volume 11, Issue 11
Multivariate classification of motion direction using high-field fMRI
Author Affiliations
  • Alex Beckett
    Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
  • Jonathan Peirce
    Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
  • Susan Francis
    SPMMRC, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
  • Denis Schluppeck
    Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
Journal of Vision September 2011, Vol.11, 767. doi:https://doi.org/10.1167/11.11.767
Abstract

Previous studies have demonstrated that the perceived direction of motion of a visual stimulus can be decoded from the pattern of fMRI responses in occipital cortex (Kamitani and Tong, 2006). One possible mechanism for this is a difference between voxels in their sampling of direction-selective columns, implying that sub-voxel information may be accessible with fMRI. To assess the possible sources of this direction selectivity, we tested how classification accuracy varied across different visual areas and subsets of voxels for 8-way direction classification. Functional imaging data were collected using 3D gradient-echo EPI at 7 T (Achieva, Philips; SPMMR Centre, Nottingham) with 1.5 mm isotropic voxels (volume TR 2 s). In one set of analyses, we tested how classification accuracy varied with the number of voxels used. We used a ‘searchlight’ technique that performs classification based on spherically defined subsets of voxels (Kriegeskorte et al., 2006) and found classification performance above chance across several visual areas (V1–V4, V5/hMT+) and in areas of the intraparietal sulcus, for a range of searchlight sizes (radius 7.5–10.5 mm). In a second set of analyses, we examined classification performance after combining data across voxels within visual areas (grouping voxels with similar visual angle preference, as measured by retinotopy) before classifier training. Classification accuracy was preserved under this averaging, compared to random averaging of voxels, suggesting that large-scale biases at the level of retinotopic maps may underlie some part of our results (see also Freeman et al., 2010).
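To make the searchlight idea concrete, the sketch below runs a searchlight-style classification on synthetic data: for a spherical neighbourhood of voxels around a chosen centre, it cross-validates an 8-way direction classifier and reports accuracy. This is only an illustrative toy, not the authors' pipeline: the data are random, the grid size, signal strength, and radius are arbitrary choices, and a simple nearest-centroid classifier stands in for whatever multivariate classifier was used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a 6x6x6 grid of voxels, 80 trials, 8 motion directions.
shape = (6, 6, 6)
n_trials, n_dirs = 80, 8
labels = np.arange(n_trials) % n_dirs

# Synthetic responses: each voxel gets a weak random direction preference
# plus noise (a stand-in for real fMRI response patterns).
pref = rng.normal(size=shape + (n_dirs,))            # per-voxel "tuning"
data = 0.8 * pref[..., labels] + rng.normal(size=shape + (n_trials,))

def searchlight_accuracy(data, labels, center, radius=2.5, n_folds=4, n_dirs=8):
    """Cross-validated nearest-centroid accuracy for the voxels within
    `radius` (in voxel units) of `center` -- one searchlight position."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in data.shape[:3]],
                                indexing="ij"), axis=-1)
    mask = np.linalg.norm(grid - np.array(center), axis=-1) <= radius
    X = data[mask].T                                 # trials x voxels
    correct = 0
    folds = np.array_split(np.arange(len(labels)), n_folds)
    for test_idx in folds:
        train = np.setdiff1d(np.arange(len(labels)), test_idx)
        # Class centroids are computed from training trials only.
        cents = np.stack([X[train][labels[train] == d].mean(axis=0)
                          for d in range(n_dirs)])
        d2 = ((X[test_idx][:, None, :] - cents[None]) ** 2).sum(axis=-1)
        correct += (d2.argmin(axis=1) == labels[test_idx]).sum()
    return correct / len(labels)

acc = searchlight_accuracy(data, labels, center=(3, 3, 3))
print(f"searchlight accuracy: {acc:.2f} (chance = {1 / n_dirs:.2f})")
```

A full searchlight analysis would repeat this at every voxel centre, producing an accuracy map that can then be compared against chance (1/8 here) within each visual area.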
