August 2009
Volume 9, Issue 8
Vision Sciences Society Annual Meeting Abstract  |   August 2009
Reading the mind's eye: Decoding object information during mental imagery from fMRI patterns
Author Affiliations
  • Thomas Serre
    McGovern Institute, MIT, Cambridge, MA
  • Leila Reddy
    CerCo-CNRS, Université Paul Sabatier, Toulouse, France
  • Naotsugu Tsuchiya
    Computation & Neural Systems, California Institute of Technology, Pasadena, CA
  • Tomaso Poggio
    McGovern Institute, MIT, Cambridge, MA
  • Michele Fabre-Thorpe
    CerCo-CNRS, Université Paul Sabatier, Toulouse, France
  • Christof Koch
    Computation & Neural Systems, California Institute of Technology, Pasadena, CA
Journal of Vision August 2009, Vol.9, 782. https://doi.org/10.1167/9.8.782
Abstract

Studies have shown that category information for visually presented objects can be read out from fMRI patterns of brain activation spread over several millimeters of cortex. What is the nature and reliability of these activation patterns in the absence of any bottom-up visual input, for example during visual imagery? Previous work has shown that the BOLD response under conditions of visual imagery is weaker and limited to a smaller number of voxels, suggesting that activation patterns during imagery could differ substantially from those observed during actual viewing. Here we ask, first, how well category information can be read out for imagined objects and, second, how the representations of imagined objects compare to those obtained during actual viewing.

We addressed these questions in an fMRI study using four object categories (faces, buildings, tools, and vegetables) in two conditions: in the V-condition, subjects viewed images of the objects, and in the I-condition they imagined them. Using pattern-classification techniques, we found, as previously reported, that object-category information could be reliably decoded in the V-condition, both from category-selective regions (i.e., the FFA and PPA) and from more distributed patterns of fMRI activity in object-selective voxels of ventral temporal cortex. Good classification performance was also observed in the I-condition, indicating that object representations in visual areas provide information about imagined objects even in the absence of any bottom-up signals. Interestingly, when the classifier was trained on the V-condition and tested on the I-condition (or vice versa), performance was comparable to that in the I-condition, suggesting that the patterns of neural activity during imagery and actual viewing are surprisingly similar.
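The cross-condition analysis described above can be illustrated with a minimal sketch: train a decoder on viewing-condition voxel patterns and test it on imagery-condition patterns. All data below are synthetic and the correlation-based nearest-class-mean decoder is one common MVPA choice, not necessarily the classifier used in the study; the simulated imagery patterns are simply weaker and noisier versions of the viewing patterns, mimicking the reduced BOLD response during imagery.

```python
# Hypothetical sketch of cross-condition (V -> I) decoding on synthetic
# "voxel" patterns; the original study used real fMRI data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 40
categories = ["face", "building", "tool", "vegetable"]

# Each category gets a distinct simulated mean activation pattern.
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def simulate(condition):
    """Generate trial-by-voxel patterns for the V (viewing) or I (imagery)
    condition; imagery signal is weaker and noisier (an assumption here)."""
    scale = 1.0 if condition == "V" else 0.5
    noise = 1.0 if condition == "V" else 1.5
    X, y = [], []
    for c in categories:
        for _ in range(n_trials):
            X.append(scale * prototypes[c] + noise * rng.normal(size=n_voxels))
            y.append(c)
    return np.array(X), np.array(y)

def nearest_mean_classify(X_train, y_train, X_test):
    """Correlation-based nearest-class-mean decoder."""
    means = {c: X_train[y_train == c].mean(axis=0) for c in categories}
    preds = []
    for x in X_test:
        corrs = {c: np.corrcoef(x, m)[0, 1] for c, m in means.items()}
        preds.append(max(corrs, key=corrs.get))
    return np.array(preds)

X_v, y_v = simulate("V")   # actual viewing
X_i, y_i = simulate("I")   # mental imagery

# Train on viewing, test on imagery: above-chance accuracy here means the
# two conditions share a common category-specific pattern.
acc_cross = (nearest_mean_classify(X_v, y_v, X_i) == y_i).mean()
print(f"V->I cross-decoding accuracy: {acc_cross:.2f} (chance = 0.25)")
```

Because the imagery patterns are scaled-down versions of the viewing prototypes, the classifier trained on V transfers to I well above the 25% chance level, which is the signature the abstract reports for real data.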

Overall, these results provide strong constraints for computational theories of vision, suggesting that in the absence of bottom-up input, cortical back-projections can selectively re-activate specific patterns of neural activity.

Serre, T., Reddy, L., Tsuchiya, N., Poggio, T., Fabre-Thorpe, M., & Koch, C. (2009). Reading the mind's eye: Decoding object information during mental imagery from fMRI patterns [Abstract]. Journal of Vision, 9(8):782, 782a, http://journalofvision.org/9/8/782/, doi:10.1167/9.8.782.