Thomas Serre, Leila Reddy, Naotsugu Tsuchiya, Tomaso Poggio, Michele Fabre-Thorpe, Christof Koch; Reading the mind's eye: Decoding object information during mental imagery from fMRI patterns. Journal of Vision 2009;9(8):782. doi: 10.1167/9.8.782.
Studies have shown that category information for visually presented objects can be read out from fMRI patterns of brain activation spread over several millimeters of cortex. What is the nature and reliability of these activation patterns in the absence of any bottom-up visual input, for example, during visual imagery? Previous work has shown that the BOLD response under conditions of visual imagery is weaker and limited to a smaller number of voxels, suggesting that activation patterns during imagery could be substantially different from those observed during actual viewing. Here we ask, first, how well category information can be read out for imagined objects and, second, how the representations of imagined objects compare to those obtained during actual viewing.
These questions were addressed in an fMRI study using four categories - faces, buildings, tools, and vegetables - in two conditions: in the V-condition, subjects were presented with images; in the I-condition, they imagined them. Using pattern-classification techniques, we found, as in previous studies, that object-category information could be reliably decoded in the V-condition, both from category-selective regions (i.e., the FFA and PPA) and from more distributed patterns of fMRI activity in object-selective voxels of ventral temporal cortex. Good classification performance was also observed in the I-condition, indicating that object representations in visual areas provide information about imagined objects even in the absence of any bottom-up signals. Interestingly, when the pattern classifier was trained on the V-condition and tested on the I-condition (or vice versa), classification performance was comparable to that in the I-condition, suggesting that the patterns of neural activity during imagery and actual viewing are surprisingly similar to each other.
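The cross-condition decoding logic described above can be sketched in a minimal, self-contained simulation. The abstract does not specify the classifier, so this sketch assumes a simple correlation-based nearest-centroid decoder of the kind commonly used on multi-voxel patterns; the voxel data, signal scales, and noise levels below are synthetic illustrations, not the study's data (the weaker scale in the I-condition stands in for the reduced BOLD response during imagery).

```python
import numpy as np

rng = np.random.default_rng(0)
categories = ["face", "building", "tool", "vegetable"]
n_vox = 200  # number of voxels in the ROI (synthetic choice)

# Synthetic category "templates": each category has a distinct voxel pattern.
templates = rng.normal(size=(len(categories), n_vox))

def simulate_trials(scale, noise, n_trials=20):
    """Simulate trial patterns: category template scaled by `scale` plus noise.
    A lower `scale` mimics the weaker BOLD response during imagery."""
    X, y = [], []
    for c in range(len(categories)):
        X.append(scale * templates[c] + noise * rng.normal(size=(n_trials, n_vox)))
        y.append(np.full(n_trials, c))
    return np.vstack(X), np.concatenate(y)

X_view, y_view = simulate_trials(scale=1.0, noise=1.0)  # V-condition: actual viewing
X_imag, y_imag = simulate_trials(scale=0.4, noise=1.0)  # I-condition: imagery, weaker signal

# Train on the V-condition: one mean pattern (centroid) per category.
centroids = np.vstack([X_view[y_view == c].mean(axis=0) for c in range(len(categories))])

def predict(X):
    """Assign each trial to the training centroid with the highest correlation."""
    Xz = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
    Cz = (centroids - centroids.mean(1, keepdims=True)) / centroids.std(1, keepdims=True)
    corr = Xz @ Cz.T / X.shape[1]  # trials x categories correlation matrix
    return corr.argmax(axis=1)

# Cross-decoding: classifier trained on viewing, tested on imagery.
acc = (predict(X_imag) == y_imag).mean()
print(f"cross-decoding (train V, test I) accuracy: {acc:.2f}  (chance = 0.25)")
```

In this toy setup, cross-decoding succeeds well above the 25% chance level precisely because the imagery patterns are scaled-down but spatially similar versions of the viewing patterns, which is the interpretation the abstract's cross-training result supports.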
Overall, these results provide strong constraints for computational theories of vision, suggesting that in the absence of bottom-up input, cortical backprojections can selectively re-activate specific patterns of neural activity.