Vision Sciences Society Annual Meeting Abstract  |   September 2011
Tuning of human occipitotemporal cortex to sensory, semantic and emotional features during visualisation
Author Affiliations
  • Daniel Mitchell
    MRC Cognition and Brain Sciences Unit, Cambridge, UK
  • Rhodri Cusack
    MRC Cognition and Brain Sciences Unit, Cambridge, UK
Journal of Vision September 2011, Vol.11, 1120. doi:

      Daniel Mitchell, Rhodri Cusack; Tuning of human occipitotemporal cortex to sensory, semantic and emotional features during visualisation. Journal of Vision 2011;11(11):1120.

      © ARVO (1962-2015); The Authors (2016-present)

Mental imagery has fascinated philosophers, scientists and the public alike since antiquity. Neuroimaging now offers a window into this most personal of experiences. Multi-voxel pattern analysis (MVPA) of the fMRI signal has successfully distinguished between small numbers of imagined items or categories, and the neural activity pattern during visualisation of an object has been shown to share similarities with that during perception. However, neuroimaging has not yet been used to characterise the content of complex mental images in any detail. Which stimulus features are being represented? Which aspects of the representation are shared between mental imagery and perception? Here we combine pairwise pattern analysis of a rich stimulus set with novel, adaptive stimulus selection to provide precise, simultaneous characterisation of multiple facets of particular mental images, as represented by human occipitotemporal cortex. Across a large set of 200 naturalistic images, we identified those that, when viewed, evoked the activation pattern most similar to that of a predefined referent image. We term these stimuli the "neural neighbourhood", and by examining which features they share, we characterised the neural representation of the referent along various feature dimensions. In two critical manipulations, we varied the identity of the referent image and whether it was freely viewed or merely imagined. We find coding of precise semantic category and of emotional content, abstracted from low-level sensory properties. The balance of tuning along the various feature dimensions depends on the particular referent and on whether it is viewed or imagined.
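The "neural neighbourhood" procedure described above amounts to ranking a stimulus set by the similarity of each item's evoked voxel pattern to the referent's pattern and keeping the top matches. A minimal sketch of that ranking step, assuming Pearson correlation as the similarity metric (the abstract does not specify one) and hypothetical array shapes and names:

```python
import numpy as np

def neural_neighbourhood(patterns, referent, k=10):
    """Return (indices, r): the k stimuli whose voxel patterns correlate
    most strongly with the referent pattern, and all Pearson r values.

    patterns : (n_stimuli, n_voxels) array of evoked activation patterns
    referent : (n_voxels,) pattern evoked by the referent image
    """
    # z-score each pattern across voxels so a scaled dot product equals Pearson r
    def z(x):
        return (x - x.mean()) / x.std()

    ref_z = z(referent)
    r = np.array([z(p) @ ref_z / referent.size for p in patterns])
    neighbours = np.argsort(r)[::-1][:k]  # indices of most similar stimuli
    return neighbours, r

# Toy example: 200 stimuli x 50 voxels of synthetic data
rng = np.random.default_rng(0)
patterns = rng.standard_normal((200, 50))
referent = patterns[17] + 0.1 * rng.standard_normal(50)  # noisy copy of item 17
top, r = neural_neighbourhood(patterns, referent, k=5)
```

Once the neighbourhood is identified, one would summarise the features (semantic category, emotional content, low-level sensory properties) shared among the `top` stimuli to characterise the referent's representation; that summary step depends on the feature annotations and is omitted here.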

