Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Uncovering neural-based visual-orthographic representations from mental imagery
Author Affiliations
  • Shouyu Ling
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
    Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, US
  • Lorna García Pentón
    MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, UK
  • Blair C. Armstrong
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
    MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, UK
  • Andy C.H. Lee
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
    BCBL. Basque Center on Cognition, Brain, and Language, San Sebastián, Spain
  • Adrian Nestor
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
Journal of Vision August 2023, Vol.23, 5118. https://doi.org/10.1167/jov.23.9.5118
      Shouyu Ling, Lorna García Pentón, Blair C. Armstrong, Andy C.H. Lee, Adrian Nestor; Uncovering neural-based visual-orthographic representations from mental imagery. Journal of Vision 2023;23(9):5118. https://doi.org/10.1167/jov.23.9.5118.
Abstract

Clarifying the neural and representational basis of mental imagery has elicited significant interest in the study of visual recognition. Recently, considerable effort has been directed at uncovering the structure and the content of visual imagery. However, this work has mostly targeted simple visual features (e.g., orientations, shapes, or single letters), limiting the theoretical and practical implications of this research. To address these limitations, the current study aimed to decode and to reconstruct the appearance of single words from mental imagery with the aid of functional magnetic resonance imaging (fMRI). We collected fMRI data from 13 healthy right-handed adults while they passively viewed or mentally imagined the appearance of three-letter concrete nouns with a consonant-vowel-consonant structure. Consistent with previous findings, multivariate analyses demonstrated that pairs of words can be discriminated from neural patterns both when words are viewed and when they are imagined. However, decoding relied more extensively on early visual areas for perception and on higher-level visual areas, such as the visual word form area (vWFA), for imagery. To assess and to visualize the representational content underlying successful decoding, imagery-based image reconstruction was conducted by mapping the neural patterns of visual words during imagery onto a representational feature space extracted from neural signals during perception. This analysis revealed successful imagery-based reconstruction of single words in the early visual cortex as well as in the vWFA. Thus, our findings speak to overlapping neural representations between imagery and perception in both low-level visual areas and higher-order visual cortex. Further, they shed light on the fine-grained neural representations of visual-orthographic information during mental imagery.
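The analysis pipeline described above — pairwise decoding of word identity from multivoxel patterns, followed by mapping imagery patterns into a feature space derived from perception — can be illustrated with a minimal synthetic-data sketch. This is a hypothetical illustration, not the authors' analysis code: it assumes scikit-learn, uses a linear SVM for pairwise classification and PCA as a stand-in for the perception-derived representational feature space, and all dimensions and variable names are invented.

```python
# Hypothetical sketch of pairwise MVPA decoding and a perception-to-imagery
# feature-space mapping, on simulated fMRI patterns (not the authors' code).
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_words, n_trials, n_voxels = 4, 20, 100  # toy dimensions (assumed)

# Simulate perception and imagery patterns that share word-specific signal;
# imagery is given a lower signal-to-noise ratio.
signal = rng.normal(size=(n_words, n_voxels))

def simulate(noise):
    X = (np.repeat(signal, n_trials, axis=0)
         + rng.normal(scale=noise, size=(n_words * n_trials, n_voxels)))
    y = np.repeat(np.arange(n_words), n_trials)
    return X, y

X_perc, y_perc = simulate(noise=1.0)  # passive viewing
X_imag, y_imag = simulate(noise=2.0)  # mental imagery (noisier)

# Pairwise decoding: cross-validated classification of every pair of words.
def pairwise_accuracy(X, y):
    accs = []
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        accs.append(cross_val_score(LinearSVC(), X[mask], y[mask], cv=5).mean())
    return float(np.mean(accs))

acc = pairwise_accuracy(X_imag, y_imag)  # above 0.5 = above chance

# Reconstruction sketch: derive a feature space from perception patterns,
# then express imagery patterns in that space to recover word-level structure.
pca = PCA(n_components=3).fit(X_perc)
imag_coords = pca.transform(X_imag)
```

In the actual study, the feature space would be estimated from neural responses to viewed words rather than by PCA alone, and the imagery coordinates would be mapped back to pixel space to yield reconstructed word images.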
