September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
EEG-based decoding of visual words from perception and imagery
Author Affiliations & Notes
  • Shouyu Ling
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
  • Andy C.H. Lee
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
    Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
  • Blair C. Armstrong
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
BCBL, Basque Center on Cognition, Brain and Language
  • Adrian Nestor
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
Journal of Vision September 2019, Vol.19, 33. doi:https://doi.org/10.1167/19.10.33
Citation: Shouyu Ling, Andy C.H. Lee, Blair C. Armstrong, Adrian Nestor; EEG-based decoding of visual words from perception and imagery. Journal of Vision 2019;19(10):33. https://doi.org/10.1167/19.10.33.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Investigations into the neural basis of reading have made considerable progress in elucidating the cortical locus of orthographic representations. However, much less is known about “what” and “when” specific properties of a word are represented. Furthermore, the relationship between perception and imagery for visual words remains to be clarified. Here, we capitalize on the structure of electroencephalography (EEG) data to examine the neural signature of word processing. Specifically, we investigated whether EEG patterns can support the decoding of visual words from perception and imagery in neurotypical adults. To this end, we collected data corresponding to 80 four-letter high-frequency nouns during a one-back repetition detection task and 8 such nouns during a mental imagery task. Then, EEG pattern analyses were conducted across time- and frequency-domain features to classify the identity of the viewed/imagined words. Our results show that classification accuracy was above chance across participants both for perception and imagery. However, perception- and imagery-based decoding relied on different information. Specifically, the former relied on spatiotemporal information in the proximity of the N170 ERP component recorded at occipito-temporal (OT) electrodes and on lower-frequency bands (i.e., theta and alpha). In contrast, the latter exhibited marked variability across participants over time and sites while relying predominantly on higher-frequency bands (i.e., beta and gamma). Further, EEG-based estimates of word confusability were well explained by visual-orthographic measures of word similarity, especially for perception. Thus, our results document the ability of EEG signals to support decoding of orthographic information. Moreover, they shed light on differences across the neural mechanisms underlying perception and imagery as well as on the visual-orthographic nature of neural word representations.
More generally, the current findings provide a new window into word recognition in terms of underlying features, spatiotemporal dynamics, and neurocomputational principles.
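The pattern-classification logic described in the abstract — assigning each trial's EEG feature vector to the word class whose training patterns it most resembles — can be illustrated with a minimal sketch. This is not the authors' pipeline: the synthetic Gaussian "feature vectors", the two-class setup, and the nearest-centroid classifier are all assumptions chosen for brevity, standing in for real per-trial EEG features (e.g., band power at several electrodes) and for whatever classifier the study actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-trial EEG feature vectors (e.g., spectral
# power at several electrodes). Two hypothetical "word" classes,
# 40 trials each, 16 features per trial.
n_trials, n_feat = 40, 16
mu_a = np.zeros(n_feat)
mu_b = np.full(n_feat, 1.5)          # assumed class separation
X = np.vstack([rng.normal(mu_a, 1.0, (n_trials, n_feat)),
               rng.normal(mu_b, 1.0, (n_trials, n_feat))])
y = np.repeat([0, 1], n_trials)

# Shuffle trials, then split into train/test halves.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
half = len(y) // 2
Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]

# Nearest-centroid classifier: each test trial is assigned to the class
# whose mean training pattern is closest in Euclidean distance.
centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == yte).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the accuracy lands well above the 50% chance level, mirroring the above-chance decoding the abstract reports; with real EEG data, cross-validation and permutation tests would be needed to establish significance.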

×
×

This PDF is available to Subscribers Only

Sign in or purchase a subscription to access this content. ×

You must be signed into an individual account to use this feature.

×