September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
Visual word classification and image reconstruction from EEG-based time-domain and frequency-domain features
Author Affiliations
  • Shouyu Ling
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
  • Andy Lee
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
  • Blair Armstrong
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; BCBL, Basque Center on Cognition, Brain, and Language
  • Adrian Nestor
    Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
Journal of Vision September 2018, Vol.18, 1162. doi:10.1167/18.10.1162
© ARVO (1962-2015); The Authors (2016-present)
Abstract

An increasing body of work has established the ability of neuroimaging data to support image reconstruction for single characters (e.g., letters or digits). Here, we classify and then reconstruct the appearance of whole words from electroencephalography (EEG) data with the aid of time-domain and frequency-domain features. To this end, we recorded EEG signals from 14 right-handed adult participants while they viewed images of high-frequency concrete nouns with a consonant-vowel-consonant structure. Specifically, participants performed a one-back image task while viewing 80 unique words (repeated 96 times across 32 blocks). Time-domain features were provided by signal amplitudes up to 900 ms after stimulus onset for 64 channels. Further, we extracted a large collection of frequency-domain features, including the magnitude-squared coherence, the cross power spectral density phase, and the continuous wavelet pseudopower for multiple frequency bands. These features were then used for pairwise word classification, word space estimation, visual feature synthesis, and image reconstruction. Importantly, we found that EEG-based classification and reconstruction accuracies were well above chance. However, time-domain features systematically outperformed their frequency-domain counterparts. More specifically, we found that: (i) the most diagnostic features in the time domain were concentrated around the N170 ERP component at bilateral occipitotemporal (OT) channels; (ii) in the frequency domain, the most valuable information came from sums of continuous wavelet pseudopower in the mid and high beta bands across OT electrodes, consistent with the involvement of these bands in visual perception and attention; and (iii) complementary information was widely distributed and could be used to further boost performance (e.g., signal amplitudes around N250 and theta-band features).
Thus, our results highlight the diverse sources of information associated with word processing as reflected by the EEG signal. More generally, they demonstrate the feasibility of whole-word image reconstruction from neuroimaging data.
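Two of the frequency-domain features named in the abstract, the magnitude-squared coherence and the cross power spectral density phase, can be sketched for a single channel pair with SciPy. This is an illustrative sketch only, not the authors' pipeline: the sampling rate, segment length, synthetic data, and beta-band limits below are assumptions.

```python
import numpy as np
from scipy.signal import coherence, csd

fs = 512  # assumed sampling rate (Hz); not stated in the abstract
rng = np.random.default_rng(0)

# Two synthetic single-trial "EEG channels", 900 ms epochs
# (stand-ins for a pair of occipitotemporal electrodes).
x = rng.standard_normal(int(0.9 * fs))
y = rng.standard_normal(int(0.9 * fs))

# Magnitude-squared coherence between the channel pair (Welch-based).
f, cxy = coherence(x, y, fs=fs, nperseg=128)

# Phase of the (complex-valued) cross power spectral density.
_, pxy = csd(x, y, fs=fs, nperseg=128)
csd_phase = np.angle(pxy)

# Restrict to an assumed mid/high beta band (20-30 Hz) and
# concatenate into a feature vector for one channel pair.
beta = (f >= 20) & (f <= 30)
features = np.concatenate([cxy[beta], csd_phase[beta]])
```

In practice, features of this kind would be computed across many channel pairs, frequency bands, and trials, and then fed into a pairwise classifier; the continuous wavelet pseudopower mentioned in the abstract could be obtained analogously from the squared magnitude of continuous wavelet coefficients.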

Meeting abstract presented at VSS 2018
