Shouyu Ling, Andy Lee, Blair Armstrong, Adrian Nestor; Visual word classification and image reconstruction from EEG-based time-domain and frequency-domain features. Journal of Vision 2018;18(10):1162. doi: 10.1167/18.10.1162.
An increasing body of work has established the ability of neuroimaging data to support image reconstruction for single characters (e.g., letters or digits). Here, we classify and then reconstruct the appearance of whole words from electroencephalography (EEG) data with the aid of time-domain and frequency-domain features. To this end, we recorded EEG signals from 14 right-handed adult participants while they viewed images of high-frequency concrete nouns with a consonant-vowel-consonant structure. Specifically, participants performed a one-back image task while viewing 80 unique words (each repeated 96 times across 32 blocks). Time-domain features were provided by signal amplitudes up to 900 ms after stimulus onset across 64 channels. Further, we extracted a large collection of frequency-domain features, including the magnitude-squared coherence, the cross power spectral density phase, and the continuous wavelet pseudopower for multiple frequency bands. These features were then used for pairwise word classification, word space estimation, visual feature synthesis, and image reconstruction. Importantly, we found that EEG-based classification and reconstruction accuracies were well above chance. However, time-domain features systematically outperformed their frequency-domain counterparts. More specifically, we found that: (i) the most diagnostic features in the time domain were concentrated around the N170 ERP component at bilateral occipitotemporal (OT) channels; (ii) in the frequency domain, the most valuable information came from sums of continuous wavelet pseudopower in the mid and high beta bands across OT electrodes, consistent with the involvement of these bands in visual perception and attention; and (iii) complementary information was widely distributed and could be used to further boost performance (e.g., signal amplitudes around N250 and theta-band features).
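As an illustrative sketch only (not the authors' pipeline), the two core steps described above — extracting a frequency-domain feature such as magnitude-squared coherence, and performing pairwise word classification on those features — could be combined as follows. The sampling rate, the synthetic beta-band signals, the fixed reference trace, and the leave-one-out nearest-centroid classifier are all assumptions made for this example; only `scipy.signal.coherence` is a real library call.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 512                      # assumed sampling rate (Hz)
n_trials, n_samples = 40, 512

# Synthetic stand-in for single-trial EEG: two "word" conditions, each a
# noisy oscillation at a different beta-band frequency so they are separable.
def make_trials(freq):
    t = np.arange(n_samples) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal((n_trials, n_samples))

word_a = make_trials(18.0)    # mid-beta
word_b = make_trials(26.0)    # high-beta

# Frequency-domain feature: magnitude-squared coherence of each trial
# against a fixed reference trace (a stand-in for a second channel).
ref = np.sin(2 * np.pi * 18.0 * np.arange(n_samples) / fs)

def features(trials):
    return np.array([coherence(tr, ref, fs=fs, nperseg=256)[1] for tr in trials])

X = np.vstack([features(word_a), features(word_b)])
y = np.array([0] * n_trials + [1] * n_trials)

# Pairwise classification: leave-one-out nearest-centroid over the two words.
correct = 0
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    c0 = X[mask & (y == 0)].mean(axis=0)
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
    correct += pred == y[i]

accuracy = correct / len(X)
print(f"leave-one-out pairwise accuracy: {accuracy:.2f}")
```

In a real analysis the reference trace would be another electrode, the feature vector would concatenate many channel pairs and frequency bands, and the classifier would be cross-validated across recording blocks rather than single trials.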
Thus, our results highlight the diverse sources of information associated with word processing as reflected by the EEG signal. More generally, they demonstrate the feasibility of whole-word image reconstruction from neuroimaging data.
Meeting abstract presented at VSS 2018