Abstract
Elucidating the dynamics of visual processing is a major goal in the study of face recognition. Yet the fine-grained pictorial content of face representations and its temporal profile remain to be clarified. To this end, here we apply image reconstruction to electroencephalography (EEG) data. Specifically, we record EEG data from healthy participants viewing face stimuli; we then exploit the spatiotemporal information available in EEG patterns to reconstruct the visual appearance of the corresponding stimuli. Importantly, the structure of visual representations is derived directly from one set of EEG data (e.g., elicited by viewing happy faces); the corresponding features are then used to reconstruct stimuli from a different EEG dataset (e.g., elicited by neutral faces). Analyses based on sequential 10-ms windows indicate that multiple intervals support successful feature derivation and image reconstruction. In particular, reconstruction accuracy peaks in the proximity of the N170 component, as evinced by univariate analyses of the same data. Further, reconstructions based on aggregate data from a larger temporal window (50-650 ms) show a clear boost in accuracy over their smaller-window counterparts, consistent with the hypothesis that distinct visual information becomes available over time. Thus, theoretically, the current work sheds light on the time course of facial information processing; methodologically, it provides a first demonstration that image reconstruction can be carried out with EEG data; and, last, from a translational standpoint, this work points to the possibility of affordable means for neural-based communication of visual information via EEG systems.
Meeting abstract presented at VSS 2017
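The cross-dataset analysis described above (derive a linear mapping between EEG patterns and image features on one stimulus set, then apply it to a held-out set within sequential 10-ms windows) can be illustrated with a minimal sketch. Everything below is hypothetical: the dimensions, the simulated data, the least-squares decoder, and the correlation-based accuracy score are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not the study's actual values):
# 100 face stimuli, 16 EEG channels, 60 sequential 10-ms windows, 8 features.
n_stim, n_chan, n_win, n_feat = 100, 16, 60, 8

# Simulated image-feature values for two stimulus sets
# (standing in for, e.g., happy vs. neutral faces).
feat_a = rng.standard_normal((n_stim, n_feat))
feat_b = rng.standard_normal((n_stim, n_feat))

# Simulated EEG: each window's channel pattern is a noisy linear mixture
# of the stimulus features; the mixing is shared across the two sets, so
# features derived from set A should transfer to set B.
mixing = rng.standard_normal((n_win, n_feat, n_chan))
noise = 0.5
eeg_a = (np.einsum('sf,wfc->swc', feat_a, mixing)
         + noise * rng.standard_normal((n_stim, n_win, n_chan)))
eeg_b = (np.einsum('sf,wfc->swc', feat_b, mixing)
         + noise * rng.standard_normal((n_stim, n_win, n_chan)))

def window_accuracy(eeg_train, feat_train, eeg_test, feat_test):
    """Fit a linear map EEG -> features on one set, score it on the other.

    Returns one accuracy per 10-ms window: the mean Pearson correlation
    between predicted and true feature values on the held-out set.
    """
    accs = []
    for w in range(eeg_train.shape[1]):
        # Least-squares decoder for this window (trained on set A).
        W, *_ = np.linalg.lstsq(eeg_train[:, w, :], feat_train, rcond=None)
        pred = eeg_test[:, w, :] @ W  # applied to set B
        # Average correlation across feature dimensions.
        r = [np.corrcoef(pred[:, f], feat_test[:, f])[0, 1]
             for f in range(pred.shape[1])]
        accs.append(float(np.mean(r)))
    return np.array(accs)

acc = window_accuracy(eeg_a, feat_a, eeg_b, feat_b)
print(f"mean cross-set accuracy over {len(acc)} windows: {acc.mean():.2f}")
```

In this toy setting, accuracy is well above chance in every window because the simulated feature-to-channel mixing is stationary and shared across sets; in real EEG data, accuracy would instead vary over time, peaking (per the abstract) near the N170 component.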