Abstract
Voxelwise encoding models of blood oxygen level-dependent (BOLD) signals offer insight into how information at different visual field locations is simultaneously represented in visual cortex. Here, we sought to extend this modeling approach to visual evoked potentials (VEPs) measured at different scalp locations by capitalizing on the principles of the cruciform model (Jeffreys and Axford, 1972a,b). However, using raw VEPs to simultaneously map the visual field onto the scalp topography of EEG electrodes would yield overlapping components that differ in polarity as a function of visual field location, such that a complete simultaneous topographic mapping of the visual field would be largely obscured by dipole cancellation. To circumvent this problem, we mapped the localized outputs of a log-Gabor filter encoding model onto different VEPs within a geometric state-space framework. Specifically, we measured the correspondence between the state-space geometry produced by our encoding model at every location within large-field visual scenes and the state-space geometry of VEPs measured at each electrode on the posterior scalp. Data were gathered in a standard VEP paradigm in which participants (n = 23) viewed 150 grayscale scenes (18.5 degrees of visual angle) while undergoing 128-channel EEG. The encoding-model state space produced at each location of the visual field was then regressed against the neural state space produced at each time point for each electrode. The results show that each posterior electrode can be simultaneously mapped to unique regions of the visual field, with a complete map of the entire visual field represented across all posterior electrodes beginning at 75 ms post-stimulus onset.
The success of this state-space mapping approach suggests that evoked potentials can be used to assess the temporal encoding of visual information at different locations within the visual field, thereby providing insight into visual feature use over space and time.