Abstract
The evoked potentials of the human visual system are known to carry information regarding the images that produce them. However, the relationship between image statistics and macro-scale neuronal responses remains unclear. Here, we approach the problem by mapping the state-space geometry of evoked potentials with images drawn from different locations within a natural scene state-space. We also mapped where the evoked responses to different scenes fall within neural state-space, and assessed how much of the variance defining that space could be explained by particular image statistics. Data were gathered in a steady-state visual evoked potential paradigm in which participants (n = 18) viewed 700 grayscale visual scenes while undergoing 128-channel EEG. Scene images were contrast modulated at a sinusoidal flicker rate of 5 Hz for 6000 msec while participants engaged in a distractor task at fixation. Electrode data with the highest signal-to-noise ratio were submitted to a principal component (PC) analysis on a participant-by-participant basis. The first three PCs were found to account for a median of 90% of the response variance. Interestingly, the distribution of responses to different scenes within that space was highly non-Gaussian, and the first PC defining that space showed remarkable stability across participants (Cronbach's alpha = 0.93). Further, stimuli in image state-space were mapped to their response locations in neural state-space with minimal error using linear transformation matrices. Lastly, a median of 37.7%, 14.6%, and 11.1% of the variance along the first three PCs (respectively) was explained by standard image statistics (amplitude spectrum slope, band-limited contrast, orientation bias, phase-only second spectrum slope, structural sparseness, and whitened skewness and kurtosis), with phase-only second spectrum slope accounting for most of the unique variance.
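The analysis pipeline described above can be sketched in outline: a per-participant PCA of the evoked responses, followed by a least-squares linear map from image-statistic space to the neural PC space. This is an illustrative sketch only, not the authors' code; the data are synthetic and all array shapes and variable names are hypothetical.

```python
# Sketch (assumed shapes, synthetic data): per-participant PCA of
# scene-evoked responses, then a least-squares linear map from
# image statistics to the scene coordinates on the first 3 PCs.
import numpy as np

rng = np.random.default_rng(0)
n_scenes, n_timepoints, n_stats = 700, 300, 7  # 7 image statistics, as in the abstract

# Synthetic stand-in for one participant's scene-averaged SSVEP responses
responses = rng.standard_normal((n_scenes, n_timepoints))

# PCA via SVD of the mean-centered response matrix
X = responses - responses.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)      # proportion of variance per PC
pc_scores = U[:, :3] * s[:3]             # scene coordinates on the first 3 PCs

# Linear transformation from image-statistic space to neural state-space
stats = rng.standard_normal((n_scenes, n_stats))    # e.g. spectrum slope, contrast, ...
A = np.hstack([stats, np.ones((n_scenes, 1))])      # add intercept column
W, *_ = np.linalg.lstsq(A, pc_scores, rcond=None)   # fitted transformation matrix

# Variance along each PC explained by the image statistics (R^2 per PC)
resid = pc_scores - A @ W
r2 = 1 - np.sum(resid**2, axis=0) / np.sum(
    (pc_scores - pc_scores.mean(axis=0))**2, axis=0)
```

With real data, `var_explained[:3]` would correspond to the ~90% figure reported above, and `r2` to the per-PC variance explained by the image statistics.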
Together, the results demonstrate that this approach has much promise for understanding how the brain maps our visual world onto neural representations.
Meeting abstract presented at VSS 2018