Sasskia Brüers, Rufin VanRullen; Visual target detection in temporal white-noise: A "universal" forward model using oscillatory impulse response functions. Journal of Vision 2016;16(12):1222. doi: 10.1167/16.12.1222.
© 2017 Association for Research in Vision and Ophthalmology.
Brain activity is inherently rhythmic. EEG responses to white-noise luminance sequences can serve to derive (by cross-correlation) the visual "impulse response function" (IRF), which displays large oscillatory components. The IRF can then be used mathematically (by convolution) to estimate EEG oscillatory responses to new white-noise sequences, without actually recording EEG. In turn, visual perception (e.g. for a brief target) is related to moment-by-moment fluctuations in both the phase and amplitude of brain rhythms. It logically follows that the detection of a target embedded in a white-noise sequence must be related to certain IRF features. Some of these features are subject-specific, and can serve to design noise sequences optimized for target detection by one specific observer (a form of neuro-encryption: Brüers & VanRullen, VSS 2015). Other features, however, are subject-independent, reflecting "universal" properties of perception and oscillations; these are the properties studied here. We derived a "universal IRF" by averaging EEG IRFs from 20 observers. We then created a "universal forward model" taking as input a target's position within a white-noise luminance sequence, modeling oscillatory brain responses to that random sequence (by convolution with the IRF), and using specific features (phase, amplitude) of these modeled oscillations to output a prediction regarding the target's visibility. The prediction was then tested on a separate group of observers. No systematic differences in white-noise sequences could explain why some targets were more visible than others (as verified e.g. using "classification images"). Yet by considering the typical oscillatory brain responses that this noise was expected to produce (without actually recording them), we could guess which targets would be detected. 
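The two core steps described above, estimating the IRF by cross-correlating a white-noise luminance sequence with the recorded EEG, then convolving that IRF with a new noise sequence to model the oscillatory response without recording it, can be sketched in NumPy. This is a minimal illustration with simulated data standing in for real recordings; the sampling rate, sequence length, and the damped 10 Hz "ground-truth" IRF are all assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 160            # assumed sampling rate (Hz); not specified in the abstract
n_samples = fs * 6  # one 6-s white-noise luminance sequence (illustrative)
irf_len = fs        # model the IRF over ~1 s of stimulus-response lags

# Simulated stand-ins for real data: a white-noise luminance sequence and
# an EEG trace generated from a known (here, damped 10 Hz) impulse response.
t = np.arange(irf_len) / fs
true_irf = np.sin(2 * np.pi * 10 * t) * np.exp(-t / 0.3)
luminance = rng.standard_normal(n_samples)
eeg = (np.convolve(luminance, true_irf)[:n_samples]
       + 0.5 * rng.standard_normal(n_samples))

# Step 1: estimate the IRF by cross-correlation -- for each lag, average the
# EEG weighted by the luminance value presented that many samples earlier.
irf_est = np.array(
    [np.dot(luminance[: n_samples - lag], eeg[lag:]) for lag in range(irf_len)]
) / n_samples

# Step 2: forward model -- convolve the estimated IRF with a *new* white-noise
# sequence to predict its oscillatory EEG response without recording it.
new_noise = rng.standard_normal(n_samples)
predicted_eeg = np.convolve(new_noise, irf_est)[:n_samples]
```

Because white noise is (approximately) uncorrelated across lags, the cross-correlation in Step 1 recovers the impulse response directly; averaging such per-subject IRFs across observers would give the "universal IRF" used here.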
Oscillatory phase (and to a lesser extent, amplitude) in several frequency bands robustly predicted perception, with a peak in the theta band (4-8 Hz, ~10% modulation, p < 0.001). We are now exploring ways to optimize predictions by combining oscillatory features.
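Reading out the theta-band phase and amplitude features from a modeled response is typically done by band-pass filtering and taking the analytic signal. The sketch below assumes a SciPy-style pipeline with an illustrative sampling rate and a random trace standing in for a convolution-predicted response; the abstract does not specify the actual filter design.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 160  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
modeled_eeg = rng.standard_normal(fs * 6)  # stand-in for a predicted trace

# Band-pass the modeled response in the theta band (4-8 Hz), then take the
# analytic signal to read out instantaneous phase and amplitude -- the
# oscillatory features used to predict target visibility.
b, a = butter(3, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, modeled_eeg)
analytic = hilbert(theta)
phase = np.angle(analytic)      # instantaneous phase, radians in [-pi, pi]
amplitude = np.abs(analytic)    # instantaneous amplitude envelope
```

Sorting targets by the modeled theta phase at target onset and comparing detection rates across phase bins is one natural way to quantify the kind of phase-dependent modulation of visibility reported above.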
Meeting abstract presented at VSS 2016