Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Visual target detection in temporal white-noise: A "universal" forward model using oscillatory impulse response functions
Author Affiliations
  • Sasskia Brüers
    Université Paul Sabatier, Toulouse, France
  • Rufin VanRullen
    Université Paul Sabatier, Toulouse, France
Journal of Vision September 2016, Vol. 16(12), 1222.
Brain activity is inherently rhythmic. EEG responses to white-noise luminance sequences can serve to derive (by cross-correlation) the visual "impulse response function" (IRF), which displays large oscillatory components. The IRF can then be used mathematically (by convolution) to estimate EEG oscillatory responses to new white-noise sequences, without actually recording EEG. In turn, visual perception (e.g. for a brief target) is related to moment-by-moment fluctuations in both the phase and amplitude of brain rhythms. It logically follows that the detection of a target embedded in a white-noise sequence must be related to certain IRF features. Some of these features are subject-specific, and can serve to design noise sequences optimized for target detection by one specific observer (a form of neuro-encryption: Brüers & VanRullen, VSS 2015). Other features, however, are subject-independent, reflecting "universal" properties of perception and oscillations; these are the properties studied here. We derived a "universal IRF" by averaging EEG IRFs from 20 observers. We then created a "universal forward model" taking as input a target's position within a white-noise luminance sequence, modeling oscillatory brain responses to that random sequence (by convolution with the IRF), and using specific features (phase, amplitude) of these modeled oscillations to output a prediction regarding the target's visibility. The prediction was then tested on a separate group of observers. No systematic differences in white-noise sequences could explain why some targets were more visible than others (as verified e.g. using "classification images"). Yet by considering the typical oscillatory brain responses that this noise was expected to produce (without actually recording them), we could guess which targets would be detected.
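The core pipeline described above can be sketched with simulated data: cross-correlate a white-noise stimulus with the EEG it evokes to recover the IRF, then convolve that IRF with a new noise sequence to predict the oscillatory response without recording. This is a minimal illustration, not the authors' code; the sampling rate, IRF shape, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 160                      # Hz, hypothetical sampling rate
n = fs * 60                   # one minute of white-noise stimulation

# Hypothetical "ground-truth" IRF: a damped ~10 Hz oscillation
t = np.arange(fs) / fs        # 1-second IRF support
true_irf = np.exp(-5 * t) * np.sin(2 * np.pi * 10 * t)

# Simulated EEG: the brain convolves the luminance sequence with its IRF
stim = rng.standard_normal(n)
eeg = np.convolve(stim, true_irf)[:n] + 0.5 * rng.standard_normal(n)

# IRF estimation by cross-correlation of stimulus and EEG: for white-noise
# input, the stimulus-EEG cross-correlation at lag k recovers irf[k]
est_irf = np.array([stim[: n - lag] @ eeg[lag:] / (n - lag)
                    for lag in range(len(true_irf))])

# Forward model: predict the oscillatory response to a NEW noise sequence
# by convolution with the estimated IRF, without "recording" any EEG
new_stim = rng.standard_normal(n)
predicted = np.convolve(new_stim, est_irf)[:n]
```

Because the stimulus is white, its autocorrelation is (close to) a delta function, which is what lets plain cross-correlation recover the IRF directly.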
Oscillatory phase (and to a lesser extent, amplitude) in several frequency bands robustly predicted perception, with a peak in the theta band (4-8 Hz, ~10% modulation, p < 0.001). We are now exploring ways to optimize predictions by combining oscillatory features.
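A phase/amplitude readout of the kind described can be sketched with a band-pass filter plus Hilbert transform. Only the 4-8 Hz theta band and the ~10% modulation size come from the abstract; the filter order, the cosine-shaped phase-to-detection mapping, and all signal parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 160                                   # Hz, hypothetical sampling rate
rng = np.random.default_rng(1)
predicted = rng.standard_normal(10 * fs)   # stand-in for a modeled EEG trace

# Zero-phase band-pass in the theta band (4-8 Hz)
b, a = butter(3, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, predicted)

# Instantaneous phase and amplitude from the analytic signal
analytic = hilbert(theta)
phase = np.angle(analytic)                 # radians, in [-pi, pi]
amplitude = np.abs(analytic)               # envelope

# Hypothetical readout: detection probability modulated by ~10%
# depending on theta phase at the moment the target appears
target_idx = 5 * fs                        # target presented at t = 5 s
p_detect = 0.5 * (1 + 0.10 * np.cos(phase[target_idx]))
```

In this toy mapping, detection probability swings between 0.45 and 0.55 across the theta cycle, mirroring the ~10% phase modulation reported in the abstract.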

Meeting abstract presented at VSS 2016

