Vision Sciences Society Annual Meeting Abstract  |  December 2022
Journal of Vision, Volume 22, Issue 14 (Open Access)
Reconstructing attended and unattended colors from human scalp electroencephalography
Author Affiliations
  • Angus Chapman
    University of California San Diego
  • Viola Störmer
    Dartmouth College
Journal of Vision December 2022, Vol.22, 4303. doi:https://doi.org/10.1167/jov.22.14.4303
Abstract

Human electroencephalography (EEG) studies have shown that attention enhances relevant over irrelevant visual features (e.g., Andersen et al., 2013). Such amplitude modulations have been a major focus of feature-based attention research, alongside recent studies showing that the spatial pattern of scalp-recorded EEG activity conveys critical information about different stimuli. To date, the majority of these studies have focused on decoding visual-spatial features (location, orientation). Here, we test whether color—a non-spatial feature—can be reconstructed from the scalp activity pattern using steady-state visual evoked potentials (SSVEPs) and inverted encoding modeling (IEM). Furthermore, we test how these color reconstructions vary across different types of feature-based attention tasks. Participants (N=17) performed two color-based attention tasks: 1) a non-competitive task, in which they monitored a single array of colored dots to detect a brief interval of coherent motion; and 2) a selective attention task, in which they focused on target-colored dots among distractors to detect a brief luminance decrease. In the selective task, the target and distractor colors were either distinct (180° apart on a CIELAB color wheel) or similar (63° apart). Stimuli flickered at different frequencies to elicit separable SSVEPs, which we used to generate color-selective response profiles based on their spatially distributed patterns. Across both tasks, information about the target and distractor colors was reliably recovered from single trials with IEMs, confirming the usefulness of this technique for investigating feature representations. Model-based reconstructions in the selective attention task were stronger for more distinct colors, suggesting that perceptually similar features may elicit less informative SSVEP signals due to greater overlap in the neural populations that encode them.
Broadly, our results demonstrate that SSVEPs along with IEMs can be used to investigate non-spatial visual features that may not be as well represented as spatial information in event-related potentials or alpha-band signals.
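As a rough illustration of the kind of analysis pipeline the abstract describes (not the authors' actual code), the sketch below shows the two core steps: extracting per-electrode SSVEP amplitude at a tagged flicker frequency via an FFT, and fitting/inverting a standard inverted encoding model. All specifics here are assumptions for illustration — the 9-channel half-rectified sinusoidal basis, the hue space in degrees, and the array shapes are hypothetical, not taken from the study.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """FFT amplitude at the tagged flicker frequency, per electrode.

    eeg: (n_electrodes, n_samples) array; fs: sampling rate in Hz.
    Assumes freq falls on an exact FFT bin (i.e., an integer number of
    cycles fits in the epoch).
    """
    n = eeg.shape[-1]
    spectrum = np.abs(np.fft.rfft(eeg, axis=-1)) / n
    return spectrum[..., int(round(freq * n / fs))]

def make_basis(n_channels=9, n_points=360):
    """Hypothetical channel basis: half-wave-rectified sinusoids raised to
    a power, evenly spaced over circular hue space (degrees)."""
    centers = np.arange(n_channels) * (360.0 / n_channels)
    hues = np.arange(n_points)
    basis = np.empty((n_channels, n_points))
    for i, c in enumerate(centers):
        basis[i] = np.maximum(np.cos(np.deg2rad(hues - c)), 0) ** (n_channels - 1)
    return basis

def iem_train(B_train, C_train):
    """Least-squares weights W (electrodes x channels) for B = W @ C,
    where B is electrodes x trials and C is channels x trials."""
    return B_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

def iem_invert(W, B_test):
    """Invert the encoding model: reconstruct channel responses for
    held-out trials from their electrode patterns."""
    return np.linalg.inv(W.T @ W) @ W.T @ B_test
```

In a real analysis, trials would be split into independent training and test sets (e.g., cross-validation), and single-trial reconstructions would typically be re-centered on each trial's presented color and averaged to form the response profiles whose amplitude is compared across attention conditions.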
