Vision Sciences Society Annual Meeting Abstract  |   August 2012
Real-time decoding and training of attention
Author Affiliations
  • Megan T. deBettencourt
    Princeton Neuroscience Institute, Princeton University
  • Ray F. Lee
    Princeton Neuroscience Institute, Princeton University
  • Jonathan D. Cohen
    Princeton Neuroscience Institute, Princeton University
    Department of Psychology, Princeton University
  • Kenneth A. Norman
    Princeton Neuroscience Institute, Princeton University
    Department of Psychology, Princeton University
  • Nicholas B. Turk-Browne
    Princeton Neuroscience Institute, Princeton University
    Department of Psychology, Princeton University
Journal of Vision August 2012, Vol. 12, 377. https://doi.org/10.1167/12.9.377
Abstract

Selective attention is needed to prioritize the subset of sensory information that is most relevant to our goals. Unfortunately, selective attention is prone to lapses, even in situations where sustaining focused attention is crucial (e.g., when driving in traffic). We propose that such lapses occur partly because we lack a subjective sense of when we are or are not attending well, and that with an appropriate feedback signal, attention can be trained and enhanced. We report initial steps in the development of a closed-loop real-time fMRI system in which we use multivariate pattern analysis to provide neurofeedback and train attention. During an fMRI session, observers viewed a continuous stream of composite face/scene images; occasional shift cues indicated which category should be attended and responded to. Data were pulled from the scanner and preprocessed in real time (with motion correction, masking, smoothing, and temporal filtering). Whole-brain data obtained under each category cue were used to train a classifier to predict observers’ attentional state. Although the stimuli were identical in the attend-face and attend-scene conditions, we obtained highly reliable classification performance in real time for individual trials. This successful real-time decoding allows us to provide immediate and time-varying feedback to observers regarding how well they are attending to the cued category (e.g., tinting the screen background more green or red to indicate increasingly correct or incorrect attentional focus, respectively). By staircasing behavioral performance to a fixed level of accuracy before providing feedback (by adding phase-scrambled noise to the images), we can also examine whether neurofeedback (vs. sham or no feedback) improves accuracy in the behavioral task. In sum, by applying multivariate pattern analysis to fMRI data in real time, we can provide observers with a sophisticated and timely readout of attentional focus that may prove useful in training attention.
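
The closed-loop cycle described above (decode the current attentional state from each incoming volume, then translate decoder evidence into a screen tint) can be sketched in a few lines. The following is a minimal illustration, not the authors’ implementation: scanner I/O and preprocessing are replaced with simulated arrays, and both the classifier choice (logistic regression) and the linear red-to-green color mapping are assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def train_attention_classifier(patterns, labels):
    """Fit a decoder on whole-brain patterns from the cued training trials.

    patterns : (n_trials, n_voxels) array of preprocessed BOLD patterns
    labels   : (n_trials,) array, 0 = attend-face, 1 = attend-scene
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(patterns, labels)
    return clf


def feedback_tint(clf, volume, cued_category):
    """Map decoder evidence for the cued category onto an (R, G, B) tint.

    Greener when the decoded state matches the cue, redder when it does not.
    """
    p_scene = clf.predict_proba(volume.reshape(1, -1))[0, 1]
    p_cued = p_scene if cued_category == "scene" else 1.0 - p_scene
    # Interpolate linearly between red (p_cued = 0) and green (p_cued = 1).
    return (int(255 * (1.0 - p_cued)), int(255 * p_cued), 0)


# Toy usage with simulated data standing in for real-time scanner input.
rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 5000
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)
X[y == 1, :50] += 0.5                       # inject a weak category signal
clf = train_attention_classifier(X, y)

new_volume = rng.standard_normal(n_voxels)  # one incoming brain volume
print(feedback_tint(clf, new_volume, cued_category="scene"))
```

The staircasing step can be sketched similarly. The 1-up/1-down rule below is an assumption (the abstract does not name the staircase procedure); it adjusts the proportion of phase-scrambled noise in the composite images to hold accuracy near a target level before feedback begins.

```python
def update_noise_level(noise, correct, step=0.05):
    """Assumed 1-up/1-down staircase: raise the phase-scrambled noise
    proportion after a correct response, lower it after an error."""
    noise = noise + step if correct else noise - step
    return min(max(noise, 0.0), 1.0)  # keep the proportion in [0, 1]
```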

Meeting abstract presented at VSS 2012
