August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
Predicting moment-to-moment attentional state
Author Affiliations
  • Monica D. Rosenberg
    Department of Psychology, Yale University
  • Emily S. Finn
    Interdepartmental Neuroscience Program, Yale University
  • R. Todd Constable
    Interdepartmental Neuroscience Program, Yale University
  • Marvin M. Chun
    Department of Psychology, Yale University
Journal of Vision August 2014, Vol. 14, 634.

      Monica D. Rosenberg, Emily S. Finn, R. Todd Constable, Marvin M. Chun; Predicting moment-to-moment attentional state. Journal of Vision 2014;14(10):634.

      © ARVO (1962-2015); The Authors (2016-present)

Although fluctuations in sustained attention are ubiquitous, most psychological experiments treat them as noise, averaging performance over many trials. It would be useful, however, to track and predict trial-to-trial attentional state. The current study does so using multivoxel pattern analysis (MVPA) of fMRI data collected during n-back tasks of varying load. Stimuli were face images centrally overlaid on scenes; participants were instructed to attend to the faces and ignore the scenes. Tasks consisted of a baseline 1-back task, in which participants responded to every face that differed from the previous one (~90% of trials) and withheld response to repeated faces (~10%); a perceptual load task (a 1-back task with degraded faces); and a working memory load task (a 2-back task). At each correct response, reaction time (RT) variability, calculated as the normalized absolute deviation of that trial's RT from the mean RT of the task, was used as an index of attentional state. In each task, each participant's 50% most variable trials were labeled "out of the zone" and the 50% least variable trials were labeled "in the zone." RT variability has previously been used to track attentional fluctuations (e.g., Esterman et al., 2013), and in the current study it predicted performance: participants with less variable RTs showed higher d′. Linear support vector machine classifiers were trained on voxelwise neural activity to predict each participant's attentional state (in vs. out of the zone) in each task using a 90-fold cross-validation procedure. Classifiers trained on regions of the default mode and dorsal attention networks, both implicated in attentional performance, predicted trial-to-trial attentional state with above-chance accuracy in all three tasks. Classifiers trained on the fusiform face area were successful only in the perceptual and working memory load tasks, while classifiers trained on the parahippocampal place area were successful only in the baseline task.
These results suggest that MVPA can be used to predict attentional state on a trial-to-trial basis.
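The labeling and classification pipeline described above can be sketched roughly as follows. This is a minimal illustration using scikit-learn, with synthetic random data standing in for the voxelwise fMRI patterns; all variable names and the trial/voxel counts are hypothetical, and `cv=90` is only an approximation of the 90-fold procedure, whose exact fold construction the abstract does not specify.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-trial reaction times and voxelwise activity
# patterns (real inputs would come from the fMRI n-back tasks).
n_trials, n_voxels = 180, 500
rt = rng.gamma(shape=5.0, scale=0.1, size=n_trials)   # seconds
patterns = rng.standard_normal((n_trials, n_voxels))

# RT variability: normalized absolute deviation of each trial's RT
# from the mean RT of the task.
rt_var = np.abs(rt - rt.mean()) / rt.std()

# Median split: the 50% most variable trials are "out of the zone" (1),
# the 50% least variable are "in the zone" (0).
labels = (rt_var > np.median(rt_var)).astype(int)

# Linear SVM trained on voxelwise activity to predict attentional state,
# evaluated with 90-fold cross-validation.
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, patterns, labels, cv=90)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

With random data the accuracy should hover near chance (0.5); the reported above-chance decoding comes from real activity in the default mode, dorsal attention, and category-selective regions.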

Meeting abstract presented at VSS 2014

