Vision Sciences Society Annual Meeting Abstract | August 2017
Did you see that? Examining whether statistical learning can elicit category-specific EEG activity in the absence of visual stimuli
Author Affiliations
  • Joshua Zosky
    Department of Psychology, University of Nebraska - Lincoln
  • Matthew Johnson
    Department of Psychology, University of Nebraska - Lincoln
  • Michael Dodd
    Department of Psychology, University of Nebraska - Lincoln
Journal of Vision August 2017, Vol. 17(10):1073. https://doi.org/10.1167/17.10.1073
© ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

It is well established that the visual system can extract statistical regularities in transitional probabilities between objects in a serial stream, making statistical learning paradigms well-suited to studying the neural mechanisms of learning. For example, previous fMRI studies have revealed that anticipating a stimulus that is predicted by statistical regularities in a sequence can produce item-specific activity even when the anticipated stimulus itself is not shown (Kok, Failing, & de Lange, 2014; Puri, Wojciulik, & Ranganath, 2009). However, little is known about the time course of these anticipatory effects at temporal resolutions finer than that afforded by fMRI. Thus, the present study used EEG with a statistical learning paradigm in which auditory syllables were paired with either faces or objects in an initial acquisition phase, such that certain syllables were always paired with specific faces, certain syllables were always paired with specific objects, and a subset of syllables were not assigned to any picture. Following acquisition, participants took part in a second session that was identical to acquisition except that 25% of (anticipated) face and object pictures were randomly omitted. The critical questions are whether anticipated face/object pictures still produce category-specific EEG activity (e.g., the face-specific N170 ERP) when they are unexpectedly omitted, and at what point in the EEG time course such category-specific activity emerges. Critically, in the absence of anticipated stimuli, the waveform elicited in response to the paired associate closely resembles the waveform observed when the anticipated stimulus is actually present. The results shed new light on the neural mechanisms underlying visual statistical learning and provide insight regarding the extent to which category-specific anticipatory activity is intentional versus automatic.

Meeting abstract presented at VSS 2017
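
To make the test-phase design described above concrete, the following minimal sketch builds a trial list in which each paired syllable deterministically predicts a specific face or object picture, a subset of syllables predicts no picture, and exactly 25% of the anticipated pictures are randomly withheld. This is an illustrative sketch only, not the authors' stimulus code; the syllable and picture labels, the number of pairings per category, and the number of repetitions per syllable are hypothetical assumptions.

```python
# Illustrative sketch of the test-phase trial structure described in the abstract.
# All labels and counts below are hypothetical placeholders.
import random

random.seed(0)

# Hypothetical pairings: four face-predicting syllables, four object-predicting
# syllables, and four syllables assigned to no picture.
face_pairs = {f"syl_f{i}": f"face_{i}" for i in range(4)}
object_pairs = {f"syl_o{i}": f"object_{i}" for i in range(4)}
unpaired = [f"syl_u{i}" for i in range(4)]


def build_test_phase(n_repeats=20, omission_rate=0.25):
    """Return a shuffled list of (syllable, picture_or_None, condition) trials."""
    trials = []
    n_omitted = int(round(n_repeats * omission_rate))
    for syllable, picture in {**face_pairs, **object_pairs}.items():
        category = "face" if syllable in face_pairs else "object"
        # Withhold the anticipated picture on exactly 25% of this syllable's trials.
        omitted_flags = [True] * n_omitted + [False] * (n_repeats - n_omitted)
        random.shuffle(omitted_flags)
        for omitted in omitted_flags:
            condition = f"{category}_{'omitted' if omitted else 'present'}"
            trials.append((syllable, None if omitted else picture, condition))
    # Unpaired syllables are never followed by a picture.
    for syllable in unpaired:
        trials.extend((syllable, None, "unpaired") for _ in range(n_repeats))
    random.shuffle(trials)
    return trials


trials = build_test_phase()
print(trials[:5])
```

The resulting condition labels (e.g., face_omitted vs. face_present) correspond to the comparison of interest in the abstract: whether EEG activity on omission trials resembles the category-specific response observed when the anticipated picture is actually shown.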
