September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Visual statistical learning faces interference from response and executive demands
Author Affiliations
  • Su Hyoun Park
    University of Delaware
  • Marian Berryhill
    University of Nevada, Reno
  • Jayesh Gupta
    University of Delaware
  • Timothy Vickery
    University of Delaware
Journal of Vision August 2017, Vol.17, 959. doi:

Associative learning of predictive relationships among visual stimuli is often referred to as "visual statistical learning" (VSL). VSL occurs in the absence of explicit awareness of such contingencies, and is thus often cast as reflecting a continuously occurring process. We found evidence that VSL faces interference from minor variations in response and executive demands, challenging the notion that VSL is a low-level, perceptual phenomenon. In Experiment 1, participants monitored a stream of face and scene images (male/female and indoor/outdoor) and responded whenever an image flickered. Sixteen AB pairs were repeated throughout the stream, such that A was 100% predictive of B. To examine the role of categorical boundaries in VSL, pairs were formed such that the items shared subcategorical status (e.g., male→male), shared categorical but not subcategorical status (indoor→outdoor), or crossed category boundaries (female→outdoor). In a surprise recognition phase, participants were forced to choose the more familiar of two pairings (target vs. foil pair). Participants performed above chance and were equally proficient across the different types of pairings, suggesting that visual categorical differences played little role in learning. Experiment 2 was identical, except that participants performed a categorization task during training, pressing one button if the image was a female face or an indoor scene, and a different button if it was a male face or an outdoor scene. Participants recognized pairs at above-chance rates overall, but were significantly less likely to recognize pairs that required different responses, even when the items shared a category (e.g., male→female), and also less likely to recognize pairs that involved changing task rules (e.g., male→outdoor). These results suggest that VSL is subject to interference from high-level demands, such as the requirement to make different responses across sequential trials. These findings are consistent with the interpretation that VSL, as indexed by recognition, shares mechanisms with general associative learning.

Meeting abstract presented at VSS 2017

