September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2018
How abstract are the representations derived from visual statistical learning?
Author Affiliations
  • Su Hyoun Park
    Department of Psychological and Brain Sciences, University of Delaware
  • Leeland Rogers
    Department of Psychological and Brain Sciences, University of Delaware
  • Timothy Vickery
    Department of Psychological and Brain Sciences, University of Delaware
Journal of Vision September 2018, Vol.18, 1310. doi:10.1167/18.10.1310
Su Hyoun Park, Leeland Rogers, Timothy Vickery; How abstract are the representations derived from visual statistical learning? Journal of Vision 2018;18(10):1310. doi: 10.1167/18.10.1310.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Learners can extract regularities in an environment even without explicit cues to structure and in the absence of instruction, a capacity termed "statistical learning." Previous studies of statistical learning have mainly focused on the types of statistical relationships that are learned in various contexts; less work has probed the nature of the resulting representations. In three experiments, we found evidence that visual statistical learning (VSL) can produce flexible and abstract representations. In all experiments, participants passively viewed a sequence of novel shapes that always appeared as part of a triplet sequence (e.g., ABC). In a subsequent phase, participants completed a forced-choice recognition task, choosing between exposed triplets (e.g., ABC) and corresponding foil triplets (e.g., AEI), or between embedded pairs (e.g., AB, BC, and AC) and foil pairs (e.g., AE). We compared recognition of non-adjacent items (i.e., AC pairs) and of completely randomized orderings of the target triplets or pairs (e.g., ACB, BAC, and CAB) both to chance and to recognition of adjacent items and correctly ordered shape sequences. Accuracy for all target triplets and pairs was significantly above chance (0.5), and learning occurred for non-adjacent items as well. Accuracy did not differ between triplets, AB pairs, and BC pairs, but accuracy for AC pairs was significantly lower than in the three other conditions. In addition, accuracy for all possible orderings of target triplets and pairs was significantly above chance, with no difference between canonical orderings and their corresponding randomized orderings. Our work demonstrates robust and replicable learning of remote pairs of items and shows that VSL supports learning that abstracts over the initially presented orderings, consistent with the formation of flexible and abstract representations during VSL.

Meeting abstract presented at VSS 2018
