September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Arbitrary Groupings Modulate Visual Statistical Learning
Author Affiliations & Notes
  • Leeland L Rogers
    University of Delaware
  • Su Hyoun Park
    University of Delaware
  • Timothy J Vickery
    University of Delaware
Journal of Vision September 2019, Vol. 19, 232.

      Leeland L Rogers, Su Hyoun Park, Timothy J Vickery; Arbitrary Groupings Modulate Visual Statistical Learning. Journal of Vision 2019;19(10):232.


      © ARVO (1962-2015); The Authors (2016-present)


Visual statistical learning (VSL) occurs when stimuli consistently co-occur in space or time: following such exposure, subjects recognize these associations even in the absence of awareness. The roles of stimulus similarity, categories, and task in shaping VSL are understudied and poorly understood. In previous work from our lab (Vickery, Park, Gupta, & Berryhill, 2018), we found that subjects learned same-category pairings better than different-category pairings during exposure to temporal VSL streams containing such contingencies, but only when the task was to categorize the stimuli. It was unclear, however, whether visual similarity or categorization played the predominant role in this effect. In the current work, participants viewed a stream of either fractal images (Experiment 1) or face and scene images (Experiment 2). The stream was composed of AB pairs of stimuli, such that image A always preceded image B within the stream. Subjects were instructed to learn arbitrary group mappings, with half of the images assigned to one group (‘z’ key response) and the other half to the other (‘m’ key response). Half of the pairs were within-group and the other half were between-group. In Experiment 1, subjects showed much greater recognition for within- than between-group pairings (p < .01), even though similarity was equated by random assignment of fractal images to pairings and groups. Experiment 2 replicated this effect (p < .001). In addition, Experiment 2 pairs were composed equally of same- and different-natural-category (face or scene) images, divided equally between same and different arbitrary groupings (‘z’ and ‘m’). Natural categories still played a role (p < .001), with better recognition for within-category pairings (and no interaction between grouping and category). Our results strongly suggest that both arbitrary groupings and natural categories play strong roles in determining the strength of VSL.
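The pairing structure described above can be sketched in code. The following is a hypothetical Python generator, not the authors' actual stimulus code: the function name, the counts (8 pairs, 10 stream repetitions), and the use of integer IDs in place of images are all illustrative assumptions. It captures the three constraints the abstract states: images are bound into fixed AB pairs, half of the images map to each arbitrary response group, and half of the pairs are within-group while half are between-group.

```python
import random

def make_vsl_stream(n_pairs=8, n_repeats=10, seed=0):
    # Illustrative sketch only; counts and parameters are assumptions,
    # not values reported in the abstract.
    assert n_pairs % 4 == 0, "need equal within-'z', within-'m', and between counts"
    rng = random.Random(seed)
    n_images = 2 * n_pairs
    images = list(range(n_images))  # stand-ins for fractal/face/scene images
    rng.shuffle(images)

    # Arbitrary group mapping: half of the images respond 'z', half 'm'.
    z_imgs, m_imgs = images[:n_images // 2], images[n_images // 2:]
    group = {img: 'z' for img in z_imgs}
    group.update({img: 'm' for img in m_imgs})

    # Half of the AB pairs are within-group, half between-group.
    q = n_pairs // 4
    pairs = (
        [(z_imgs[2 * i], z_imgs[2 * i + 1]) for i in range(q)]   # within 'z'
        + [(m_imgs[2 * i], m_imgs[2 * i + 1]) for i in range(q)]  # within 'm'
        + list(zip(z_imgs[2 * q:], m_imgs[2 * q:]))               # between
    )

    # Build the temporal stream: image A always immediately precedes its B.
    stream = []
    for _ in range(n_repeats):
        rng.shuffle(pairs)  # pair order varies, pair structure does not
        for a, b in pairs:
            stream.extend([a, b])
    return stream, pairs, group

stream, pairs, group = make_vsl_stream()
```

With these illustrative defaults, the stream contains 8 pairs × 2 images × 10 repetitions = 160 presentations, and exactly half of the pairs are within-group.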

Acknowledgement: NSF OIA 1632849, NSF BCS 1558535
