September 2005
Volume 5, Issue 8
Vision Sciences Society Annual Meeting Abstract
Visual statistical learning through intervening noise
Author Affiliations
  • Justin A. Junge
    Yale University
  • Nicholas B. Turk-Browne
    Yale University
  • Brian J. Scholl
    Yale University
Journal of Vision September 2005, Vol.5, 421. doi:10.1167/5.8.421

      Justin A. Junge, Nicholas B. Turk-Browne, Brian J. Scholl; Visual statistical learning through intervening noise. Journal of Vision 2005;5(8):421.

      © ARVO (1962-2015); The Authors (2016-present)


A primary goal of visual processing is to extract statistical regularities from the environment in both space and time, and recent research on visual statistical learning (VSL) has demonstrated that this extraction can occur rapidly for even subtle correlations in homogeneous streams of stimuli. In the real world, however, most regularities do not exist in isolation, but rather are embedded in noisy and heterogeneous input streams. To explore VSL in such contexts, we measured subjects' ability to extract statistical regularities in time through intervening distractors, in a stream of shapes appearing one at a time. Novel shapes were randomly assigned to one of two color groups and within each group they were clustered into temporal ‘triplets’ — three shapes that always appeared in the same order. Shapes from both color groups were then randomly interleaved, maintaining triplet order (e.g. triplets abc in red, and XYZ in green, presented in stream aXbcYZ). Subjects were instructed to perform a repetition detection task for shapes in just one color for 20 min, and were then given an unexpected forced-choice recognition task (without color cues) pitting triplets against random sequences of 3 shapes (from that same color group). This test revealed robust VSL for triplets despite the pervasive interruption by shapes from the other color. This VSL was replicated even with more tightly constrained interleaving, such that no triplet ever occurred without at least one interruption. Additional experiments explore (1) whether ‘interrupted’ VSL of triplets can occur even in the absence of any uninterrupted pairs (aXbYcZ), and (2) whether interrupted VSL occurs even when there are no extrinsic cues (such as color) to distinguish the relevant and irrelevant items.
Overall, these demonstrations of VSL through intervening noise suggest that statistical learning may ‘scale up’ to more real-world contexts wherein we encounter a constantly shifting array of objects, only some of which are related.
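The interleaving procedure described above can be sketched in code. This is a hypothetical reconstruction, not the authors' actual stimulus-generation code: the shape labels, stream length, and coin-flip interleaving rule are illustrative assumptions. The key property it preserves is the one the abstract specifies — shapes within each color group always appear in fixed triplet order, while the two color streams interrupt each other at unpredictable points.

```python
import random

def make_stream(red_triplets, green_triplets, n_per_color=1):
    """Randomly interleave two color streams while preserving
    within-triplet order (e.g. abc and XYZ -> aXbcYZ)."""

    def color_sequence(triplets, n):
        # Triplets are drawn in random order, but the three shapes
        # within each triplet always keep their fixed order.
        seq = []
        for triplet in random.choices(triplets, k=n):
            seq.extend(triplet)
        return seq

    red = color_sequence(red_triplets, n_per_color)
    green = color_sequence(green_triplets, n_per_color)

    # Interleave by randomly drawing the next shape from either
    # color stream, so triplets are interrupted unpredictably.
    stream = []
    while red or green:
        take_red = red and (not green or random.random() < 0.5)
        stream.append(red.pop(0) if take_red else green.pop(0))
    return stream

# Using the abstract's notation: red triplet abc, green triplet XYZ.
stream = make_stream([list("abc")], [list("XYZ")])
```

Filtering the resulting stream by color always recovers the intact triplet sequence for that color, which is the regularity the recognition test probes.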

Junge, J. A., Turk-Browne, N. B., & Scholl, B. J. (2005). Visual statistical learning through intervening noise [Abstract]. Journal of Vision, 5(8):421, 421a, doi:10.1167/5.8.421. [CrossRef]
 Supported by NSF #BCS-0132444.
