Vision Sciences Society Annual Meeting Abstract | September 2005
Attention and automaticity in visual statistical learning
Author Affiliations
  • Nicholas B. Turk-Browne, Yale University
  • Justin A. Junge, Yale University
  • Brian J. Scholl, Yale University
Journal of Vision September 2005, Vol. 5, 1067. https://doi.org/10.1167/5.8.1067
Abstract

We typically think of vision as the recovery of increasingly rich information about individual objects, but there are also massive amounts of information about relations between objects in space and time. Recent studies of visual statistical learning (VSL) have suggested that this information is implicitly and automatically extracted by the visual system. Here we explore this possibility by evaluating the degree to which VSL of temporal regularities (Fiser & Aslin, 2002) is influenced by attention. Observers viewed a 6-min sequence of geometric shapes, appearing one at a time in the same location every 400 ms. Half of the shapes were red and half were green, with a separate pool of shapes for each color. The sequence of shapes was constructed by randomly intermixing a stream of red shapes with a stream of green shapes. Unbeknownst to observers, the color streams were constructed from sub-sequences (or ‘triplets’) of three shapes that always appeared in succession; these triplets comprised the temporal statistical regularities to be learned. Attention was manipulated by having observers detect shape repetitions in one of the two colors. In a surprise forced-choice familiarity test, triplets from both color streams (now in black) were pitted against foil triplets composed of shapes of the same color. If VSL is preattentive, then observers should have been able to pick out the real triplets from both streams equally well. Surprisingly, however, they learned the temporal regularities only in the attended color stream. Further experiments that improved learning of the attended stream failed to elicit commensurate improvements for the unattended stream. We conclude that while VSL is certainly implicit (because it occurred during a secondary task), it is not a completely data-driven process, since it appears to be gated by selective attention. The mechanics of VSL may thus be automatic, with top-down selective attention dictating the populations of stimuli over which VSL operates.
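To make the stream construction concrete, here is a minimal sketch of how such a sequence might be generated. Everything in it (the pool sizes, the numbers of triplets and repetitions, the coin-flip interleaving, and all function and variable names) is an illustrative assumption rather than the study's actual procedure, and the repeat trials used for the detection task are omitted.

```python
import random

def make_triplets(shapes, n_triplets=4):
    """Partition a pool of shapes into fixed triplets; these intact
    three-shape runs are the temporal regularities to be learned."""
    shapes = list(shapes)          # copy so the caller's pool is untouched
    random.shuffle(shapes)
    return [tuple(shapes[i * 3:(i + 1) * 3]) for i in range(n_triplets)]

def make_stream(triplets, n_repetitions=20):
    """Concatenate the triplets in random order; each triplet always
    appears as an unbroken run within its own stream."""
    order = triplets * n_repetitions
    random.shuffle(order)
    return [shape for triplet in order for shape in triplet]

# Separate shape pools for each color (here, 12 shapes -> 4 triplets each).
red_shapes   = [f"red_{i}" for i in range(12)]
green_shapes = [f"green_{i}" for i in range(12)]

red_stream   = make_stream(make_triplets(red_shapes))
green_stream = make_stream(make_triplets(green_shapes))

# Randomly interleave the two color streams into one presentation sequence,
# preserving within-color order so each triplet stays intact in its stream.
sequence = []
red, green = red_stream[:], green_stream[:]
while red or green:
    source = red if (red and (not green or random.random() < 0.5)) else green
    sequence.append(source.pop(0))
```

The key property of this construction is that the interleaving preserves within-color order: even though the two colors are mixed in time, each triplet still always appears in succession within its own color, so the triplet statistics remain well defined per stream.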

Turk-Browne, N. B., Junge, J. A., & Scholl, B. J. (2005). Attention and automaticity in visual statistical learning [Abstract]. Journal of Vision, 5(8):1067, 1067a, http://journalofvision.org/5/8/1067/, doi:10.1167/5.8.1067.
Footnotes
 Supported by NSF #BCS-0132444