Vision Sciences Society Annual Meeting Abstract  |   June 2006
The space-time continuum: Spatial visual statistical learning produces temporal processing advantages
Author Affiliations
  • Nicholas B. Turk-Browne
    Yale University
  • Brian J. Scholl
    Yale University
Journal of Vision, June 2006, Vol. 6, 676. https://doi.org/10.1167/6.6.676
Abstract

A central task of vision is to parse undifferentiated input into discrete objects and groups. Visual statistical learning (VSL) may provide an implicit mechanism for such segmentation via the extraction of covariance between features, parts, and groups. However, because the stimuli in previous VSL studies were identical during training and test, it is unclear what is really being learned: the resulting representations could incorporate all visual details of the learning context, or could be more abstract. We have been exploring such issues using ‘transfer’ designs in which the stimuli differ between training and test. Persistent VSL across changes along a given dimension indicates that such information is not intrinsic to the resulting representation. Here, we report one of the most extreme possible cases of transfer: from space to time. Observers viewed a seven-minute sequence of spatial grids, each containing several adjacent shapes. Unbeknownst to observers, the shapes were arranged in fixed spatial configurations that could only be segmented into discrete pairs on the basis of covariance. In a subsequent test, observers performed a target detection task on rapid sequences of shapes presented centrally, one at a time. Targets preceded by their spatial mates were detected more quickly than targets preceded by shapes with which they were not paired during training. Because there was no temporal information during training, and no spatial information during test, these results provide striking evidence for a purely associative component of VSL, and also highlight the incredible flexibility of such learning.

Turk-Browne, N. B., & Scholl, B. J. (2006). The space-time continuum: Spatial visual statistical learning produces temporal processing advantages [Abstract]. Journal of Vision, 6(6):676, 676a, http://journalofvision.org/6/6/676/, doi:10.1167/6.6.676.
Footnotes
 (BJS was supported by NSF #0132444. NTB was supported by an NSERC PGS-D award.)