Nicholas B. Turk-Browne, Brian J. Scholl; The space-time continuum: Spatial visual statistical learning produces temporal processing advantages. Journal of Vision 2006;6(6):676. doi: 10.1167/6.6.676.
A central task of vision is to parse undifferentiated input into discrete objects and groups. Visual statistical learning (VSL) may provide an implicit mechanism for such segmentation via the extraction of covariance between features, parts, and groups. However, because the stimuli in previous VSL studies were identical during training and test, it is unclear what is really being learned: the resulting representations could incorporate all visual details of the learning context, or could be more abstract. We have been exploring such issues using 'transfer' designs in which the stimuli differ between training and test. Persistent VSL across changes along a given dimension indicates that such information is not intrinsic to the resulting representation. Here, we report one of the most extreme possible cases of transfer: from space to time. Observers viewed a seven-minute sequence of spatial grids, each containing several adjacent shapes. Unbeknownst to observers, the shapes were arranged in fixed spatial configurations that could be segmented into discrete pairs only on the basis of covariance. In a subsequent test, observers performed a target detection task on rapid sequences of shapes presented centrally, one at a time. Targets preceded by their spatial mates were detected more quickly than targets preceded by shapes with which they were not paired during training. Because there was no temporal information during training, and no spatial information during test, these results provide striking evidence for a purely associative component of VSL, and also highlight the remarkable flexibility of such learning.