Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Temporal hierarchies in visual statistical learning: Behavioral, neuroimaging, and neural network modeling investigations
Author Affiliations
  • Cybelle Smith
    University of Pennsylvania
  • Anna Schapiro
    University of Pennsylvania
  • Sharon Thompson-Schill
    University of Pennsylvania
Journal of Vision October 2020, Vol. 20, 1393. doi: https://doi.org/10.1167/jov.20.11.1393

      Cybelle Smith, Anna Schapiro, Sharon Thompson-Schill; Temporal hierarchies in visual statistical learning: Behavioral, neuroimaging, and neural network modeling investigations. Journal of Vision 2020;20(11):1393. https://doi.org/10.1167/jov.20.11.1393.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

How does the brain encode visual context at different temporal scales? When processing familiar sensory and semantic input, cortex is sensitive to input progressively further into the past along a posterior-to-anterior gradient (Hasson et al., 2015). To investigate how we learn new hierarchical temporal structure in the visual domain, we designed a novel statistical learning paradigm that can be used to map neural contributions to contextual representation at different timescales. Across four behavioral experiments (N = 72), we demonstrate that humans are sensitive to transition points between both low- and high-level sequential units during exposure to sequences of abstract images (fractals). However, these results may be attributable to low-level learning of image trigrams. We therefore altered the paradigm to more effectively disentangle learning of nested order information at slow and fast temporal scales. One of eight context cue images is presented multiple times, and embedded in this stream are paired associate images. Critically, pairwise contingencies depend on both the identity of the context cue (fast timescale) and the time since the previous context shift (slow timescale). We have found that multi-layer recurrent neural networks trained to predict the upcoming image in this paradigm encode order information at shorter timescales in lower layers (those closer to the perceptual input). However, only neural architectures that can remember further into the past (i.e., those using long short-term memory units rather than simple recurrent units) can learn the slow temporal structure. Planned neuroimaging work will test the idea that brain regions similarly segregate these timescales spatially. In particular, we anticipate that the hippocampus will represent these hierarchical timescales along an anterior-posterior gradient and that prefrontal cortical regions will be engaged along a lateral-medial gradient.
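The modeling comparison lends itself to a compact sketch. Below is a minimal, illustrative reconstruction in PyTorch, not the authors' code: a toy stream with the nested fast/slow contingency structure described above, and two next-image prediction networks that differ only in their recurrent cell (simple recurrent units vs. LSTM units). All vocabulary sizes, block lengths, contingency rules, and training settings here are assumptions made for illustration.

```python
# Minimal sketch (assumed details, not the authors' implementation):
# an SRN vs. an LSTM trained to predict the next image in a stream whose
# pairwise contingencies depend on a context cue (fast timescale) and on
# time elapsed since the last context shift (slow timescale).
import torch
import torch.nn as nn

N_CUES, N_PAIRS = 8, 8          # assumed vocabulary: 8 context cues, 8 pair images
VOCAB = N_CUES + N_PAIRS

def make_stream(n_blocks=200, block_len=12, seed=0):
    """Generate a toy stream: one cue repeats throughout a block, interleaved
    with paired-associate images; which pair image occurs depends on the cue
    identity (fast) AND on position within the block (slow)."""
    g = torch.Generator().manual_seed(seed)
    seq = []
    for _ in range(n_blocks):
        cue = torch.randint(N_CUES, (1,), generator=g).item()
        for t in range(block_len // 2):
            # toy contingency: pair identity shifts halfway through the block,
            # so correct prediction requires tracking time since the context shift
            pair = (cue + (0 if t < block_len // 4 else N_PAIRS // 2)) % N_PAIRS
            seq += [cue, N_CUES + pair]
    return torch.tensor(seq)

class NextImageRNN(nn.Module):
    def __init__(self, cell="lstm", hidden=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        # nn.RNN (tanh units) stands in for a simple recurrent network (SRN)
        rnn_cls = nn.LSTM if cell == "lstm" else nn.RNN
        self.rnn = rnn_cls(32, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

def train(cell, steps=300):
    seq = make_stream()
    x, y = seq[:-1].unsqueeze(0), seq[1:].unsqueeze(0)  # next-image prediction
    model = NextImageRNN(cell)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x).transpose(1, 2), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

for cell in ("srn", "lstm"):
    print(cell, train(cell))
```

Under this toy setup, the gated LSTM can retain the context-shift history that the slow contingency requires, whereas the simple recurrent cell tends to capture only the fast, cue-conditioned structure, mirroring the architectural contrast reported in the abstract.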
