September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Changing object representations during visual production training
Author Affiliations
  • Jeffrey Wammes
    Psychology, Yale University
  • Judith Fan
    Psychology, Stanford University; Psychology, Princeton University
  • Rachel Lee
    Neuroscience Institute, Princeton University
  • Jordan Gunn
    Neuroscience Institute, Princeton University
  • Daniel Yamins
    Psychology, Stanford University
  • Kenneth Norman
    Psychology, Princeton University; Neuroscience Institute, Princeton University
  • Nicholas Turk-Browne
    Psychology, Yale University
Journal of Vision September 2018, Vol.18, 763. doi:10.1167/18.10.763
Abstract

Drawing is a powerful tool for encoding object structure. In prior work, we found that training people to repeatedly draw certain objects reduces feature overlap in their drawings and leads to improved categorical perception of these objects. Here, we used fMRI to test the hypothesis that such effects reflect competitive dynamics during visual production. Specifically, competition may lead to differentiation of the neural object representations, particularly in medial temporal lobe (MTL) regions that encode object memories. Participants were scanned during three experimental phases: pre-training, drawing training, and post-training. During the drawing training phase, participants alternated between drawing two related objects (e.g., table, bed) on an MRI-compatible tablet. During the pre- and post-training phases, they repeatedly viewed these objects, along with two control objects (e.g., chair, bench). We predicted that repeated drawing of the two trained objects would elicit concurrent activation of their representations in MTL subregions, reflecting competition. We further predicted that this competition would result in subsequent differentiation of the trained object representations. To evaluate these predictions, we first fit a GLM to the pre-training phase to generate neural template representations for each object, containing the distributed pattern of parameter estimates over voxels within ROIs. We then fit an analogous GLM to the drawing phase, estimating activity at every timepoint of each drawing trial in each of the four training runs. Based on these run-specific timecourses, we measured the relative expression of the neural template representations of the trained objects using pattern similarity. Our preliminary results are consistent with a link between competition during training and differentiation: the extent to which both trained objects are co-activated during drawing is associated with differentiation in MTL, but not in lower-level visual, subregions.
Together, this work provides new insight into the neural mechanisms by which visual production training can refine object representations.
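The pattern-similarity step described above can be illustrated with a minimal sketch. This is not the authors' actual analysis pipeline: it assumes the pre-training GLM yields one template voxel pattern per trained object and the drawing-phase GLM yields a timepoint-by-voxel matrix for an ROI, and it computes the relative expression of the two templates at each timepoint as a difference of Pearson correlations. All array names and shapes here are hypothetical.

```python
import numpy as np

def pattern_similarity(template: np.ndarray, timepoint: np.ndarray) -> float:
    """Pearson correlation between a template voxel pattern and a
    single-timepoint voxel pattern (both 1-D arrays of length n_voxels)."""
    return float(np.corrcoef(template, timepoint)[0, 1])

def relative_expression(template_a: np.ndarray,
                        template_b: np.ndarray,
                        timecourse: np.ndarray) -> np.ndarray:
    """For each timepoint (rows of `timecourse`, shape [n_timepoints, n_voxels]),
    return similarity to template A minus similarity to template B.
    Positive values indicate stronger expression of object A's template."""
    return np.array([
        pattern_similarity(template_a, tp) - pattern_similarity(template_b, tp)
        for tp in timecourse
    ])

# Toy example with synthetic data: 10 timepoints, 50 voxels.
rng = np.random.default_rng(0)
template_a = rng.standard_normal(50)
template_b = rng.standard_normal(50)
timecourse = rng.standard_normal((10, 50))
rel = relative_expression(template_a, template_b, timecourse)
print(rel.shape)  # one relative-expression value per timepoint
```

In this framing, concurrent activation of both trained objects would appear as both correlations being elevated at the same timepoints, which could then be related across participants or runs to a separate measure of representational differentiation.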

Meeting abstract presented at VSS 2018
