September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Synthesizing images with deep neural networks to manipulate representational similarity and induce representational change
Author Affiliations & Notes
  • Jeffrey D Wammes
    Department of Psychology, Yale University
  • Kenneth A Norman
    Department of Psychology, Princeton University
    Princeton Neuroscience Institute, Princeton University
  • Nicholas B Turk-Browne
    Department of Psychology, Yale University
Journal of Vision September 2019, Vol.19, 202d. doi:https://doi.org/10.1167/19.10.202d
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Humans have a seemingly limitless capacity for learning, despite having finite neural real estate. Consequently, different pieces of information must be represented in overlapping neural populations. Prior work has shown that statistical learning can affect this overlap, both increasing (i.e., integration) and decreasing (i.e., differentiation) representational overlap in the hippocampus (Schapiro et al., 2012). Whether representations integrate or differentiate may depend on their initial degree of overlap, with high overlap leading to integration and moderate overlap leading to differentiation. Here we report an approach for controlling neural overlap in specific visual regions, in order to manipulate whether statistical learning causes integration or differentiation. Pairs of images were synthesized using a convolutional neural network (CNN) pre-trained for object recognition. Each pair was generated to achieve a specified similarity level, operationalized as the correlation between unit activities in later model layers coding for higher-order visual features. To validate the approach, human participants sorted images according to visual similarity, and model-defined visual similarity was correlated with the resulting pairwise distances. We then chose eight pairs of images, varying parametrically in model similarity, and embedded the pairs in a statistical learning paradigm during fMRI. Before and after learning, we extracted patterns of voxel activity for each of the 16 images. A searchlight analysis revealed clusters in lateral occipital cortex and parahippocampal gyrus where neural pattern similarity tracked our predefined model similarity parametrically, indicating that we were able to control neural overlap with model-based image synthesis. Following learning, we found that hippocampal representations of moderately similar image pairs differentiated from one another, whereas highly similar image pairs integrated with one another.
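The model-defined similarity measure described above (the correlation between two images' unit activities in a later CNN layer) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `activations_fn` is a hypothetical stand-in for a forward pass through a pre-trained network, and the toy perturbation loop in `synthesize_pair` merely stands in for the actual synthesis procedure, which would presumably optimize image pixels through the network to hit the target similarity.

```python
import numpy as np

def model_similarity(act_a, act_b):
    """Model-defined similarity: Pearson correlation between two
    images' unit activities in a chosen CNN layer (flattened)."""
    a = act_a.ravel() - act_a.mean()
    b = act_b.ravel() - act_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def synthesize_pair(img_a, img_b, activations_fn, target, tol=0.05,
                    step=0.1, max_iter=200):
    """Toy stand-in for pair synthesis: nudge image B toward or away
    from image A until the pair's layer-activation correlation falls
    within `tol` of the specified `target` similarity."""
    b = img_b.copy()
    sim = model_similarity(activations_fn(img_a), activations_fn(b))
    for _ in range(max_iter):
        if abs(sim - target) < tol:
            break
        # Move B toward A when similarity is below target, away when above.
        b = b + np.sign(target - sim) * step * (img_a - b)
        sim = model_similarity(activations_fn(img_a), activations_fn(b))
    return b, sim
```

With `activations_fn` replaced by a real forward pass to a late convolutional layer, target similarities could be spaced parametrically to yield graded pairs like the eight used in the study.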
We are now conducting follow-up fMRI studies to test the efficacy of model-based image synthesis at targeting various levels of the visual processing hierarchy, as well as behavioral experiments to test the consequences for perception and memory.

Acknowledgement: This work was supported by NIH R01 MH069456 and an NSERC postdoctoral fellowship (JDW). 