Vision Sciences Society Annual Meeting Abstract  |  December 2022  |  Volume 22, Issue 14  |  Open Access
Inherent representations of contextual associations in neural networks and human behavior
Author Affiliations & Notes
  • Elissa Aminoff
    Fordham University
  • Shira Baror
    Fordham University
    New York University
  • Eric Roginek
    Fordham University
  • Daniel Leeds
    Fordham University
  • Footnotes
    Acknowledgements  This work was supported by a Fordham University Interdisciplinary Grant.
Journal of Vision December 2022, Vol. 22, 4207. https://doi.org/10.1167/jov.22.14.4207
Abstract

Contextual associations play an important role in human vision and understanding. For example, objects that are contextually congruent with their environment are recognized faster. Do these same contextual associations play a role in artificial vision? If so, we would expect contextual associations between objects (e.g., tent – sleeping bag) to be embedded in the object representations of a convolutional neural network (CNN) designed for object recognition, even when the network is not explicitly trained to recognize contextual associations. To test this, we examined the similarity of CNN representations for contextually related object pairs (N=73). Stimuli were photographs of objects presented against a white background, ensuring that contextual associations were not merely a product of background information. At each layer of the CNN, we compared the similarity of object representations for contextually related pairs and for unrelated pairs. We found that at almost every layer of the CNN (except the first), contextually related objects had more similar representations across the units of the CNN than unrelated objects did. This held across all 10 CNNs tested, which varied in number of layers, training data, and recurrent versus non-recurrent architecture. Critically, we compared these context representations to human behavior to determine whether the contextual associations represented in a CNN are relevant to human vision. The similarity of object representations driven by contextual associations correlated with human judgments of the relatedness of object pairs: the more similar two objects' representations in the CNN, the faster and more often humans labeled them as contextually related. Most interestingly, although context was represented across almost all layers of the CNN, the correlation with behavior emerged only in the later layers. This dissociation in the model-behavior correlation suggests that only high-level, complex regularities relating to context are relevant to human behavior.
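
The analysis pipeline described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' code: it assumes torchvision's AlexNet as a stand-in for one of the ten CNNs, cosine similarity as the layer-wise similarity measure, Spearman correlation for the model-behavior comparison, and hypothetical image filenames; the abstract specifies none of these choices.

```python
import statistics

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained AlexNet as an assumed stand-in for one of the ten CNNs tested.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Capture the activation at every ReLU layer with forward hooks.
activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.flatten().detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.ReLU):
        module.register_forward_hook(make_hook(name))

def layer_vectors(image_path):
    """Per-layer activation vectors for one white-background object photo."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(img)
    return dict(activations)

def pair_similarity(path_a, path_b):
    """Cosine similarity of two objects' representations at each layer."""
    va, vb = layer_vectors(path_a), layer_vectors(path_b)
    return {layer: torch.nn.functional.cosine_similarity(
                va[layer], vb[layer], dim=0).item()
            for layer in va}

# Hypothetical filenames; the study used 73 related pairs plus
# unrelated control pairs.
related_pairs = [("tent.jpg", "sleeping_bag.jpg")]
unrelated_pairs = [("tent.jpg", "stapler.jpg")]

related_sims = [pair_similarity(a, b) for a, b in related_pairs]
unrelated_sims = [pair_similarity(a, b) for a, b in unrelated_pairs]

# First analysis: related vs. unrelated similarity at each layer.
for layer in related_sims[0]:
    diff = (statistics.mean(s[layer] for s in related_sims)
            - statistics.mean(s[layer] for s in unrelated_sims))
    print(f"{layer}: related minus unrelated similarity = {diff:+.3f}")

# Second analysis: correlate per-pair similarity at a given layer with
# behavioral data (e.g., mean reaction times in the relatedness task):
# from scipy.stats import spearmanr
# rho, p = spearmanr([s["classifier.5"] for s in related_sims], mean_rts)
```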
