Abstract
Contextual associations play a significant role in facilitating object recognition in human vision. However, the role of contextual information in artificial vision remains elusive. We aimed to examine whether contextual associations are represented in an artificial neural network and, if so, at what layers they emerge. To address this, we examined whether objects that share contextual associations (e.g., bicycle-helmet) are represented more similarly in convolutional neural networks than objects that do not share the same context (e.g., bicycle-fork), and where in the network these context-based representational similarities arise. As a comparison, we also examined the representational similarity of objects that belong to the same category (e.g., two different shoes) in contrast to objects that do not share the same category (e.g., shoe-brush). In a VGG16 network trained on ImageNet for object categorization, representational similarity among objects that share a context (N = 70) was substantially higher than similarity among objects that do not share a context. Representational similarities were computed as the correlation between unit responses to pairs of images in and out of context (or, as a comparison, in and out of category). This context-based rise in similarity emerged at very early layers of the network, remarkably, at the same layer at which category-based similarity first emerged. Category-based similarity was significantly larger than context-based similarity throughout the network. Pixel-level similarities between contextually paired objects were no greater than between objects that do not share a context, indicating that the effect is not driven by low-level image similarity. Thus, even though the network was trained for categorical object recognition, contextual relationships were evident across early, mid, and late layers. This suggests that context is inherently preserved and represented across the network and may play a critical role in facilitating object recognition in both humans and artificial models.
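
To make the similarity measure concrete, the sketch below illustrates one way such layer-wise correlations could be computed. It is a minimal illustration, not the authors' analysis pipeline: it assumes a torchvision VGG16 with ImageNet weights, and the image tensors (e.g., bicycle_img, helmet_img) are hypothetical placeholders for preprocessed 224x224 inputs.

```python
# Minimal sketch (assumed setup, not the authors' code): layer-wise Pearson
# correlation between unit responses of a pretrained VGG16 to two images.
import torch
import numpy as np
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def layer_activations(x):
    """Flattened activations of each conv and linear layer for one
    preprocessed image tensor x of shape (1, 3, 224, 224)."""
    acts = []
    with torch.no_grad():
        for layer in model.features:
            x = layer(x)
            if isinstance(layer, torch.nn.Conv2d):
                acts.append(x.flatten().numpy())
        x = model.avgpool(x).flatten(1)
        for layer in model.classifier:
            x = layer(x)
            if isinstance(layer, torch.nn.Linear):
                acts.append(x.flatten().numpy())
    return acts

def layerwise_similarity(img_a, img_b):
    """Pearson correlation between unit responses to two images, per layer."""
    return [np.corrcoef(a, b)[0, 1]
            for a, b in zip(layer_activations(img_a), layer_activations(img_b))]

# Hypothetical usage: compare a contextually related pair (bicycle-helmet)
# with an unrelated pair (bicycle-fork) at every layer of the network.
# sim_context    = layerwise_similarity(bicycle_img, helmet_img)
# sim_no_context = layerwise_similarity(bicycle_img, fork_img)
```

Averaging such correlations over many in-context and out-of-context pairs at each layer would yield the layer-by-layer context effect described above; the same procedure applied to same-category versus different-category pairs would give the category-based comparison.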