Abstract
Objects in our everyday environment rarely appear in isolation; rather, they often appear together and interact in predictable ways. The representation of such contextual grouping is useful both in object recognition and in our subsequent interaction with the objects. Indeed, behavioral studies have demonstrated that object pairs are processed more efficiently when they appear together in a contextually consistent rather than inconsistent manner. However, the neural mechanism mediating such enhancement is still under debate. In the current study, using fMRI multi-voxel pattern analysis, we examined neural representations of contextually consistent and inconsistent object pairs in human retinotopically defined early visual areas as well as object processing regions in lateral and ventral visual cortex. In these pre-defined regions of interest, we obtained fMRI response patterns for two objects shown either in a contextually consistent manner (i.e., a cake above a cake stand, and a cooking pot above a burner) or in a contextually inconsistent manner (i.e., a cake above a burner, and a cooking pot above a cake stand). We also obtained fMRI response patterns for each object shown alone. We then linearly combined the patterns for the individual objects and tested the similarity between our synthesized two-object patterns and the actual two-object patterns using a linear classifier. In multiple ventral visual areas, preliminary results showed that the difference between the actual and the synthesized two-object patterns was greater for the contextually consistent than the contextually inconsistent object pairs. This suggests stronger nonlinear interaction between the two objects when they form a contextually consistent pair. These results illustrate one way in which contextual grouping may be represented in the human brain.
Meeting abstract presented at VSS 2015
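The pattern-synthesis logic described in the abstract can be illustrated with a toy numerical sketch. This is not fMRI data or the authors' analysis pipeline: the voxel patterns, the choice of the mean as the linear combination, the interaction strengths, and the use of correlation as the similarity measure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100  # hypothetical number of voxels in a region of interest

# Hypothetical single-object response patterns (one vector per object).
cake, stand, pot, burner = (rng.normal(size=n_voxels) for _ in range(4))

def synthesize(a, b):
    # Synthesized two-object pattern: a linear combination of the two
    # single-object patterns (here, simply their mean).
    return (a + b) / 2.0

def actual(a, b, interaction):
    # Simulated "actual" two-object pattern: the linear part plus a
    # nonlinear interaction term. A larger interaction term makes the
    # actual pattern deviate more from the synthesized one.
    return synthesize(a, b) + interaction * rng.normal(size=n_voxels)

# Assumed for illustration: stronger interaction for the consistent pair
# (cake + cake stand) than for the inconsistent pair (cake + burner).
consistent_actual = actual(cake, stand, 0.8)
inconsistent_actual = actual(cake, burner, 0.3)

def correlation(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# Lower synthesized-vs-actual similarity indicates a larger difference,
# i.e., more nonlinear interaction between the two objects.
sim_consistent = correlation(synthesize(cake, stand), consistent_actual)
sim_inconsistent = correlation(synthesize(cake, burner), inconsistent_actual)
print(sim_consistent < sim_inconsistent)
```

In this toy version, correlation stands in for the linear-classifier similarity test used in the study; the qualitative prediction is the same: the consistent pair's actual pattern departs more from its synthesized counterpart than the inconsistent pair's does.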