Abstract
Objects in our environment are normally clustered in specific contextual settings, such as a “beach” or an “office”. Our knowledge of these settings consists of semantic information about the typical members of a scene, as well as information about typical spatial relations among these members. Are these two sources of associative information linked within a unified representation, or are they represented independently?
We investigated this question using a priming task in which semantic and spatial relations between prime and target were independently manipulated: stimuli were either semantically congruent or incongruent, and were either properly or improperly positioned. Prime and target appeared successively, each for 250 ms, and each such pair was presented twice during the course of the experiment.
Results for the first presentation showed a behavioral semantic-priming effect, with a semantic × spatial interaction in fMRI activation within object-processing regions (i.e., greater activation for spatially valid than for invalid targets only in the semantically congruent condition, and greater activation for semantically congruent than for incongruent targets only in the spatially valid condition). In the second presentation, both behavioral and fMRI measures showed repetition-priming effects, each mediated by a semantic × spatial interaction.
These results indicate that object recognition benefits from contextual representations that bind spatial and semantic information and directly modulate activation in object-processing cortical regions (e.g., the lateral occipital complex and fusiform gyrus). Visual associative knowledge, therefore, facilitates perception by generating specific predictions about the identities of objects as well as their spatial relations within a scene.
Support: NINDS R01-NS044319 and NS050615, McDonnell Foundation 21002039, and the MIND Institute.