Abstract
Achieving a meaningful representation of the visual environment, one that can be useful for navigating, planning and acting, requires representing objects and the relations between them. Object recognition is known to be efficient, that is, fast and automatic; how fast and automatic is the processing of relations? We addressed this question, focusing on the fundamental relations of containment and support, using frequency-tagging electroencephalography (FT-EEG). FT-EEG makes it possible to pinpoint automatic, stimulus-locked responses. First, we tested, and demonstrated, that relations between multiple objects are accessed as quickly and automatically as the objects themselves. Twenty adults viewed a sequence of images showing object pairs at a base frequency of 2.5 Hz; after every four stimuli illustrating one relation (support: book on table, knife on chopping board), one oddball stimulus illustrating the other relation (containment: spoon in cup) appeared, at an oddball frequency of 0.625 Hz. EEG signals showed responses at both frequencies, indicating that participants processed each image and spontaneously detected the change in relation carried by the oddball stimuli. A control condition demonstrated that the oddball response was not due to the regular repetition of the same objects (spoon and cup). Because this effect was obtained with oddball stimuli involving (different instances of) the same objects (e.g., always a spoon in a cup), we next tested whether the same effect could be found when only the relation remained identical (e.g., containment), while the objects changed for every oddball stimulus (spoon in cup, fish in bowl). Here, the oddball response remained significant, demonstrating that it reflected encoding of the relation itself, regardless of the objects involved. Finally, the oddball response remained unchanged when participants were explicitly instructed to attend to the relation, indicating that the encoding of relations is independent of attention. We conclude that relations between objects are encoded rapidly and automatically upon stimulus presentation, in a manner that generalizes over a broad class of objects.