Abstract
High-level neural representation of the visual world is widely thought to be object-oriented: neural codes for different features (e.g., colour, shape) appear to be linked together, enabling representation of coherent objects. In addition to visual features, many objects (e.g., spiders, expressive faces) have salient emotional features, which are encoded by very different neural networks from those used to represent visual features. Are these emotional features also incorporated into object-level representations? To test this, we measured affective priming (AP) using a two-object display. AP is the faster judgement of the affective valence of a target (good versus bad) when it is immediately preceded by a prime sharing the same affective valence than when it is preceded by an oppositely valenced stimulus. Participants viewed two boxes on either side of fixation; one was filled briefly with an emotional prime (a spider or flower image), and then either it or the other box moved to the centre. A target image (a happy or angry face) was presented on the centre box, and the participant judged it as positive or negative as quickly as possible. Object-oriented perception of emotional information predicts AP only when target and prime appear on the same box, as opposed to different boxes. In contrast, automatic attitude activation theories of AP predict no difference between the same- and different-object conditions. In line with the object-oriented prediction, we found significant AP (i.e., participants responded faster to a target whose emotional valence was congruent with that of the prime) only when prime and target appeared on the same object. This finding indicates that even briefly presented emotional information can be incorporated into an otherwise neutral object's representation (the box) in such a way that it subsequently influences emotional judgements about that object. Emotional information thus appears to be included in object-level representations.
This work was supported by an ESRC studentship to CKB.