Abstract
Humans have the ability to process facial stimuli and compare them with facial information retrieved from memory. Cues in the input enable the observer to infer the signaller’s identity, age, gender and other characteristics. Reverse correlation methods can reveal the memorized information content, in the form of a classification image, that enables the perceiver to resolve these categorization tasks. The most typical method adds white noise to a fixed template serving as a basis. Though powerful, its main drawback is that it typically produces a low-resolution greyscale classification image whose configural cues are restricted to those of the basis image. Here we present a novel method to reveal a colourful, high-resolution classification image with no configural limitation. We demonstrate the feasibility of the method with two different categories: (i) facial untrustworthiness and (ii) identity. A set of 40 colour-calibrated images of female faces served as a ‘face basis set’. All faces were delineated by 189 landmarks placed on a fixed set of salient facial locations. On each trial, the observer saw a set of masked, randomly chosen images. Each masked image comprised a pixel-wise multiplication of a randomly drawn image from the face basis set with a two-dimensional mixture of randomly placed Gaussians (a.k.a. ‘Bubbles’; Gosselin & Schyns, 2001). Observers were instructed to select, from the masked images, the one that best fitted the target category (untrustworthiness in Experiment 1, identity in Experiment 2). To reconstruct the classification images, we first computed the average template of the landmarks of the positively identified images (weighted by the height of the sampling Gaussians). We then warped the positively identified masked images to that average template. Finally, we averaged and scaled the warped images to produce a coloured classification image with no configural limitation.
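The masking step described above (pixel-wise multiplication of a face image with a two-dimensional mixture of randomly placed Gaussians) can be sketched as follows. This is a minimal illustration, not the authors’ code; the function names, the number of bubbles and the Gaussian width are assumptions chosen for demonstration.

```python
import numpy as np

def bubbles_mask(height, width, n_bubbles=10, sigma=15.0, rng=None):
    """Build a 2-D mixture of randomly placed Gaussians ('Bubbles'), clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for _ in range(n_bubbles):
        # Random bubble centre, uniform over the image plane (an assumption).
        cy, cx = rng.uniform(0, height), rng.uniform(0, width)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def apply_mask(image, mask):
    """Pixel-wise multiplication of an H x W x 3 colour image with the bubbles mask."""
    return image * mask[..., None]
```

In a trial, each of the candidate images shown to the observer would be passed through `apply_mask` with an independently drawn mask.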
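The first reconstruction step, averaging the landmarks of the positively identified images weighted by the height of the sampling Gaussians, might look like the sketch below. The array shapes and the use of the mask value at each landmark as its weight are assumptions for illustration; the subsequent warp to the average template (e.g. with a piecewise-affine transform) and the final averaging and scaling are not shown.

```python
import numpy as np

def average_template(landmark_sets, masks):
    """Weighted average of landmark positions across positively identified trials.

    landmark_sets: (n_trials, n_landmarks, 2) array of (x, y) coordinates.
    masks: one 2-D bubbles mask per trial; the mask value at each landmark
    (the height of the sampling Gaussians there) serves as that landmark's weight.
    """
    n_trials, n_landmarks, _ = landmark_sets.shape
    weights = np.zeros((n_trials, n_landmarks))
    for t, mask in enumerate(masks):
        xs = np.clip(landmark_sets[t, :, 0].round().astype(int), 0, mask.shape[1] - 1)
        ys = np.clip(landmark_sets[t, :, 1].round().astype(int), 0, mask.shape[0] - 1)
        weights[t] = mask[ys, xs]
    w = weights[..., None]
    # Per-landmark weighted mean; the epsilon guards against all-zero weights.
    return (landmark_sets * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)
```

With uniform masks this reduces to the plain mean of the landmark sets; non-uniform masks pull each averaged landmark toward the trials in which that facial location was well sampled.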
Meeting abstract presented at VSS 2015