Oliver Garrod, Hui Yu, Martin Breidt, Cristobal Curio, Philippe Schyns; Reverse correlation in temporal FACS space reveals diagnostic information during dynamic emotional expression classification. Journal of Vision 2010;10(7):700. doi: 10.1167/10.7.700.
Reverse correlation experiments have previously revealed the locations of facial features crucial for recognizing different emotional expressions, and have related these features to electrophysiological brain activity [SchynsEtal07]. However, in social perception we expect the generation and encoding of communicative signals to share a common framework in the brain [SeyfarthCheney03], and neither ‘Bubbles’ [GosselinSchyns03] nor white-noise-based manipulations effectively target the input features underlying facial expression generation: the combined activation of sets of facial muscles over time. [CurioEtal06] propose a motion-retargeting method that controls the appearance of facial expression stimuli via a linear 3D Morphable Model [BlanzVetter99] composed of recorded Action Units (AUs). Each AU represents the surface deformation of the face under full activation of a particular muscle or muscle group, taken from the FACS system [EkmanFriesen79]. The set of weighted linear combinations of these AUs is hypothesised to be a generative model for the set of typical facial movements for this actor.
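The linear Morphable Model described above can be sketched as a weighted sum of per-AU vertex displacements added to a neutral face. The following is a minimal illustration under assumed data shapes (toy mesh size, random placeholder data); the names `neutral`, `au_deformations`, and `synthesize_face` are hypothetical and not part of the authors' implementation.

```python
import numpy as np

N_VERTICES = 4   # toy mesh size for illustration
N_AUS = 3        # number of recorded Action Units

rng = np.random.default_rng(0)
# Neutral face: one 3D position per mesh vertex.
neutral = rng.standard_normal((N_VERTICES, 3))
# Each AU: a displacement field over the mesh, recorded at full activation.
au_deformations = rng.standard_normal((N_AUS, N_VERTICES, 3))

def synthesize_face(weights):
    """Weighted linear combination of AU deformations added to the neutral face."""
    weights = np.asarray(weights)  # shape (N_AUS,), one weight per AU
    return neutral + np.tensordot(weights, au_deformations, axes=1)

face = synthesize_face([0.0, 0.5, 1.0])
```

With all weights at zero the model returns the neutral face; driving the weights over time animates the expression.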
Here we report the outcome of a facial emotion reverse correlation experiment using one such generative AU model over a space of temporally parameterized AU weights. On each trial, between 1 and 5 AUs are selected at random, and random timecourses for the selected AUs are generated according to 6 temporal parameters (see supplementary figure). The observer rates the stimulus on each of the 6 ‘universal emotions’ using a continuous confidence scale from 0 to 1, and, from these ratings, optimal AU timecourses (timecourses whose temporal parameters maximize the expected rating for a given expression) are derived per expression and AU. These are then fed as weights into the AU model to reveal the feature dynamics associated with each expression. This method extends Bubbles and reverse correlation techniques to a relevant input space: one that makes explicit hypotheses about the temporal structure of diagnostic information.
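The trial-generation and reverse-correlation steps above can be sketched as follows. This is a hedged illustration only: the six temporal parameter names are placeholders (the abstract defers the actual parameterization to a supplementary figure), and the rating-weighted average used here is one standard classification-image style estimator, not necessarily the authors' derivation of the optimal timecourses.

```python
import numpy as np

rng = np.random.default_rng(1)
N_AUS = 42  # assumed size of the recorded AU set
# Placeholder names for the 6 temporal parameters of each AU timecourse.
PARAM_NAMES = ["onset", "attack", "peak_amp", "peak_time", "decay", "offset"]

def sample_trial():
    """One trial: pick 1-5 AUs at random, with random temporal parameters each."""
    n_active = rng.integers(1, 6)  # 1..5 inclusive
    aus = rng.choice(N_AUS, size=n_active, replace=False)
    params = rng.random((n_active, len(PARAM_NAMES)))  # parameters in [0, 1]
    return aus, params

def rating_weighted_params(trials, ratings):
    """Estimate each AU's diagnostic temporal parameters for one emotion by
    averaging the sampled parameters weighted by the observer's ratings."""
    sums = np.zeros((N_AUS, len(PARAM_NAMES)))
    weights = np.zeros(N_AUS)
    for (aus, params), r in zip(trials, ratings):
        sums[aus] += r * params
        weights[aus] += r
    seen = weights > 0
    sums[seen] /= weights[seen, None]
    return sums  # rows for unseen AUs remain zero

trials = [sample_trial() for _ in range(1000)]
ratings = rng.random(len(trials))  # stand-in confidence ratings in [0, 1]
optimal = rating_weighted_params(trials, ratings)
```

The resulting per-AU parameter estimates would then be converted back into timecourses and fed into the AU model to render the expression dynamics.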