Abstract
PURPOSE. Matsuzaki & Sato (1999) showed that facial expressions can be perceived from purely dynamic components. In this study, we investigated the type of motion information used in the perception of facial expressions. METHODS. The stimuli were eighteen point-lights, defined by luminance or by contrast, representing the eyebrows, the eyes, and the mouth. The dynamic components, i.e., the differences between corresponding point positions in the neutral face and in each emotional face (angry, happy, sad, and surprised), were calculated. Two types of point-light stimuli were then generated. One, “FE1”, was made by shifting the points of the neutral face by the corresponding dynamic component. The other, “FE1+FE2”, was generated by shifting the points of FE1 further by the corresponding dynamic component of another emotional face. In the experiment, the subjects were shown “FE1” and “FE1+FE2” in succession and were required to judge the facial expression of the stimuli. The inter-stimulus interval (ISI) was varied in five steps. RESULTS. When the ISI was short, the subjects perceived apparent motion and judged the stimulus as FE2, the expression corresponding to the dynamic component, more often than as FE1. At longer ISIs, however, FE1 and FE2 were perceived equally often. This tendency was observed for both luminance-defined and contrast-defined stimuli, but the bias toward FE2 at short ISIs was smaller for contrast-defined than for luminance-defined stimuli. CONCLUSIONS. These results indicate that, although it is less effective than first-order motion, second-order motion is certainly capable of supporting the perception of facial expressions.
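To illustrate the stimulus construction described in METHODS, the sketch below shows the arithmetic on point positions under stated assumptions: the point coordinates are held in hypothetical 18 × 2 NumPy arrays (one (x, y) row per point-light), and the example emotion labels and values are placeholders, not the actual measured data.

```python
import numpy as np

# Hypothetical 18 x 2 arrays of point-light (x, y) positions (eyebrows, eyes, mouth).
# Values here are placeholders for illustration only.
neutral = np.zeros((18, 2))                       # neutral-face point positions
emotion_a = neutral + np.random.randn(18, 2)      # e.g. "happy" point positions
emotion_b = neutral + np.random.randn(18, 2)      # e.g. "surprised" point positions

# Dynamic components: differences between corresponding emotional and neutral positions.
dc_a = emotion_a - neutral
dc_b = emotion_b - neutral

# "FE1": the neutral points shifted by the dynamic component of the first expression.
fe1 = neutral + dc_a

# "FE1+FE2": the FE1 points shifted further by the dynamic component of a second expression.
fe1_plus_fe2 = fe1 + dc_b
```

In the experiment, the two resulting point-light frames would be presented one after the other, separated by the chosen ISI.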