Abstract
Faces convey much information about a person, including identity, gender, age, race, attractiveness and emotion. In addition, in sign languages, facial expressions are used to encode part of the grammar. It is unknown, however, which facial features define these grammatical facial expressions. We present a computational system that automatically detects the shape of the face across video sequences of signed sentences. The system then uses a functional support vector machine to determine which shape features best identify each grammatical rule. In particular, we consider positive and negative statements and positive and negative conditionals in American Sign Language (ASL). To this end, we use a collection of video sequences of positive and negative conditionals and statements produced by nine native signers of ASL. We found that different types of 3D head rotations are used to identify positive and negative statements and positive and negative conditionals. Not surprisingly, headshakes are mostly associated with negation while head nods are used in positive sentences, but other rotations of the head are also employed. The difference between conditionals and statements lies in the spatio-temporal parameters of these head movements. We also identify a specific facial expression that several subjects use to emphasize or mark negation. The muscle activations associated with this expression are not correlated with those of any facial expression of emotion. We call this expression the "not face." A psychophysical experiment showed that the "not face" is perceived as expressing negation even by non-signers. We also show which image shape features define the "not face" and each of the head rotations, and we define a computational model of each.
Meeting abstract presented at VSS 2012
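To illustrate the classification step described above, the following is a minimal sketch, not the authors' implementation: it classifies sentence type from head-rotation trajectories with a standard SVM after collapsing each trajectory into fixed-length summary features, whereas the abstract's method applies a functional SVM to the full shape trajectories. All data, feature names, and labels here are hypothetical.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): classify sentence
# type from per-frame head-rotation trajectories using a standard SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def summarize(trajectory):
    """Collapse a (frames x 3) pitch/yaw/roll trajectory into summary features:
    mean angle, angle variability, and mean frame-to-frame movement per axis."""
    return np.concatenate([
        trajectory.mean(axis=0),
        trajectory.std(axis=0),
        np.abs(np.diff(trajectory, axis=0)).mean(axis=0),
    ])

# Hypothetical data: 40 signed sentences, each a 90-frame rotation trajectory,
# labeled 0 = positive statement, 1 = negative statement.
X = np.stack([summarize(rng.normal(size=(90, 3))) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```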