Animate motion contains information not only about the actions of a person but also about his or her identity, intentions, and emotions. We can recognize a good friend by the way he or she moves, and we can attribute age, gender, and other characteristics to an unfamiliar person. How is such information encoded in visual motion data, and how can it be retrieved by the visual system? We present an algorithm that transforms visual motion data such that they can be successfully approached with linear methods from statistics and pattern recognition. The transformation is based on a linear decomposition of postural data into a few components that change with sinusoidal temporal patterns. The components repeat consistently across subjects, such that linear combinations of existing motion data result in smooth, meaningful interpolations. We present examples of how this model can be used to discriminate between different types of motion (here: walking vs. running) and how it can be used to classify instances of the same type of motion (e.g., walking) in terms of characteristics of the actor (here: gender classification). Since the transformation is reversible, the model can also be used for the synthesis and modeling of animate motion. It therefore serves not only as a model for biological information processing but also has implications for both computer vision and computer graphics.
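The pipeline sketched in the abstract can be illustrated on synthetic data: decompose postural time series into linear components, describe each component's sinusoidal time course by its frequency and amplitude, and then separate motion classes with a linear discriminant in that low-dimensional space. The sketch below is a minimal toy version under stated assumptions; the data generator, feature layout, and nearest-class-mean classifier are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sequence(freq, amp, n_frames=256, dim=12):
    """Toy postural time series: a mean posture plus one postural
    component oscillating sinusoidally (a stand-in for motion capture)."""
    t = np.arange(n_frames)
    mean = rng.normal(size=dim)
    comp = rng.normal(size=dim)
    noise = 0.05 * rng.normal(size=(n_frames, dim))
    return mean + amp * np.outer(np.sin(2 * np.pi * freq * t), comp) + noise

def features(X, n_comp=1):
    """Linear (PCA) decomposition of the postural data; summarize each
    component's time course by its dominant frequency and amplitude."""
    Xc = X - X.mean(axis=0)
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    out = []
    for k in range(n_comp):
        score = U[:, k] * S[k]               # temporal weights of component k
        spec = np.abs(np.fft.rfft(score))
        peak = int(np.argmax(spec[1:])) + 1  # skip the DC bin
        out += [peak / len(score), 2 * spec[peak] / len(score)]
    return np.array(out)

# Two toy motion classes: slow "walking" vs. faster "running".
walk = [features(make_sequence(freq=0.03, amp=1.0)) for _ in range(10)]
run = [features(make_sequence(freq=0.08, amp=1.5)) for _ in range(10)]

# Standardize features, then use a nearest-class-mean linear discriminant.
train = np.vstack(walk + run)
mu, sd = train.mean(axis=0), train.std(axis=0) + 1e-9
z = lambda f: (f - mu) / sd
mw, mr = z(np.mean(walk, axis=0)), z(np.mean(run, axis=0))

def classify(f):
    fz = z(f)
    return "walk" if np.linalg.norm(fz - mw) < np.linalg.norm(fz - mr) else "run"

test_walk = features(make_sequence(freq=0.03, amp=1.0))
test_run = features(make_sequence(freq=0.08, amp=1.5))
print(classify(test_walk), classify(test_run))  # prints: walk run
```

Because the decomposition is linear and the temporal structure is captured by a few frequency/amplitude parameters, the same feature space supports interpolation and resynthesis: new motion can in principle be generated by linearly blending component weights and inverting the decomposition.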
This research is funded by the Volkswagen Foundation.