Fatos Berisha, Alan Johnston, Peter McOwan; Spatial location of critical facial motion information for PCA-based performance-driven mimicry. Journal of Vision 2007;7(9):495. doi: https://doi.org/10.1167/7.9.495.
Visual information from different areas of the face does not contribute equally to human observers' ability to categorise faces. The spatial location of task-specific diagnostic information in static images of faces has been revealed using occlusion masks with randomly located circular Gaussian windows, or ‘bubbles’ (Gosselin & Schyns, 2001, Vision Research, 41:2261–2271). Our study tested whether the ‘bubbles’ method could reveal the spatial locations of facial information pertinent to photo-realistic animation of an automatically created and driven moveable face model, generated from example footage of a face in motion (Cowe, 2003, PhD Thesis, UCL, London).

The face model was created by vectorising a sequence of a face in motion, extracting the image changes and motion fields with an optic-flow algorithm, and computing a set of basis actions by application of PCA. This model, or avatar, was driven by instances of the same sequence, processed in the same way but occluded with 5000 random ‘bubble’ masks (23 ‘bubbles’, standard deviation: 5 pixels).

The resulting mimicries were compared to the ‘ground-truth’ mimicry obtained from a non-occluded driver sequence, using a Pearson correlation metric measuring the similarity between the PC coefficients extracted from the occluded driver sequences and those from the ‘ground truth’. ‘Bubbles’ yielding mimicries highly correlated with the ‘ground-truth’ mimicry were summed and divided by the sum of all ‘bubbles’. The resulting image highlights the facial areas transmitting visual information important for photo-realistic mimicry: the most important areas are those around and including the mouth and eyes. These regions overlap with, but are not identical to, the areas of maximum pixel-value variance.
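The analysis described above can be sketched in code. The following is a minimal, illustrative Python/NumPy version only: the frame size, number of PCA components, number of masks, the ≥0.5 ‘highly correlated’ threshold, and the synthetic driver sequence are all assumptions made for the sketch, not values or data from the study, and the real pipeline vectorises faces via optic flow (Cowe, 2003) rather than using raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64          # frame size (illustrative only)
N_COMPONENTS = 10      # number of PCA basis actions (assumed)
N_MASKS = 200          # the study used 5000 masks; fewer here for speed
N_BUBBLES = 23         # bubbles per mask, as in the abstract
SIGMA = 5.0            # bubble standard deviation in pixels, as in the abstract

def bubble_mask(h, w, n_bubbles, sigma, rng):
    """Sum of randomly placed circular Gaussian windows, clipped to [0, 1]."""
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w))
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Stand-in for the vectorised driver sequence (frames as row vectors).
n_frames = 30
frames = rng.normal(size=(n_frames, H * W))
frames -= frames.mean(axis=0)

# Orthonormal "basis actions" via SVD of the mean-centred sequence.
_, _, vt = np.linalg.svd(frames, full_matrices=False)
basis = vt[:N_COMPONENTS]                      # (components, pixels)

ground_truth = frames @ basis.T                # PC coefficients, unoccluded

num = np.zeros((H, W))
den = np.zeros((H, W))
for _ in range(N_MASKS):
    mask = bubble_mask(H, W, N_BUBBLES, SIGMA, rng)
    occluded = frames * mask.ravel()           # occlude every frame
    coeffs = occluded @ basis.T
    # Pearson correlation between occluded and ground-truth coefficients.
    r = np.corrcoef(coeffs.ravel(), ground_truth.ravel())[0, 1]
    den += mask
    if r > 0.5:                                # "highly correlated" (assumed cut-off)
        num += mask                            # sum the informative bubbles

# Bright regions of this image mark facial areas carrying the information
# needed for photo-realistic mimicry.
diagnostic = num / np.maximum(den, 1e-12)
```

Dividing the sum of informative bubbles by the sum of all bubbles normalises away the uneven spatial sampling of the random masks, so the diagnostic image reflects informativeness rather than mere mask coverage.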
Visual inspection of the resulting mimicries shows that the PCA face model is robust enough to recover, in the avatar, some aspects of the expression in areas that were occluded in the driver sequence, although the recovered expression is generally muted.