June 2007
Volume 7, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   June 2007
Spatial location of critical facial motion information for PCA-based performance-driven mimicry
Author Affiliations
  • Fatos Berisha
    Department of Psychology, University College London, London, UK
  • Alan Johnston
    Department of Psychology, University College London, London, UK
  • Peter McOwan
    Department of Computer Science, Queen Mary, University of London, London, UK
Journal of Vision June 2007, Vol. 7, 495. https://doi.org/10.1167/7.9.495
Abstract

Visual information from different areas of the face does not contribute equally to human observers' ability to categorise faces. The spatial location of task-specific diagnostic information in static images of faces has been revealed using occlusion masks with randomly located circular Gaussian windows, or 'bubbles' (Gosselin & Schyns, 2001, Vision Research, 41:2261–2271). Our study tested whether the 'bubbles' method could reveal the spatial locations of facial information pertinent to photo-realistic animation of an automatically created and driven movable face model generated from example footage of a face in motion (Cowe, 2003, PhD Thesis, UCL, London). The face model was created by vectorising a sequence of a face in motion, extracting the image changes and motion fields with an optic-flow algorithm, and computing a set of basis actions by application of PCA. This model, or avatar, was driven by instances of the same sequence, processed in the same way but occluded with 5000 random 'bubble' masks (23 'bubbles', standard deviation: 5 pixels). The resulting mimicries were compared with the 'ground-truth' mimicry obtained from a non-occluded driver sequence, using a Pearson correlation metric to measure the similarity between the PC coefficients extracted from the occluded driver sequences and those from the 'ground truth'. 'Bubbles' producing mimicries that correlated highly with the 'ground-truth' mimicry were summed and divided by the sum of all 'bubbles'. The resulting image highlights the facial areas that transmit visual information important for photo-realistic mimicry. The most important areas are those around and including the mouth and eyes. These regions overlap with, but are not identical to, the areas of maximum pixel-value variance.
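The occlusion stage described above can be sketched in a few lines. This is not the authors' code, only an illustrative numpy toy under the stated parameters (Gaussian 'bubbles', weighted-sum image); the names `bubble_mask` and `classification_image` are hypothetical:

```python
import numpy as np

def bubble_mask(shape, n_bubbles=23, sigma=5.0, rng=None):
    """Random occlusion mask: a sum of circular Gaussian 'bubbles' at
    random centres, clipped to [0, 1] (after Gosselin & Schyns, 2001)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.integers(0, h, n_bubbles),
                      rng.integers(0, w, n_bubbles)):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def classification_image(masks, correlations, threshold):
    """Sum the masks whose mimicry correlated highly with ground truth,
    normalised by the sum of all masks."""
    masks = np.asarray(masks)
    good = np.asarray(correlations) >= threshold
    total = masks.sum(axis=0)
    total[total == 0] = 1.0  # avoid division by zero where no bubble fell
    return masks[good].sum(axis=0) / total
```

Because the 'good' masks are a subset of all masks, each pixel of the resulting image lies in [0, 1], and values near 1 mark regions whose visibility reliably preserved the mimicry.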
Visual inspection of the resulting mimicries shows that the PCA face model is robust enough to recover some aspects of the expression in the avatar even in areas occluded in the driver sequence, although the expression is generally muted.
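The PCA model and the coefficient comparison described above can likewise be sketched as an illustrative numpy toy (the function names `pca_basis`, `project`, and `mimicry_similarity` are hypothetical, and the SVD route to PCA is an assumption, not necessarily the original implementation):

```python
import numpy as np

def pca_basis(frames, n_components):
    """PCA over vectorised frames (one row per frame): mean frame plus the
    top principal axes ('basis actions'), obtained via SVD."""
    X = np.asarray(frames, dtype=float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(frame, mean, basis):
    """PC coefficients that drive the avatar for one (possibly occluded) frame."""
    return basis @ (np.ravel(frame) - mean)

def mimicry_similarity(coeffs_occluded, coeffs_truth):
    """Pearson correlation between the occluded-driver and ground-truth
    coefficient streams."""
    a = np.ravel(coeffs_occluded)
    b = np.ravel(coeffs_truth)
    return np.corrcoef(a, b)[0, 1]
```

In this sketch, a driver sequence is vectorised, projected onto the basis frame by frame, and each occluded run's coefficient stream is correlated against the non-occluded ('ground-truth') stream to score its mimicry.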

Berisha, F., Johnston, A., & McOwan, P. (2007). Spatial location of critical facial motion information for PCA-based performance-driven mimicry [Abstract]. Journal of Vision, 7(9):495, 495a, http://journalofvision.org/7/9/495/, doi:10.1167/7.9.495.