Abstract
One challenging task in face recognition is the identity-independent estimation of head pose from single images across a wide range of pose angles. An essential part of many approaches is the extraction of sparse facial features such as the eyes, nose, or mouth (Gee & Cipolla, 1994, Image & Vision Computing 12(10)). However, when the face is rotated in depth, distinct features may be partly occluded or disappear completely. Given these changes in the appearance or visibility of individual features, it remains unclear which features are suitable for pose estimation. Here, we present a neural model that creates an abstract representation of perceptually relevant features from single images of oriented faces. The major model stages are summarized as follows: (1) Oriented contrasts are detected using oriented band-pass filters, followed by local competition. (2) Locally collinear features are grouped and enhanced by integrating context information from the extracted orientations. (3) Recurrent feedback from the grouping stage to the initial responses iteratively accentuates filter responses that form smoothly curved flow patterns of image orientations. The model output can be visualized as a sketch-like drawing of a face that emphasizes pose-specific features important for feature-based head pose estimation. Furthermore, we evaluate our model by using the sketch representation as the basis for existing feature-based head pose algorithms (Krüger et al., 1997, Image & Vision Computing 15(8)). In conclusion, our approach not only provides a tool for visualizing important features of faces, but also yields a sparse representation of faces that can be directly integrated into face recognition tasks.
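To make the three model stages concrete, the following is a minimal sketch, not the authors' implementation: it assumes Gabor-like band-pass filters, divisive normalization as the local competition, elongated Gaussian kernels for collinear grouping, and a multiplicative gain as the recurrent feedback. The number of orientation channels, filter sizes, iteration count, and feedback gain are illustrative assumptions.

import numpy as np
from scipy.ndimage import convolve

N_ORIENT = 8          # number of orientation channels (assumed)
N_ITER = 5            # recurrent feedback iterations (assumed)
FEEDBACK_GAIN = 2.0   # strength of modulatory feedback (assumed)

def gabor_kernel(theta, size=15, wavelength=6.0, sigma=3.0):
    """Odd-symmetric Gabor-like band-pass kernel for one orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)
    return g - g.mean()

def collinear_kernel(theta, size=21, sigma_long=6.0, sigma_short=1.5):
    """Elongated Gaussian aligned with theta, used to pool collinear activity."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))
    return k / k.sum()

def pose_sketch(image):
    """image: 2-D grayscale array -> sketch-like map of enhanced contours."""
    thetas = [np.pi * i / N_ORIENT for i in range(N_ORIENT)]

    # Stage 1: oriented contrast detection, then local competition
    # (divisive normalization across orientation channels).
    responses = np.stack([np.abs(convolve(image, gabor_kernel(t))) for t in thetas])
    responses /= responses.sum(axis=0, keepdims=True) + 1e-6

    driving = responses.copy()
    for _ in range(N_ITER):
        # Stage 2: group locally collinear features by integrating context
        # along each orientation with an elongated kernel.
        grouped = np.stack([convolve(responses[i], collinear_kernel(t))
                            for i, t in enumerate(thetas)])

        # Stage 3: recurrent feedback from the grouping stage to the initial
        # responses; responses supported by smooth, collinear flow patterns
        # are accentuated via a modulatory gain.
        responses = driving * (1.0 + FEEDBACK_GAIN * grouped)
        responses /= responses.sum(axis=0, keepdims=True) + 1e-6

    # Collapse over orientations to obtain the sketch-like output.
    return responses.max(axis=0)

if __name__ == "__main__":
    face = np.random.rand(128, 128)   # placeholder for a grayscale face image
    sketch = pose_sketch(face)
    print(sketch.shape)

In this reading, the resulting orientation-collapsed map plays the role of the sketch-like drawing from which pose-specific features would be extracted for a downstream feature-based pose estimator.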
This work has been supported in part by a grant from the Ministry of Science, Research and the Arts of Baden-Württemberg (Az: 23-7532.24-13-19/1).