September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract
Object constancy from view-based models of the face.
Author Affiliations
  • Jevgenija Beridze
    Cognitive, Perceptual and Brain Sciences, University College London
  • Shin'ya Nishida
    NTT Communication Science Laboratories, Nippon Telegraph & Telephone Corporation, Atsugi Kanagawa, Japan
  • Alan Johnston
    Cognitive, Perceptual and Brain Sciences, University College London
Journal of Vision September 2015, Vol.15, 418. doi:


Recognising a familiar face in various poses is a challenging computational task that the human visual system solves with remarkable ease. Current theory favours a view-based approach to object constancy over an object-centred approach, but are different views of an object related only through association? We investigated whether it is possible to reconstruct one view of the face from another viewpoint. We recorded high-quality, high-frame-rate videos of the moving human face simultaneously from six cameras separated by 19 degrees in a horizontal arc around the face. The video sequences from each perspective view were converted into separate image files and concatenated to form a sampled panoramic view of the face at each time point. We then expressed each multi-view image in terms of the vector field required to warp the multi-view frame onto a reference multi-view frame, together with the resulting warped multi-view image. Principal components analysis (PCA) was used to model the variation in these vectors. A PCA model can be cast as a content-addressable memory. From the multi-view vector we removed the data from all but one view, setting the values for the other views to zero. We projected this partial representation back into the principal component model, regenerating the other views. The reconstruction was effective, although there were some differences from the ground truth (reconstruction using all views). We found that summing the loadings of the partial representations projected into the space gave the loadings of the full-view representation (ground truth). We were able to substantially improve the single-view reconstruction by multiplying the resulting loadings by a scale factor. The scale factor differed across views but was consistent across frames. Object constancy for the face may therefore be achieved through the mapping of relative information between view-based models.
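The procedure above can be illustrated with a short numerical sketch. This is not the authors' pipeline: random vectors stand in for the warp-field-plus-image representations, and the least-squares choice of the per-view scale factor is our assumption, since the abstract does not say how the factor was estimated. It does, however, show why summing the loadings of the six partial (single-view) projections exactly recovers the full-view loadings: projection onto the principal components is linear.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, view_dim, n_frames = 6, 50, 200

# Each frame is the concatenation of six per-view vectors
# (standing in for warp fields and warped images).
X = rng.standard_normal((n_frames, n_views * view_dim))
Xc = X - X.mean(axis=0)          # centre once, so projection is linear

# PCA via SVD; rows of Vt are the principal components.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:20]                      # keep 20 components


def loadings(x):
    """Project a centred multi-view vector onto the components."""
    return P @ x


frame = Xc[0]
full = loadings(frame)           # ground truth: all views present

# Partial representations: one view kept, the other five set to zero.
partials = []
for v in range(n_views):
    p = np.zeros_like(frame)
    p[v * view_dim:(v + 1) * view_dim] = frame[v * view_dim:(v + 1) * view_dim]
    partials.append(loadings(p))

# Linearity: the per-view loadings sum to the full-view loadings.
assert np.allclose(sum(partials), full)

# Single-view reconstruction, rescaled by a per-view factor.
# The least-squares estimate below is our assumption, not the abstract's.
l0 = partials[0]
alpha = (full @ l0) / (l0 @ l0)
recon = P.T @ (alpha * l0)       # back-project into the multi-view space
```

Because the components are orthonormal, rescaling the loadings by the least-squares `alpha` can only reduce the reconstruction error relative to the unscaled single-view projection, consistent with the reported improvement from a view-specific scale factor.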

Meeting abstract presented at VSS 2015

