Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Predicting and categorizing online video success from a computational model of face personality judgments
Author Affiliations
  • Samuel Anthony
    Department of Psychology, Harvard University
  • Ken Nakayama
    Department of Psychology, Harvard University
Journal of Vision September 2016, Vol.16, 717. doi:10.1167/16.12.717
Abstract

There is an extensive literature on the ability of humans to predict a range of outcome measures (including electoral success, teacher ratings, and corporate profits) from impoverished "thin slices" comprising brief videos or still images of an individual face. Data-driven approaches to quick, unreflective personality judgments (Oosterhof & Todorov, 2008; Vernon et al., 2014) have shown that relatively stable components emerge from principal component analysis of unconstrained descriptions of personality traits. In our previous work (Anthony et al., VSS 2015) we showed that a simple V1-like feature set is sufficient to build a computer vision model capable of explaining a significant percentage of reliable human judgments of face personality traits. The traits we have investigated, which include trustworthiness, dominance, age, and intelligence, are highly correlated with the initial principal components derived in the work of Oosterhof and Todorov and of Vernon et al. In the present work, we investigate whether the computational models we have developed have the power to predict relevant outcome measures for online video content. We generated trait ratings for trustworthiness, dominance, perceived age, and IQ for all detected faces in frames sampled at 3 fps from a corpus of tens of thousands of YouTube videos. For a majority of the keywords investigated, the mean scores of at least one model correlated significantly with view counts. We categorized these videos by commonly occurring keyword and investigated which combination of model outputs best predicted the number of views received by videos tagged with that keyword. These trait signatures were highly characteristic of specific keywords and reliable within a given keyword. Using these trait signatures, a clustering or taxonomy of YouTube content types is possible, in which keyword-tagged types are grouped by the perceived facial traits (of individuals featured in that content) that correlate with video popularity.
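The per-keyword analysis described above can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' pipeline: the trait names, the data layout (one mean score per trait per video), and the choice of log-transformed view counts are all assumptions made for the example. A keyword's "trait signature" is taken here to be the Pearson correlation of each trait's per-video mean score with log view count.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def trait_signature(videos, traits):
    """Correlate each trait's per-video mean score with log view count.

    `videos`: list of dicts, each with a 'views' count plus one mean
    score per trait (averaged over all faces detected in the sampled
    frames of that video). Returns {trait: Pearson r} for the keyword.
    """
    log_views = [math.log(v["views"]) for v in videos]
    return {t: pearson([v[t] for v in videos], log_views) for t in traits}

# Hypothetical toy data for one keyword: trustworthiness scores rise
# with popularity, dominance scores do not.
videos = [
    {"views": 100,     "trust": 0.2, "dominance": 0.9},
    {"views": 1_000,   "trust": 0.4, "dominance": 0.1},
    {"views": 10_000,  "trust": 0.6, "dominance": 0.8},
    {"views": 100_000, "trust": 0.8, "dominance": 0.2},
]
sig = trait_signature(videos, ["trust", "dominance"])
```

With signatures computed per keyword, keywords can then be clustered by treating each signature as a feature vector, which is one way to realize the taxonomy of content types the abstract proposes.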

Meeting abstract presented at VSS 2016
