Abstract
There is an extensive literature on the ability of humans to predict a range of outcome measures (including electoral success, teacher ratings, and corporate profits) from impoverished "thin slices": brief videos or still images of an individual's face. Data-driven approaches to quick, unreflective personality judgments (Oosterhof & Todorov, 2008; Vernon et al., 2014) have shown that relatively stable components emerge from principal component analysis of unconstrained descriptions of personality traits. In our previous work (Anthony et al., VSS 2015), we showed that a simple V1-like feature set is sufficient to build a computer vision model capable of explaining a significant percentage of reliable human judgments of facial personality traits. The traits we have investigated, which include trustworthiness, dominance, age, and intelligence, are highly correlated with the initial principal components derived by Oosterhof and Todorov and by Vernon et al. In the present work, we investigate whether the computational models we have developed can predict relevant outcome measures for online video content. We generated trait ratings for trustworthiness, dominance, perceived age, and IQ for all detected faces in frames sampled at 3 fps from a corpus of tens of thousands of YouTube videos. We categorized these videos by commonly occurring keyword and investigated which combination of model outputs best predicted the number of views received by videos tagged with each keyword. For a majority of the keywords investigated, the mean scores of at least one model correlated significantly with view counts. The resulting trait signatures were highly characteristic of specific keywords and reliable within a given keyword. Using these trait signatures, a clustering or taxonomy of YouTube content types is possible, in which keyword-tagged types are grouped by the perceived facial traits (of the individuals featured in that content) that correlate with video popularity.
Meeting abstract presented at VSS 2016
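The keyword-level analysis described in the abstract can be sketched in code (a minimal illustration, not the authors' pipeline; the helper names, trait labels, and toy data below are all hypothetical, and log-transforming view counts is an assumption about how heavy-tailed popularity data would be handled):

```python
# Illustrative sketch: correlate per-video mean trait scores with view
# counts within one keyword category. All data here are fabricated.
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def trait_signature(videos, traits=("trust", "dominance", "age", "iq")):
    """Compute a keyword's 'trait signature': for each trait, the
    correlation between per-video mean face scores (averaged over all
    detected faces in frames sampled from the video) and log view count."""
    log_views = [math.log10(v["views"] + 1) for v in videos]
    return {t: pearson([v[t] for v in videos], log_views) for t in traits}

# Toy per-video mean scores for a single hypothetical keyword:
videos = [
    {"trust": 0.8, "dominance": 0.2, "age": 30, "iq": 0.6, "views": 90000},
    {"trust": 0.6, "dominance": 0.5, "age": 45, "iq": 0.5, "views": 12000},
    {"trust": 0.3, "dominance": 0.7, "age": 50, "iq": 0.4, "views": 800},
]
sig = trait_signature(videos)
```

In this toy example, trust correlates positively and dominance negatively with views; comparing such signature vectors across keywords is what would support the clustering of content types mentioned above.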