Journal of Vision
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Extracting Human Face Similarity Judgments: Pairs or Triplets?
Author Affiliations
  • Linjie Li
    Electrical and Computer Engineering Department, University of California San Diego
  • Amanda Song
    Cognitive Science Department, University of California San Diego
  • Vicente Malave
    Cognitive Science Department, University of California San Diego
  • Garrison Cottrell
    Computer Science and Engineering Department, University of California San Diego
  • Angela Yu
    Cognitive Science Department, University of California San Diego
Journal of Vision September 2016, Vol.16, 719. doi:https://doi.org/10.1167/16.12.719
Abstract

Understanding how humans assess facial similarity is an important problem for both machine learning applications and cognitive science. Two classical techniques for extracting human similarity judgments are in broad use: pairwise rating and triplet ranking. While the choice of method has clear algorithmic consequences, there has been little explicit comparison of the informational utility of the two. Here, we present face similarity judgment data, in both pairwise and triplet forms, collected on Amazon Mechanical Turk, and demonstrate that triplet data are more informative about both individual judgments and heterogeneity among individuals.

In the experiment, we present seven faces from the 10k US Adult Faces database (provided by Aude Oliva's group at MIT) to each subject in both formats, for a total of 35 triplets and 21 pairs, each repeated 4 times. We convert the pairwise data into equivalent triplet data, and then use identical measures to compare the two. We first compute the cross-correlation over repeated responses for each triplet, and find that self-consistency is higher for triplet data. Moreover, cross-correlation of responses across subjects for the same triplets suggests distinct subgroups of individuals, and this multi-cluster pattern is less evident in the equivalent pairwise data. We further propose a statistical model to quantify the information gain of both methods; our results again suggest that triplets provide more information than pairs.

Overall, triplet ranking is more informative than pairwise rating for eliciting facial similarity judgments from humans. It has often been observed that humans give more self-consistent responses when reporting relative preferences than when assigning numeric values to individual items, especially in complex judgments involving high-dimensional inputs. Apparently, forcing humans to assign numerical values to complex judgments not only makes them appear less consistent, but also corrupts information that remains available in simpler relative-ranking responses.
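The pairwise-to-triplet conversion described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: it assumes an odd-one-out reading of the triplet task (for each set of three faces, which pair is most similar), which with 7 faces yields C(7,3) = 35 triplets and C(7,2) = 21 pairs, matching the counts in the abstract.

```python
# Hypothetical sketch: derive "equivalent triplet" answers from pairwise
# similarity ratings. For each unordered set of three faces {a, b, c},
# the pairwise-derived answer is the pair with the highest similarity
# rating among the three constituent pairs.
from itertools import combinations

def pairs_to_triplets(ratings):
    """ratings: dict mapping frozenset({face_i, face_j}) -> similarity score.
    Returns a dict mapping each sorted face triple to its most-similar pair.
    Ties are broken arbitrarily (first pair in combination order)."""
    faces = sorted({face for pair in ratings for face in pair})
    triplet_answers = {}
    for triple in combinations(faces, 3):
        pairs = list(combinations(triple, 2))
        most_similar = max(pairs, key=lambda p: ratings[frozenset(p)])
        triplet_answers[triple] = most_similar
    return triplet_answers
```

Consistency across the 4 repeats can then be scored with the same measure for real triplet responses and these pairwise-derived ones.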

Meeting abstract presented at VSS 2016
