August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract | August 2014
Characterizing the Manifolds of Dynamic Facial Expression Categorization
Author Affiliations
  • Ioannis Delis
    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
  • Rachael Jack
    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
  • Oliver Garrod
    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
  • Stefano Panzeri
    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
  • Philippe Schyns
    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
Journal of Vision August 2014, Vol.14, 1384. doi:https://doi.org/10.1167/14.10.1384
Ioannis Delis, Rachael Jack, Oliver Garrod, Stefano Panzeri, Philippe Schyns; Characterizing the Manifolds of Dynamic Facial Expression Categorization. Journal of Vision 2014;14(10):1384. https://doi.org/10.1167/14.10.1384.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Visual categorization seeks to understand how logical combinations of visual cues (e.g. "wide opened left eye" and "opened mouth") provide singly necessary and jointly sufficient conditions to access categories in memory (e.g. "surprise"). Such combinations of visual cues constitute the categorization manifold underlying the information goals of face categorization mechanisms and are therefore critical to understanding them. Yet no method currently exists to reliably characterize the visual cues of the categorization manifold (i.e. its dimensions, such as "wide opened eyes" and "opened mouth") and how they combine (i.e. the manifold topology, which dictates, for example, whether "wide opened eyes" and "opened mouth" can be used independently or must be used jointly). Here we present a generic method to characterize categorization manifolds and apply it to observers categorizing dynamic facial expressions of emotion. To generate information, we used the Generative Face Grammar (GFG) platform (Yu et al., 2012), which selects on each trial (N = 2,400 trials/observer) a random set of Action Units (AUs) and values for their parametric activation (Jack et al., 2012). We asked 60 naïve Western Caucasian observers to categorize each presented random facial animation according to one of six classic emotions ("happy", "surprise", "fear", "disgust", "anger", "sad", plus "don't know"). For each observer, we used a Non-negative Matrix Factorization (NMF) algorithm to extract AU x Time components of facial expression signals associated with the perceptual categorization of each emotion. We then performed a Linear Discriminant Analysis (LDA) to select the components (i.e. manifold dimensions) that discriminate between the six emotion categories (Quian Quiroga and Panzeri, 2009; Delis et al., 2013). Hence, the dimensions of the resultant categorization manifolds represent the strategies observers use to categorize the emotions.
Our data show that observers use multiple categorization strategies, which constitute the atomic signals of emotion communication via facial expressions.
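The two-stage analysis described above (NMF to extract AU x Time components, then LDA to select the discriminative ones) can be sketched as follows. This is a minimal illustration using scikit-learn on random stand-in data; the trial count, number of AUs, time samples, and number of NMF components are assumptions for the sketch, not the study's actual parameters or implementation.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Assumed dimensions for illustration (reduced from the 2,400 trials/observer
# reported in the abstract to keep the sketch fast).
n_trials, n_aus, n_time = 400, 42, 30

# Stand-in data: non-negative AU x Time signals, flattened per trial,
# and one of six emotion-category labels per trial.
X = rng.random((n_trials, n_aus * n_time))
y = rng.integers(0, 6, n_trials)

# Step 1: NMF extracts non-negative AU x Time components; each trial is
# then represented by its activation of these components.
nmf = NMF(n_components=10, init="nndsvda", max_iter=300, random_state=0)
H = nmf.fit_transform(X)  # shape: (n_trials, n_components)

# Step 2: LDA identifies which components discriminate the six emotions,
# i.e. the dimensions of the categorization manifold.
lda = LinearDiscriminantAnalysis()
lda.fit(H, y)
print(H.shape, lda.score(H, y))
```

On random data the LDA score hovers near chance; on real expression signals, the discriminative components would correspond to the observers' categorization strategies.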

Meeting abstract presented at VSS 2014
