Our approach first characterizes the individual observer's mental representations of each facial expression as a combination of a small set of spatial and temporal dimensions that we call the categorization manifold (Seung & Lee, 2000). To learn this representation, we build on previous research on muscle synergies (d'Avella, Saltiel, & Bizzi, 2003; Delis, Panzeri, Pozzo, & Berret, 2014; Tresch, Cheung, & d'Avella, 2006) to introduce a novel method, which we call the space-by-time manifold, and to identify a coordinate set that describes the categorization manifold. In brief, this method is based on the principles of nonnegative matrix factorization (NMF; Lee & Seung, 1999), assumes that the categorization manifold is separable in space and time (Delis et al., 2014), and uses linear discriminant analysis (LDA; Delis, Berret, Pozzo, & Panzeri, 2013b; Duda, Hart, & Stork, 2001) to identify the dimensions of the perceptual space that are most useful for visual categorization. With this space-by-time manifold decomposition, we address important questions in emotion communication using facial expressions. In particular, we determine which synergistic facial movements communicate emotions, and we isolate the specific movements that cause confusions between emotion categories (e.g., fear and surprise; see Ekman, 1992; Gagnon, Gosselin, Hudon-ven Der Buhs, Larocque, & Milliard, 2010; Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Matsumoto & Ekman, 1989; Moriguchi et al., 2005; Roy-Charland, Perron, Beaudry, & Eady, 2014) from those that resolve these confusions. That is, we can identify the low-dimensional space-by-time representation that predicts categorization behavior.
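To make the two ingredients concrete, the following is a minimal illustrative sketch of a space-by-time decomposition followed by a discriminant step. It is not the authors' implementation: it uses a simple clipped alternating least-squares in place of the multiplicative update rules of Delis et al. (2014), and a two-class Fisher discriminant stands in for the full LDA stage; the function names, array shapes, and numbers of modules are assumptions made for illustration only. Each trial `X[k]` (time x space) is modeled as `Wt @ A[k] @ Ws`, where `Wt` holds temporal modules, `Ws` holds spatial modules, and the trial-specific coefficients `A[k]` provide the low-dimensional representation on which categorization is assessed.

```python
import numpy as np

def space_by_time_decompose(X, P, N, n_iter=200, seed=0):
    """Clipped alternating least-squares sketch of a space-by-time factorization.

    X : array (K, T, S) of nonnegative single-trial data.
    Each X[k] is approximated as Wt @ A[k] @ Ws, with Wt (T x P) temporal
    modules, Ws (N x S) spatial modules, and A[k] (P x N) trial-specific
    coefficients. All factors are kept nonnegative by clipping after each
    unconstrained least-squares update (a simplification for illustration).
    """
    rng = np.random.default_rng(seed)
    K, T, S = X.shape
    Wt = rng.random((T, P)) + 0.1
    Ws = rng.random((N, S)) + 0.1
    A = rng.random((K, P, N))
    for _ in range(n_iter):
        # trial-specific coefficients, with both module sets held fixed
        Wt_p, Ws_p = np.linalg.pinv(Wt), np.linalg.pinv(Ws)
        for k in range(K):
            A[k] = np.clip(Wt_p @ X[k] @ Ws_p, 0.0, None)
        # temporal modules: stack trials along columns, X[k] ~ Wt @ (A[k] @ Ws)
        B = np.hstack([A[k] @ Ws for k in range(K)])   # (P, K*S)
        Xc = np.hstack(list(X))                        # (T, K*S)
        Wt = np.clip(Xc @ np.linalg.pinv(B), 0.0, None)
        # spatial modules: stack trials along rows, X[k] ~ (Wt @ A[k]) @ Ws
        C = np.vstack([Wt @ A[k] for k in range(K)])   # (K*T, N)
        Xr = np.vstack(list(X))                        # (K*T, S)
        Ws = np.clip(np.linalg.pinv(C) @ Xr, 0.0, None)
    return Wt, Ws, A

def fisher_lda_direction(Z, y):
    """Two-class Fisher discriminant direction on trial features Z (K x D)."""
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    # within-class scatter, summed over the two classes
    Sw = np.cov(Z[y == 0], rowvar=False) + np.cov(Z[y == 1], rowvar=False)
    w = np.linalg.pinv(Sw) @ (m1 - m0)
    return w / np.linalg.norm(w)
```

In this sketch, categorization behavior would be predicted by projecting the vectorized coefficients `A[k].ravel()` onto the discriminant direction, so that the modules most useful for distinguishing expression categories can be read off from the largest entries of `w`.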