Ben Brown, Alan Johnston; Interactions between dynamic facial features are phase-dependent. Journal of Vision 2016;16(12):209. doi: https://doi.org/10.1167/16.12.209.
© ARVO (1962-2015); The Authors (2016-present)
Dynamic expressions are composed of spatially disparate features moving in temporally coordinated ways. Cook, Aichelburg & Johnston (Psychological Science, 2015) reported that oscillatory mouth movement causes concurrent eye-blinks to appear slower (suggesting interdependent processing of features), but whether it also impairs judgement of dynamic feature movement (i.e. discrimination performance) is unclear. Cook et al.'s illusion was specific to certain eye-mouth phase relationships, so here we asked whether performance similarly depends on the relative phase of feature movement. Our stimuli contained sinusoidal eyebrow movement (raised-lowered) and mouth movement (opening-closing). Using a 2AFC design, we measured the detectability of misaligned eyebrow movement across a range of temporal offsets relative to the mouth. Subjects viewed two animated facial avatars, one on either side of fixation, for 3 seconds. The eyebrows and mouth oscillated at 1.5 Hz. The standard's eyebrows moved in phase with its mouth, while the comparison's were misaligned by 12.5 degrees of phase angle. Subjects judged which face's eyebrows were misaligned. Eyebrow oscillation could lead the mouth by 0° (eyebrows raised when mouth open) to 315° (mouth open before eyebrows raised) in increments of 45°, randomised across trials. Faces could be upright or inverted, in separate blocks. We tested 16 participants, and fixation was enforced using an eyetracker. Performance varied significantly as a function of eyebrow-mouth relative phase, and there was a significant interaction with face orientation. Polar phase plots show a bimodal circular distribution, with a performance advantage for upright faces when the mouth leads the eyebrows (180-315°). Collapsing over phase confirmed a significant interaction between orientation and phase domain (eyebrows ahead vs. mouth ahead). The data demonstrate a common encoding for facial features that is specific to upright faces and sensitive to the relative timing of their movement.
This suggests processing of global facial motion is mediated by internal models with preferred temporal phase relationships between dynamic features.
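As an informal illustration of the stimulus design described above, the sketch below generates the sinusoidal feature trajectories for one trial: both features oscillate at 1.5 Hz for 3 seconds, the eyebrows lead the mouth by one of the eight 45° steps, and the comparison face carries the extra 12.5° eyebrow misalignment. The frame rate and function names are assumptions for illustration, not details from the abstract.

```python
import math

FREQ_HZ = 1.5       # oscillation frequency of eyebrows and mouth (from the abstract)
DURATION_S = 3.0    # stimulus presentation duration (from the abstract)
FRAME_RATE = 60     # assumed display frame rate (not stated in the abstract)

def feature_position(t, phase_deg=0.0):
    """Normalised sinusoidal displacement at time t (1 = eyebrows raised / mouth open)."""
    return math.sin(2 * math.pi * FREQ_HZ * t + math.radians(phase_deg))

def trial_trajectories(eyebrow_lead_deg, misalignment_deg=12.5):
    """Per-frame positions for the mouth and for each face's eyebrows.

    eyebrow_lead_deg: eyebrow-mouth relative phase (0, 45, ..., 315 in the study).
    misalignment_deg: additional offset applied to the comparison face's eyebrows.
    """
    n_frames = int(DURATION_S * FRAME_RATE)
    times = [f / FRAME_RATE for f in range(n_frames)]
    mouth = [feature_position(t) for t in times]
    standard_brows = [feature_position(t, eyebrow_lead_deg) for t in times]
    comparison_brows = [feature_position(t, eyebrow_lead_deg + misalignment_deg)
                        for t in times]
    return mouth, standard_brows, comparison_brows
```

For example, at a 90° eyebrow lead the eyebrows are fully raised at stimulus onset while the mouth is at its midpoint; the subject's task is then to detect which face's eyebrow trajectory carries the extra 12.5° offset.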
Meeting abstract presented at VSS 2016