Abstract
Basic visual features such as contrast are processed in a highly nonlinear fashion, resulting in ‘dipper’ shaped functions in discrimination experiments. Previous work has applied a similar paradigm to investigate the representation of higher level properties such as facial identity (Dakin & Omigie, 2009, Vision Res, 49, 2285-2296). Here we ask whether emotional expressions are processed nonlinearly by measuring discrimination thresholds for six emotions (anger, happiness, sadness, fear, disgust, surprise) morphed along a continuum relative to neutral. Using a 2IFC paradigm, we estimated discrimination thresholds at six ‘pedestal’ morph levels between 0% and 75%. The participants’ (N=5) task was to indicate which of two faces (pedestal, or pedestal plus increment) conveyed the stronger expression. We found evidence of facilitation at low morph levels (~15%) and masking at higher levels (>60%), indicating a nonlinearity in the neural representation of expression, comparable to that reported for lower level visual features. We then asked whether facial features are integrated across the face before or after this nonlinearity by keeping the expression in one half of the face (top or bottom) fixed at neutral, and applying the pedestal and increment expressions to the other half. Sensitivity decreased by around a factor of two along the entire dipper function, relative to the whole-face condition, suggesting that facial expressions are integrated before nonlinear transduction. Finally, we assessed the amount of interference between the two halves of the face by fixing the expression in one half at a given level (the ‘mask’), and applying the target increment to the other half. This produced a strong masking effect, such that target expressions needed to approach the level of the mask to be detected. This is evidence for competition between the neural representations of different facial features.
Meeting abstract presented at VSS 2015
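The dipper shape referred to above (facilitation at low pedestals, masking at high pedestals) falls out of any accelerating-then-compressive transducer. As a minimal sketch, not the authors' fitted model, the following assumes a Legge-Foley-style transducer with illustrative parameter values, and defines threshold as the smallest increment that raises the internal response by a fixed criterion:

```python
import numpy as np

def transducer(c, p=2.4, q=2.0, z=0.1):
    # Nonlinear transducer: accelerating at low signal strength,
    # compressive at high strength. Parameters p, q, z are
    # illustrative assumptions, not values fitted to the VSS data.
    return c**p / (c**q + z**q)

def threshold(pedestal, delta_r=0.05):
    # Smallest increment dc such that the internal response grows
    # by a criterion amount delta_r (one just-noticeable difference),
    # found by scanning a fine grid of candidate increments.
    r0 = transducer(pedestal)
    dcs = np.linspace(1e-4, 1.0, 20000)
    rs = transducer(pedestal + dcs)
    idx = np.searchsorted(rs - r0, delta_r)
    return dcs[min(idx, len(dcs) - 1)]

# Thresholds dip below the zero-pedestal value at a low pedestal
# (facilitation) and rise above it at a high pedestal (masking).
for ped in (0.0, 0.15, 0.6):
    print(f"pedestal {ped:.2f}: threshold {threshold(ped):.3f}")
```

With these parameters the predicted threshold at a 15% pedestal is lower than at zero pedestal, and the threshold at a 60% pedestal is higher, reproducing the qualitative dipper shape; the specific numbers depend entirely on the assumed parameter values.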