Abstract
The whole is greater than the sum of its parts. However, whether facial emotion perception relies on holistic (whole) or local (parts) information is still under debate. The present study applied amodal completion to examine the contributions of holistic and local information to facial emotion adaptation. Amodal completion is ubiquitous in daily life because we live in a cluttered world: objects that are partially occluded in natural settings are effortlessly perceived as complete wholes. We first generated a set of test faces whose expressions ranged from happy to sad. To manipulate amodal completion, three sets of adapting faces were also generated by manipulating the dynamics of facial parts (e.g., eyes and mouth), which flickered either coherently or incoherently. Participants were required to fixate on a central cross throughout the experiment. After passive exposure to the adapting amodal face, participants judged the facial expression of each test face as "happy" or "sad" in a two-alternative forced-choice (2-AFC) paradigm via a key press, while electroencephalogram (EEG) activity was recorded simultaneously. A baseline condition without any adapting stimulus was also included. Behavioral results showed a significant facial expression aftereffect when the adapting face was perceived as coherent (i.e., when amodal completion occurred), but a weaker effect in the disrupted condition. The three amodal adaptors also modulated the magnitudes of both the early component (N170) and the late component (~400 ms) of responses to the subsequent test faces. As the early component is thought to reflect the response to the appearance of the face, and the late component reflects the processing of emotional information, our results indicate that both local and holistic processes are critical for amodal completion in facial emotion perception.
Meeting abstract presented at VSS 2016