Vision Sciences Society Annual Meeting Abstract  |   September 2016
Amodal completion in facial expression aftereffect: an EEG study
Author Affiliations
  • Chengwen Luo
    Division of Psychology, School of Humanities and Social Science, Nanyang Technological University
  • Xiaohong Lin
    Faculty of Health Science, University of Macau
  • Edwin Burns
    Division of Psychology, School of Humanities and Social Science, Nanyang Technological University
  • Zhen Yuan
    Faculty of Health Science, University of Macau
  • Hong Xu
    Division of Psychology, School of Humanities and Social Science, Nanyang Technological University
Journal of Vision September 2016, Vol.16, 156. doi:https://doi.org/10.1167/16.12.156
Abstract

The whole is greater than the sum of its parts. However, whether facial emotion perception relies on holistic (whole) or local (part-based) information is still under debate. The present study applies amodal completion to examine the contributions of holistic and local information to facial emotion adaptation. Amodal completion is ubiquitous in daily life because we live in a cluttered world: objects that are partially occluded in natural settings are effortlessly perceived as complete wholes. We first generated a set of test faces whose expressions ranged from happy to sad. To manipulate amodal completion, three sets of adapting faces were also generated by manipulating the dynamics of the facial parts (e.g., eyes and mouth), flickering them either coherently or incoherently. Participants fixated on a central cross throughout the experiment. After passive exposure to the adapting amodal face, participants judged the facial expression of each test face as "happy" or "sad" in a two-alternative forced-choice (2-AFC) paradigm via key press, while electroencephalogram (EEG) activity was recorded simultaneously. A baseline condition without any adapting stimulus was also included. Behavioral results showed a significant facial expression aftereffect when the adapting face was perceived as coherent (i.e., when amodal completion occurred), but a weaker effect in the disrupted condition. The three amodal adaptors also modulated the magnitudes of both an early component (N170) and a late component (~400 ms) evoked by the subsequent test faces. As the early component is thought to index the response to the appearance of a face, and the late component the processing of emotional information, our results indicate that both local and holistic processes are critical for amodal completion in facial emotion perception.

Meeting abstract presented at VSS 2016
