Yung-Hao Yang, Su-Ling Yeh; Oculomotor Response Precedes Awareness Access of Multisensory Emotional Information Under Interocular Suppression. Journal of Vision 2017;17(10):192. doi: 10.1167/17.10.192.
Previous studies have shown that emotionally salient information can attract attention in the absence of visual awareness. Since an affective voice can enhance the emotional meaning of a facial expression, we tested whether the emotional congruency of affective voices can also modulate attention allocation to invisible facial expressions. We adopted the continuous flash suppression (CFS) paradigm to render facial expressions (e.g., happy and fearful) invisible to the participants, and manipulated affective voices (e.g., laughing and screaming) to generate either congruent or incongruent conditions. We measured the time to release from interocular suppression and simultaneously recorded eye movements as an index of attention allocation. The results showed that happy faces elicited shorter first-saccade latencies and shorter suppression times than fearful faces; the latter result was replicated in experiments with different databases. Importantly, congruent affective voices yielded shorter dwell times and shorter suppression times than their incongruent counterparts. These results suggest that an affective voice can influence the attentional attraction of an invisible facial expression. In addition, they provide new evidence that the emotional meaning of a facial expression can be extracted under interocular suppression and thus integrated with an affective voice.

Keywords: facial expression, multisensory integration, unconscious processing, eye movement
Meeting abstract presented at VSS 2017