Abstract
It has been shown that watching moving lips facilitates speech perception. This facilitation, however, occurs only when the mouth is attended. Recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We therefore investigated whether the visual facilitation of speech perception required awareness of lip movements. We used a word categorization task in which participants listened to spoken words and determined, as quickly and accurately as possible, whether each word denoted a tool. While participants listened to the words, they saw the speaker's face articulating either the spoken word (the synchronous condition) or a different word of the same length (the asynchronous condition). Critically, the speaker's face was either fully visible (the aware trials) or suppressed from awareness using Continuous Flash Suppression, in which the face was presented to one eye and a strong dynamic mask to the other eye (the suppressed trials). Aware and suppressed trials were randomly intermixed. A dot-detection task ensured that participants attended to the mouth region whether the face was visible or suppressed. The small fraction of suppressed trials on which the face broke through into awareness was removed from the analysis. On the aware trials, responses to tool targets were no faster with synchronous than with asynchronous lip movements, probably because participants discounted the visual information, which was inconsistent with the auditory information on 50% of the trials. On the suppressed trials, however, responses to tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are still processed unconsciously by the visual system with sufficiently high temporal accuracy to facilitate speech perception based on crossmodal synchrony.
Meeting abstract presented at VSS 2012