Touchai Thawai, Sakol Teeravarunyou, Geoffrey Woodman, Sirawaj Itthipuripat; The degree of gaze-induced shifts in overt attention explains inter-subject variability in long-term memory performance. Journal of Vision 2018;18(10):1193. https://doi.org/10.1167/18.10.1193.
© ARVO (1962-2015); The Authors (2016-present)
Gaze is an important cue thought to facilitate effective social interaction and communication. Previous studies have shown that gaze can induce an attentional shift toward the location that matches its direction, and that this attentional shift can in turn enhance sensory information processing in simple perceptual tasks. However, less is known about how gaze cues may influence selective attention and long-term memory in more complex, real-world tasks. To examine this issue, we monitored eye movements via an infrared eye-tracking camera in 95 male and female adults while they read and listened to sentences containing autobiographical information. On some trials, the sentence was presented by itself. On other trials, an animated facial stimulus was also present, which either gazed toward the sentence (congruent), gazed directly at the viewer (neutral), or gazed away from the sentence (incongruent). We found that the congruent gaze cue effectively induced overt shifts of attention to the sentence: the probability that the eyes landed on the sentence increased relative to the incongruent gaze cue. Moreover, the degree of gaze-induced attentional modulation in the eye movement data positively correlated with the degree of attentional modulation in long-term memory performance. Taken together, these results suggest that gaze can induce overt attentional shifts toward relevant information in a complex behavioral task that requires learning and memory. Furthermore, the attentional enhancement of memory performance varies across individuals depending on the degree to which social cues influence attentional and oculomotor systems.
Meeting abstract presented at VSS 2018