Amy Dawel, Elinor McKone, Jessica Irons, Richard O'Kearney, Romina Palermo; Look out! Gaze-cueing is greater from fearful faces in a dangerous context for children and adults. Journal of Vision 2013;13(9):595. doi: 10.1167/13.9.595.
People shift their attention to follow where other people look; thus they are faster to respond to objects that are looked at (valid trials) than to objects that are not (invalid trials). Gaze-cueing is measured as the difference in reaction times (RTs) between invalid and valid trials. In this study, we investigate whether non-predictive gaze-cueing is modulated by: facial expression (fearful, happy, neutral); the affective decision required about the gazed-at object (fear-relevant decision of dangerous versus safe animal, or a neutral control decision of blue versus orange bar); and stimulus onset asynchrony (SOA; 300 or 700 ms). For the first time, we also investigate whether there are differences between adults and 8- to 12-year-old children, who completed the fear-relevant decision condition only. Adults showed greater gaze-cueing from fearful than from happy or neutral faces at both SOAs, but only in the fear-relevant decision condition. Like adults, children showed greater gaze-cueing from fearful than happy faces in the fear-relevant decision condition; however, this fear-advantage over happy faces was observed only at the 300-ms SOA, and there was no fear-advantage relative to neutral faces at either SOA. An additional analysis investigated whether responses overall differed for dangerous and safe animals. Adults responded faster to dangerous than to safe animals when cued by a face (regardless of expression) but not when cued by a non-social stimulus (an arrow), suggesting a social-specific advantage. Children, however, showed no advantage for dangerous animals regardless of cue type. Overall, the results suggest that 8- to 12-year-old children are partially adult-like in that they show some fear-advantage in gaze-cueing in a dangerous context, but are not yet fully mature in their ability to integrate social and contextual information to drive shifts of attention.
Meeting abstract presented at VSS 2013