Abstract
It is well established that the allocation of gaze is tightly linked to behavioral goals, but many situations in the natural world are loosely structured, and many events are unpredictable. Increasing evidence suggests that attentional capture can be context-dependent and modulated by attentional control (Luck et al., 2020). Yet we know little about this mechanism in unstructured situations, or about which events might be attentionally salient. This is particularly relevant for social interactions, where the responses of other people may carry important information. To examine the role of attentional capture in a social context, we asked 84 participants to give a 5-minute speech in a virtual reality environment. A pre-recorded 360-degree film of 5 audience members was presented in an Oculus DKII headset equipped with an SMI eye-tracker. Individual audience members were instructed to act either interested (e.g., leaning forward or nodding), not interested (e.g., looking away or using a cell phone), or neutral (e.g., shifting in the chair). We characterize the speakers’ gaze in response to these audiences in terms of “capture” (gaze allocated toward an audience member during an action) and “repulsion” (gaze shifted away from an audience member during an action). We found that audience actions reliably attracted gaze when they were in the field of view: capture was four times more likely than when no actions were performed. Speakers also looked away from an audience member during an action (twice as likely as the no-action baseline), although repulsion was less probable than capture. Interestingly, neither the size of the movement nor the level of interest it conveyed appeared to have much effect. These results suggest that speakers fixate an audience member during an action in order to gain socially relevant information, and that the effectiveness of attentional capture mechanisms is strongly modulated by social relevance.