September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Attentional Capture during Public Speaking in Virtual Reality Environment
Author Affiliations & Notes
  • Sihang Guo
    University of Texas at Austin
  • Mikael Rubin
    University of Texas at Austin
  • Ruohan Zhang
    University of Texas at Austin
  • Karl Muller
    University of Texas at Austin
  • Michael Telch
    University of Texas at Austin
  • Mary Hayhoe
    University of Texas at Austin
  • Footnotes
    Acknowledgements  F31 NRSA to Mikael Rubin
Journal of Vision September 2021, Vol.21, 2860. doi:https://doi.org/10.1167/jov.21.9.2860
Abstract

It is well established that the allocation of gaze is tightly linked to behavioral goals, but many situations in the natural world are loosely structured, and many events are unpredictable. Increasing evidence suggests that attentional capture can be context-dependent and modulated by attentional control (Luck et al., 2020). Yet we know little about this mechanism in unstructured situations, or about which events might be attentionally salient. This is relevant for social interactions, where the responses of other people may carry important information. To examine the role of attentional capture in a social context, we asked 84 participants to give a 5-minute speech in a virtual reality environment. A pre-recorded 360-deg film of 5 audience members was presented in an Oculus DKII headset with an SMI eye tracker. Individual audience members were instructed to act either interested (e.g., leaning forward or nodding), not interested (e.g., looking away or using a cell phone), or neutral (e.g., shifting in the chair). We characterized the speakers’ gaze in response to these audiences in terms of “capture” (gaze allocated toward an audience member during an action) and “repulsion” (gaze shifted away from an audience member during an action). We found that audience actions reliably attracted gaze when they were in the field of view, roughly four times more often than when no action was performed. Speakers also looked away from an audience member during an action (twice as likely as the no-action baseline), although this was less probable than attraction. Interestingly, neither the size of the movement nor the degree of interest it conveyed appeared to have much effect. The results suggest that speakers fixate an audience member during an action in order to gain socially relevant information, and that the effectiveness of attentional capture mechanisms is strongly modulated by social relevance.
