August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Dynamics of gaze and body while viewing omnidirectional stimuli
Author Affiliations & Notes
  • Erwan David
    Scene Grammar Lab, Goethe University Frankfurt
  • Melissa Vo
    Scene Grammar Lab, Goethe University Frankfurt
  • Footnotes
    Acknowledgements  This work was supported by SFB/TRR 135 project C7 to Melissa L.-H. Võ and the Hessisches Ministerium für Wissenschaft und Kunst (HMWK; project ‘The Adaptive Mind’).
Journal of Vision August 2023, Vol.23, 5123.

      Erwan David, Melissa Vo; Dynamics of gaze and body while viewing omnidirectional stimuli. Journal of Vision 2023;23(9):5123.


      © ARVO (1962-2015); The Authors (2016-present)


Recent vision studies conducted in virtual reality (VR) are helping to clarify the roles the eyes and the head play while we observe scenes that surround us. Unfortunately, very few studies have examined the contributions of the rest of the body to gaze movements. To shed some light on this subject, we designed a protocol to gather tracking data on torso and leg movements, in addition to eye and head tracking. Wearing a VR headset and trackers on the torso and leg, our participants observed scenes that were either simple (Gabor patches, 3D shapes) or complex (360° photos, 3D rooms). Stimuli were either projected flat on the surface of a sphere or fully 3D. Additionally, half of the trials were free-viewing and half were followed by a recall task, to study a potential effect of goal-directedness (all trials lasted 10 s). We show that under the impetus of a goal, participants made longer saccades (and shorter fixations), reflecting a push to explore more within the allotted time. The head, torso, and leg drove this effect, with ampler motions aligned in the same direction to reach unseen parts of the VR environment. Participants also used their torso and legs more while exploring complex scenes than simple ones. Time-course analyses show stable eye movement amplitudes throughout a trial, whereas the tracked body parts start at rest and gain amplitude over time, peaking around 5 s. Our findings show that torso and leg movements are rather coarse in their dynamics (e.g., absolute and relative angles of motion) and serve the purpose of exploration, much like what has been shown for the head. In contrast, eye movements represent a more fine-grained behavior, as the eyes serve to analyze the content of the field of view.
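The amplitude and time-course measures described above can be illustrated with a minimal sketch: angular amplitude as the angle between successive unit direction vectors from a tracker (eye, head, torso, or leg), then binned over the trial to inspect its time course. The helper names and the binning scheme are hypothetical, not the authors' actual pipeline.

```python
import numpy as np

def angular_amplitude(directions):
    """Angular distance (degrees) between successive unit direction
    vectors, e.g. gaze or body-part orientations sampled in VR."""
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)  # ensure unit length
    # Dot product of consecutive samples, clipped for numerical safety
    dots = np.clip(np.sum(d[:-1] * d[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(dots))

def binned_mean_amplitude(amplitudes, timestamps, bin_s=1.0):
    """Mean movement amplitude per time bin (e.g. 1-s bins of a 10-s
    trial), to examine how exploration builds up over a trial."""
    # Amplitude i spans samples i..i+1; assign it to the later timestamp
    t = np.asarray(timestamps, dtype=float)[1:]
    bins = np.floor(t / bin_s).astype(int)
    return {int(b): float(np.mean(np.asarray(amplitudes)[bins == b]))
            for b in np.unique(bins)}

# Toy usage: three orthogonal directions give two 90-degree movements
amp = angular_amplitude([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
course = binned_mean_amplitude(amp, [0.0, 0.5, 1.5])
```

On real data, plotting `course` per tracker would reproduce the kind of time-course comparison reported here: flat for the eyes, rising from rest for torso and leg.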

