Vision Sciences Society Annual Meeting Abstract | October 2020
Head jitter enhances motion-in-depth perception
Author Affiliations & Notes
  • Jacqueline M. Fulvio
    University of Wisconsin - Madison
  • Bas Rokers
    New York University - Abu Dhabi
  • Footnotes
    Acknowledgements: Facebook Reality and Google Daydream
Journal of Vision October 2020, Vol.20, 391. doi:https://doi.org/10.1167/jov.20.11.391
Citation: Jacqueline M. Fulvio, Bas Rokers; Head jitter enhances motion-in-depth perception. Journal of Vision 2020;20(11):391. https://doi.org/10.1167/jov.20.11.391.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Motion-in-depth perception relies on multiple sensory cues. Previous work has quantified the contribution of binocular motion cues, i.e. interocular velocity differences and changing disparities over time, as well as monocular motion cues, i.e. size and density changes. However, even when these cues are presented in concert, observers systematically misreport the direction of motion-in-depth stimuli. Here we considered the potential role of small involuntary head movements, i.e. head jitter, in motion-in-depth perception. We first measured head jitter under fixating but head-free viewing conditions. Spectral densities for all three head movement axes exhibited a pink (1/f) noise pattern, consistent with random drift rather than voluntary control. Head translations and rotations were ~12 mm/s and ~2.5 deg/s on average, respectively. While small, the resulting retinal motion signals were above perceptual threshold. We subsequently investigated the impact of head jitter on motion-in-depth perception using virtual reality. Observers reported the motion-in-depth of a 3D target under head-free viewing while head tracking was on, off, or delayed either randomly or uniformly. Providing head-jitter-induced retinal signals ("on") increased sensitivity and reduced bias in motion-in-depth perception. Increasing random variability in head-movement-to-photon latency ("delayed") reduced sensitivity and produced biases comparable to those observed when head tracking was turned off altogether. Uniform delays in motion-to-photon latency likewise reduced performance. Thus, the retinal signals produced by head jitter enhanced motion-in-depth perception, provided that they were (1) consistent and (2) low-latency. These results suggest that the head-restrained viewing typical of psychophysical experiments eliminates cues critical to motion-in-depth perception, leading to underestimates of perceptual sensitivity. Similarly, in addition to the well-established role of monocular and binocular motion cues, other cues rarely considered in traditional motion perception experiments, such as accommodative blur and lighting, may serve critical roles in the accurate perception of motion-in-depth.
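
The spectral-density analysis described above can be illustrated with a short sketch. The Python example below is a minimal illustration, not the authors' actual pipeline: it estimates the power spectral density of a single head-movement trace with Welch's method and fits the spectral exponent in log-log coordinates, where a slope near -1 corresponds to the pink (1/f) pattern reported in the abstract. The 90 Hz sample rate, the synthetic pink-noise trace, and the helper function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): estimate the power spectral
# density of a head-movement trace and fit its spectral exponent.
# Pink (1/f) noise yields a log-log slope near -1; white noise, near 0.
import numpy as np
from scipy.signal import welch

FS = 90.0  # assumed head-tracker sample rate in Hz (typical for VR headsets)


def synthetic_pink_noise(n, rng):
    """Generate 1/f-power noise by shaping white noise in the frequency domain."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]  # avoid division by zero at the DC bin
    return np.fft.irfft(spectrum / np.sqrt(freqs), n)  # amplitude ~ f^-0.5 -> power ~ 1/f


def spectral_exponent(trace, fs):
    """Welch PSD estimate followed by a straight-line fit in log-log space."""
    freqs, psd = welch(trace, fs=fs, nperseg=1024)
    mask = freqs > 0  # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope


rng = np.random.default_rng(0)
trace = synthetic_pink_noise(int(FS * 60), rng)  # 60 s stand-in for one head-movement axis
print(f"estimated spectral exponent: {spectral_exponent(trace, FS):.2f}")
# Expect a value near -1, the 1/f signature the abstract reports for head jitter.
```

In practice the same exponent fit would be applied separately to each recorded translation and rotation axis; the synthetic trace above simply stands in for one such recording.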
