December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Causal inference underlies hierarchical motion perception
Author Affiliations & Notes
  • Sabyasachi Shivkumar
    Brain and Cognitive Sciences, University of Rochester
    Center for Visual Science, University of Rochester
  • Boris Penaloza
    Brain and Cognitive Sciences, University of Rochester
    Center for Visual Science, University of Rochester
  • Gabor Lengyel
    Brain and Cognitive Sciences, University of Rochester
    Center for Visual Science, University of Rochester
  • Gregory C. DeAngelis
    Brain and Cognitive Sciences, University of Rochester
    Center for Visual Science, University of Rochester
  • Ralf M. Haefner
    Brain and Cognitive Sciences, University of Rochester
    Center for Visual Science, University of Rochester
  • Footnotes
    Acknowledgements  This work was supported by NIH U19NS118246
Journal of Vision December 2022, Vol.22, 4017. doi:https://doi.org/10.1167/jov.22.14.4017
      Sabyasachi Shivkumar, Boris Penaloza, Gabor Lengyel, Gregory C. DeAngelis, Ralf M. Haefner; Causal inference underlies hierarchical motion perception. Journal of Vision 2022;22(14):4017. https://doi.org/10.1167/jov.22.14.4017.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Perception of object motion is affected by the motion of other objects in the scene. Prior work (Gershman et al. 2016, Shivkumar et al. 2020, Bill et al. 2020, 2021) formalized this process as Bayesian causal inference (Kording et al. 2007). A key signature of this process is the transition from integrating to segmenting motion signals as their differences increase. To quantitatively test the causal inference predictions and constrain model parameters, we designed a new motion estimation task that overcomes two shortcomings of prior experiments. First, in our design, perceiving retinal motion vs. integrating motion signals is reflected in strikingly different behavioral reports. Second, it probes the perception of motion of an object relative to a larger object that is itself perceived to move relative to an even larger object. The stimulus consisted of moving dots (target) surrounded by two concentric rings of moving dots. Human observers (n=10) reported the perceived target direction using a dial. The motion direction of the inner ring was clockwise relative to the target, and the direction difference was varied to probe the transition from integration to segmentation of target and inner ring motion. Importantly, the motion of the outer ring was always counterclockwise relative to the target. Consequently, observers integrating the target and inner ring were predicted to show a clockwise bias in motion perception, whereas segmenting the target from the inner ring predicted a counterclockwise bias. Our results clearly support these predictions. In an additional experiment, we verified a critical prediction of our model: the transition from integration to segmentation depends on the uncertainty about the target motion. The fitted model parameters may help characterize causal inference across different clinical populations (Noel et al. 2021). Our model can also make predictions for the influence of the surround on the responses of MT neurons (Born et al. 2005).
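
The core computation the abstract describes, a graded transition from integrating to segmenting two motion signals as their direction difference grows, can be sketched as a standard Bayesian causal-inference model in the style of Kording et al. (2007). The sketch below is a minimal one-level illustration, not the authors' fitted hierarchical model; the observer compares the marginal likelihood of the target and inner-ring measurements under a common cause vs. independent causes, then model-averages the two estimates. All parameter values (`sigma`, `sigma_p`, `p_common`) are placeholder assumptions, not values from the study.

```python
import math

def causal_inference_estimate(x_target, x_ring, sigma=5.0, sigma_p=20.0,
                              p_common=0.5):
    """Estimate target direction (deg) via Bayesian causal inference.

    Illustrative sketch (Kording et al. 2007 style): the target and inner-ring
    direction measurements are attributed either to one common motion source
    (integration) or to independent sources (segmentation). Parameter values
    are assumptions for illustration, not fitted values from this study.
    """
    v, vp = sigma**2, sigma_p**2

    # Marginal likelihood of both measurements under a common cause
    # (one shared latent direction with a zero-mean Gaussian prior).
    vc = v * v + 2.0 * v * vp
    like_common = math.exp(-((x_target - x_ring)**2 * vp
                             + x_target**2 * v + x_ring**2 * v) / (2.0 * vc))
    like_common /= 2.0 * math.pi * math.sqrt(vc)

    # Marginal likelihood under independent causes (separate latent directions).
    vi = v + vp
    like_indep = (math.exp(-x_target**2 / (2.0 * vi)) / math.sqrt(2.0 * math.pi * vi)
                  * math.exp(-x_ring**2 / (2.0 * vi)) / math.sqrt(2.0 * math.pi * vi))

    # Posterior probability that the two motions share a common cause.
    post_common = (p_common * like_common
                   / (p_common * like_common + (1.0 - p_common) * like_indep))

    # Optimal estimate under each hypothesis: reliability-weighted integration
    # vs. segmentation (target measurement combined with the prior alone).
    est_integrate = (x_target / v + x_ring / v) / (2.0 / v + 1.0 / vp)
    est_segment = (x_target / v) / (1.0 / v + 1.0 / vp)

    # Model averaging: mix the two estimates by the causal posterior.
    estimate = post_common * est_integrate + (1.0 - post_common) * est_segment
    return estimate, post_common
```

With a small target-ring direction difference, the common-cause posterior is high and the estimate is pulled toward the ring direction (the integration bias); with a large difference, the posterior collapses and the estimate reverts toward the target measurement alone, reproducing the integration-to-segmentation transition the experiment probes.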
