December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Feature tracking counteracts illusory non-rigidities from motion-energy
Author Affiliations
  • Akihito Maruya
    Graduate Center for Vision Research, State University of New York, New York, USA
  • Qasim Zaidi
Graduate Center for Vision Research, State University of New York, New York, USA
Journal of Vision December 2022, Vol.22, 4256. doi:https://doi.org/10.1167/jov.22.14.4256
© ARVO (1962-2015); The Authors (2016-present)
Abstract

How does the brain make objects appear rigid when projected retinal images are deformed non-rigidly by object or observer motion? We used rotating rigid objects that appear rigid or non-rigid depending on speed to test whether tracking of salient features counteracts non-rigidity. When two circular rings are rigidly linked at an angle and rotated around an axis oblique to both, they appear to be rolling and wobbling; window displays and movies have used this illusion. Using arrays of MT component cells, we show that despite the object being physically rigid, the predominant motion energy at the contours of the rings supports a percept of disconnection and wobble. Forced choices between the link being rigidly connected or not (10 observers) revealed non-rigid percepts at moderate speeds (6.0 deg/sec) but rigid percepts at slow speeds (0.6 deg/sec). If the link is painted or replaced by a gap, or if the rings are polygons with vertices, the rings appear rigidly rotating at 6.0 deg/sec. Phenomenologically, the motion of painted segments, gaps, or vertices provides cues for rotation and against wobbling. These salient features can be tracked by arrays of MT pattern-motion cells or by explicit feature tracking. At high speeds (60 deg/sec), all configurations appear non-rigid. Salient-feature tracking thus contributes to rigidity at slow and moderate speeds, but not at high speeds. We trained a convolutional neural network on motion flows to distinguish between wobbling and rotation. Flows from MT component cells fed to the trained CNN give a high probability of wobbling, whereas flows from feature tracking give a high probability of rotation. A prior for wobbling for different shapes was simulated with the Blender physics engine. A generative model that gives lower weight to the CNN motion-energy output at slow speeds and higher weight at fast speeds, combined with the prior, qualitatively explains observers' percepts.
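The cue-combination logic described above — motion energy favoring wobble, feature tracking favoring rotation, with the weight on motion energy growing with speed and the result modulated by a shape-based wobble prior — can be sketched numerically. This is a minimal illustrative sketch, not the authors' fitted model: the logistic weighting rule, the parameter values (`s0`, `k`), and the placeholder cue probabilities are all assumptions introduced here for demonstration.

```python
import numpy as np

def rigidity_percept(p_rot_energy, p_rot_feature, p_rot_prior, speed,
                     s0=6.0, k=1.0):
    """Combine two cues to 'rotation' (vs. 'wobble') with a
    speed-dependent weight on the motion-energy pathway, then
    apply a shape-based prior over the two hypotheses.

    All numbers here are illustrative placeholders, not fitted values.
    """
    # Weight on motion energy grows with speed (logistic in log-speed,
    # centered on a hypothetical crossover speed s0 in deg/sec).
    w = 1.0 / (1.0 + np.exp(-k * np.log(speed / s0)))
    # Linear cue combination: motion energy dominates at high speed,
    # feature tracking at low speed.
    p_rot_cue = w * p_rot_energy + (1.0 - w) * p_rot_feature
    # Multiply by the prior and renormalize over {rotation, wobble}.
    num = p_rot_cue * p_rot_prior
    den = num + (1.0 - p_rot_cue) * (1.0 - p_rot_prior)
    return num / den

# Motion energy favors wobble (low p_rotation); feature tracking favors
# rotation (high p_rotation); a neutral prior for this shape.
for speed in (0.6, 6.0, 60.0):
    p = rigidity_percept(p_rot_energy=0.1, p_rot_feature=0.9,
                         p_rot_prior=0.5, speed=speed)
    print(f"{speed:5.1f} deg/sec -> p(rotation) = {p:.2f}")
```

With these placeholder cue values, the probability of perceiving rotation is high at 0.6 deg/sec, intermediate at 6.0 deg/sec, and low at 60 deg/sec, qualitatively matching the reported shift from rigid to non-rigid percepts as speed increases.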
