August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
Motions of Parts and Wholes: An Exogenous Reference-Frame Model of Non-Retinotopic Processing
Author Affiliations
  • Aaron Clarke
  • Haluk Öğmen
    Dept. of Electrical & Computer Engineering, University of Houston
  • Michael Herzog
Journal of Vision August 2014, Vol.14, 466. doi:
Object parts are seen relative to their object. For example, the reflector on a moving bicycle wheel appears to follow a circular path orbiting the wheel's center. It is almost impossible to perceive the reflector's "true" retinotopic motion, which is a cycloid. The visual system discounts the bicycle's motion from the reflector's motion, much as retinal shifts caused by eye movements are discounted. With reflectors, however, no efference copy is available; in addition, the visual system needs to create an exogenous reference frame for each bicycle. Relativity of motion cannot easily be explained by classical motion models, because they pick up only retinotopic motion. Here, we show how a two-stage model based on vector fields can explain relativity of motion. First, the image is segmented into objects and their parts (e.g., bicycles, reflectors) using association fields. Motion is computed for each object and part (e.g., bicycle and reflector motions) using standard motion detectors, and ambiguous correspondence matches are resolved with an autoassociative neural network (Dawson, 1991). Next, the motion vectors are grouped into local manifolds using grouping cues such as proximity and common fate, so that all the motion vectors from one bicycle and its parts are grouped together. Within each group, the common motion vector is then subtracted from the individual motion vectors (e.g., the bicycle's motion is subtracted from the motion of its reflectors). The model thus tracks the bicycle and its reflectors across time, discounting the bicycle's overall motion. We test our model on several benchmarks, including non-retinotopic motion perception in the Ternus-Pikler display. Our model clearly outperforms all previous models that either lack image segmentation or apply only a single stage of spatio-temporal filtering, and thus fail to place object parts in a motion-based exogenous reference frame.
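The subtraction stage can be illustrated numerically with the bicycle-reflector example. The sketch below is not the authors' implementation: the function names, the parameterization of the rolling wheel, and the finite-difference stand-in for a motion detector are all our illustrative assumptions. It builds the reflector's retinotopic cycloid, estimates its motion vector, and subtracts the group's common motion (the wheel center's translation); the residual vector has constant magnitude, i.e. the uniform circular orbit we actually perceive.

```python
import math

def reflector_path(t, r=1.0, v=1.0):
    """Retinotopic position of a rim reflector on a wheel of radius r
    rolling at speed v: a cycloid = center translation + rotation."""
    theta = v * t / r                  # wheel rotation angle
    cx, cy = v * t, r                  # wheel center translates rightward
    return (cx + r * math.sin(theta), cy - r * math.cos(theta))

def motion_vector(path, t, dt=1e-4):
    """Finite-difference velocity, standing in for a motion detector."""
    (x0, y0), (x1, y1) = path(t), path(t + dt)
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def relative_motion(t, r=1.0, v=1.0):
    """Subtract the group's common motion vector (the center's
    translation, here (v, 0)) from the part's retinotopic motion."""
    vx, vy = motion_vector(lambda s: reflector_path(s, r, v), t)
    return (vx - v, vy)

# Residual speed is constant (= v): pure rotation about the center.
speeds = [math.hypot(*relative_motion(t)) for t in (0.0, 0.5, 1.0, 2.0)]
```

By contrast, the retinotopic speed of the same reflector varies between 0 and 2v over a wheel revolution, which is why the cycloid is so hard to see directly.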

Meeting abstract presented at VSS 2014

