Vision Sciences Society Annual Meeting Abstract  |   August 2010
Position-variant perception of a novel ambiguous motion field
Author Affiliations & Notes
  • Andrew Rider
    Cognitive, Perceptual and Brain Sciences, University College London
    CoMPLEX, University College London
  • Alan Johnston
    Cognitive, Perceptual and Brain Sciences, University College London
    CoMPLEX, University College London
  • Shin'ya Nishida
    NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation
Journal of Vision August 2010, Vol.10, 850. https://doi.org/10.1167/10.7.850
Abstract

Observers can extract the translational motion of Gabor arrays (static Gaussian-windowed drifting sine gratings) when the velocities of the individual Gabors are consistent with a global solution (Amano et al., 2009; doi:10.1167/9.3.4). The ambiguity in the motion of each Gabor (the aperture problem) is overcome by pooling over space and orientation. We have shown that observers can perform a similar disambiguation for rotating and expanding stimuli, where a large-field pooling algorithm for computing global translation would be uninformative (Rider and Johnston, 2008, ECVP). Models of global complex motion encoding typically involve three stages: local motion extraction, pooling to provide unambiguous 2D estimates of local motion, and a third stage that uses these estimates to compute the global complex motion percept. We developed a novel stimulus that is theoretically ambiguous at all three stages. The orientations of an array of Gabors are chosen to be orthogonal to their position vectors relative to the centre of the array, so that they form concentric ring patterns. The drift speeds are then set to be consistent with a rigid translation, but this means the arrays are also consistent with an infinite number of rotations. Subjects were shown these arrays at a number of positions in the visual field and adjusted the motion of a surrounding array of plaid patches to match the perceived motion of the Gabor array. We found that the stimuli were perceived as translating, rotating clockwise or rotating anticlockwise depending on their position in the visual field, although conventional models predict translation only. We propose an explanation in which local 1D motion estimates are used directly in computing the global rotation without being locally disambiguated. This implies a novel mechanism for solving the aperture problem that uses global rotation templates.
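The stimulus ambiguity described above can be sketched numerically. In a minimal reading of the construction, each Gabor's carrier is oriented tangentially (so its drift axis, the grating normal, points radially from the array centre), and its drift speed is the projection of a single global translation v onto that radial axis. Because a planar rotation about any centre c contributes only a uniform term to the radial components, an infinite family of (omega, c) pairs reproduces exactly the same 1D drift speeds. The sketch below is illustrative only; the function names and the specific translation/rotation values are assumptions, not taken from the paper.

```python
import numpy as np

def gabor_array(positions, v):
    """Set up the ambiguous stimulus: for each Gabor at position p
    (relative to the array centre), the drift axis is the radial unit
    vector p/|p|, and the 1D drift speed is the projection of the
    global translation v onto that axis."""
    normals = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    speeds = normals @ v  # 1D speeds consistent with rigid translation v
    return normals, speeds

def rotation_normal_speeds(positions, omega, centre):
    """1D speeds, measured along each Gabor's radial drift axis, produced
    by a rigid planar rotation of angular velocity omega about `centre`.
    Velocity at p is omega * perp(p - centre), where perp(x, y) = (-y, x)."""
    normals = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    rel = positions - centre
    vel = omega * np.stack([-rel[:, 1], rel[:, 0]], axis=1)
    return np.sum(vel * normals, axis=1)
```

For a translation v = (1, 0), a rotation of angular velocity omega about the centre (0, 1/omega) yields identical radial drift speeds for every Gabor, which is why the local 1D constraints cannot distinguish the rigid translation from this family of rotations.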

Acknowledgement: Supported by the EPSRC and BBSRC.
