Abstract
Observers can extract the translational motion of Gabor arrays (drifting sine gratings in static Gaussian windows) when the velocities of the individual Gabors are consistent with a global solution (Amano et al., 2009; doi:10.1167/9.3.4). The ambiguity in the motion of the Gabors (the aperture problem) is overcome by pooling over space and orientation. We have shown that observers can perform a similar disambiguation for rotating and expanding stimuli, for which a large-field pooling algorithm computing global translation would be uninformative (Rider and Johnston, 2008, ECVP). Models of global complex motion encoding typically involve three stages: local motion extraction, pooling to provide unambiguous 2D estimates of local motion, and a third stage that uses these estimates to calculate the global complex motion percept. We developed a novel stimulus that is theoretically ambiguous at all three stages. The orientations of an array of Gabors are chosen to be orthogonal to their position vectors relative to the centre of the array, so the carriers form concentric ring patterns. The drift speeds are then set to be consistent with a rigid translation, but this means the arrays are also consistent with an infinite number of rotations. Subjects were shown these arrays at a number of positions in the visual field and adjusted the motion of a surrounding array of plaid patches to match the perceived motion of the Gabor array. We found that the stimuli were perceived as translating, rotating clockwise, or rotating anticlockwise depending on their position in the visual field, although conventional models predict translation only. We propose an explanation in which local 1D motion estimates are used directly in computing the global rotation without being locally disambiguated. This implies a novel mechanism for solving the aperture problem that uses global rotation templates.
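The ambiguity of the stimulus can be sketched numerically: with each Gabor's carrier orthogonal to its position vector, the drift (normal) direction is radial, and any rotation about the array centre contributes a purely tangential velocity that is invisible through the aperture. A minimal illustration (all parameter values are hypothetical, not those used in the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gabor positions relative to the array centre (arbitrary units).
pos = rng.uniform(-1.0, 1.0, size=(20, 2))

# Carrier orientation orthogonal to the position vector, so the
# grating's drift (normal) direction is the radial unit vector.
normals = pos / np.linalg.norm(pos, axis=1, keepdims=True)

V = np.array([0.3, -0.2])  # hypothetical rigid translation
omega = 1.5                # hypothetical rotation rate about the centre

# 1D (aperture) speed seen by each Gabor under pure translation.
speed_translation = normals @ V

# Rotation about the centre adds velocity omega * (-y, x), which is
# tangential and hence orthogonal to every radial normal.
rot_vel = omega * np.stack([-pos[:, 1], pos[:, 0]], axis=1)
speed_with_rotation = np.einsum('ij,ij->i', normals, V + rot_vel)

# The local 1D measurements are identical: the array is consistent
# with the translation alone or with translation plus any rotation.
assert np.allclose(speed_translation, speed_with_rotation)
```

Because the tangential component is unconstrained at every aperture, no stage that relies only on these 1D measurements can distinguish the rigid translation from translation combined with an arbitrary rotation about the array centre.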
Acknowledgement: Supported by the EPSRC and BBSRC.