Elliot D. Freeman, Jon Driver; Selection of specific subjective states via contextual disambiguation in structure-from-motion. Journal of Vision 2006;6(6):265. doi: https://doi.org/10.1167/6.6.265.
We demonstrate how specific subjective rotational states can be switched at arbitrary intervals by reversing the objective rotation of a physically disambiguated context stimulus.
We used random-dot kinematograms (RDKs) to simulate 3D objects rotating in depth. In the absence of additional depth cues specifying the relative ordering of surfaces, such objects appear to switch direction of rotation spontaneously.
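To illustrate why such displays are bistable, the sketch below (an illustrative assumption, not the authors' actual stimulus code; the cylinder geometry, dot counts, and function name `rdk_frame` are hypothetical) projects dots on a rotating cylinder orthographically. Because the projection discards depth, clockwise and counter-clockwise rotations produce mirror-identical dot motions:

```python
import math
import random

def rdk_frame(n_dots, angle, radius=1.0, seed=0):
    """Orthographic projection of dots on a rotating cylinder.

    Each dot sits at a fixed angle phi on the cylinder surface.
    Rotating by `angle` and projecting onto the image plane drops
    the depth coordinate z, which is what makes the direction of
    rotation ambiguous to the observer.
    """
    rng = random.Random(seed)
    dots = [(rng.uniform(0, 2 * math.pi), rng.uniform(-1.0, 1.0))
            for _ in range(n_dots)]
    frame = []
    for phi, y in dots:
        x = radius * math.cos(phi + angle)  # projected horizontal position
        # z = radius * sin(phi + angle) would order the surfaces in depth,
        # but orthographic projection discards it entirely.
        frame.append((x, y))
    return frame

# A dot rotating clockwise at angle phi traces the same projected
# x-trajectory as a dot rotating counter-clockwise at angle -phi:
# cos(phi + a) == cos(-(phi + a)) for every a.
```

Adding a depth cue such as binocular disparity (i.e., restoring information about z) removes the ambiguity, which is the role played by the Context stimulus described next.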
This ambiguity may be resolved, however, by adding a physically biased Context stimulus to the display. We compared two methods of physically biasing the Context: introducing either luminance differences or binocular disparity differences between dots on different surfaces, so that one surface consistently appeared in front. In both cases, the subjective state of an unbiased co-axial Test stimulus became closely synchronized with the objective state of the Context.
This disambiguating context effect is long-range, reaching across a sizable visual gap between non-contiguous surfaces of the Test and Context stimuli. This extends the findings of Fang & He (2004, Curr. Biol. 14, 247–251), but rules out a mechanism based on propagation of local disparity codes. The Context is most effective when rendered unambiguous by disparity. This weighs against a recent model (Grossmann & Dobbins, 2003, Vision Res. 43, 359–369) that predicts disparity disambiguation should fail, as it does with extreme luminance modulations. This phenomenon provides a method for remotely controlling subjective states without changing the local stimulus, and helps to further characterize the mechanisms of contextual disambiguation.