June 2006
Volume 6, Issue 6
Vision Sciences Society Annual Meeting Abstract  |   June 2006
Independent coding of object motion and position revealed by distinct perceptual time courses
Author Affiliations
  • Paul F. Bulakowski
    The Department of Psychology, University of California, Davis, USA
  • Kami Koldewyn
    The Center for Neuroscience, and The Center for Mind and Brain, University of California, Davis, USA
  • David Whitney
    The Department of Psychology, The Center for Neuroscience, and The Center for Mind and Brain, University of California, Davis, USA
Journal of Vision June 2006, Vol. 6, 550. https://doi.org/10.1167/6.6.550
Paul F. Bulakowski, Kami Koldewyn, David Whitney; Independent coding of object motion and position revealed by distinct perceptual time courses. Journal of Vision 2006;6(6):550. https://doi.org/10.1167/6.6.550.

© ARVO (1962–2015); The Authors (2016–present)

Abstract

We perceive a coherent visual world, despite the fact that visual processing is largely parallel and modular. There are, however, cases in which this binding is broken, causing asynchronous perception of an object's features (e.g., color and motion) or conflicting judgments of its properties (e.g., motion and position). Despite these examples of misbinding, it remains unclear whether different properties of an object are coded independently. Here we used contingent motion aftereffects (MAEs) (Arnold et al., Current Biology, 2001) to psychophysically measure processing differences both between and within features of the same object. Subjects adapted to a rotating grating whose direction of rotation (alternating clockwise and counterclockwise) was paired synchronously or asynchronously with changes in spatial frequency (either low or high spatial frequency). Following adaptation, subjects judged either the perceived MAE or the perceived position shift of the test grating. If the attributes of an object are processed with the same time course, then direction reversals that are synchronously paired with the spatial frequency changes should produce the strongest spatial-frequency contingent aftereffect. For judgments of the MAE, however, we found that a physical asynchrony had to be introduced between attributes to maximize the contingent MAE: the motion reversals had to lag the spatial frequency changes by approximately 90 ms. For judgments of the test grating's position (i.e., its phase), on the other hand, no asynchrony was revealed. The distinct time courses for judgments of motion and position, following precisely the same physical adaptation, suggest that these properties are coded by separate neural populations.

Bulakowski, P. F., Koldewyn, K., & Whitney, D. (2006). Independent coding of object motion and position revealed by distinct perceptual time courses [Abstract]. Journal of Vision, 6(6):550, 550a, http://journalofvision.org/6/6/550/, doi:10.1167/6.6.550.