Vision Sciences Society Annual Meeting Abstract  |   August 2014
Dynamic Perception: Synergy between Grouping, Retinotopic Masking, and Non-retinotopic Feature Attribution
Author Affiliations
  • Haluk Ogmen
    Dept. of ECE, University of Houston
  • Michael Herzog
    Laboratory of Psychophysics, Brain Mind Institute, EPFL
  • Babak Noory
    Dept. of ECE, University of Houston
Journal of Vision August 2014, Vol. 14, 1367. doi: https://doi.org/10.1167/14.10.1367
Abstract

Purpose: Due to visible persistence, moving objects should appear highly blurred, with their features blending with those of other objects or the background. This does not occur under normal viewing conditions. We have proposed that clarity of vision is achieved through a synergy between grouping, retinotopic masking, and non-retinotopic feature attribution. Here, we investigated the retinotopy of visual masking, non-retinotopic feature attribution, and their relationship to perceptual grouping.

Methods: We used a radial Ternus-Pikler display (TPD) in which the target and mask were positioned either according to retinotopic coordinates (retinotopic mask) or according to non-retinotopic grouping (non-retinotopic mask). Two ISIs were used to generate element- and group-motion percepts in the TPD. In Experiment 1, we used a metacontrast mask that produced non-monotonic (type-B) masking. In Experiment 2, we used a structure mask that produced monotonic (type-A) masking. To study feature attribution, in Experiment 3 we made the direction of the TPD predictable. In all experiments, observers maintained steady fixation at the center of the display, and eye movements were monitored in control experiments.

Results: The retinotopic-masking hypothesis predicts masking effects only for retinotopic masks, for both element- and group-motion percepts in the TPD. In contrast, the non-retinotopic-masking hypothesis predicts masking effects for retinotopic masks only for element-motion percepts, and for non-retinotopic masks only for group-motion percepts. Our results are consistent with retinotopic masking for both metacontrast and structure masks, and for both type-A and type-B masking functions. In Experiment 3, the retinotopic mask maintained its masking effect in the element-motion percept but not in the group-motion percept, indicating effective non-retinotopic feature attribution in the latter case.
Conclusions: Our results suggest that retinotopic masking controls motion blur while non-retinotopic feature attribution allows the computation of form across space and time.

Meeting abstract presented at VSS 2014
