Vision Sciences Society Annual Meeting Abstract  |   August 2012
Modeling Spatiotemporal Boundary Formation
Author Affiliations
  • Philip Kellman
    Department of Psychology, University of California, Los Angeles
  • Gennady Erlikhman
    Department of Psychology, University of California, Los Angeles
  • Max Mansolf
    Department of Psychology, University of California, Los Angeles
  • Renato Fillinich
    Emory University
  • Ariella Iancu
    City University London
Journal of Vision August 2012, Vol.12, 881. doi:https://doi.org/10.1167/12.9.881
Abstract

Spatiotemporal Boundary Formation (SBF) is the formation of illusory boundaries and completed shapes from spatially separated, local element transformations (Shipley & Kellman, 1994, 1996). We tested a computational model of SBF based on a proof that local edge orientation and global motion can be derived from three non-collinear, sequential, local events (such as element disappearance, color change, local motion, or local orientation change). We hypothesized that, due to noise in the registration of key inputs, the ideal model would exceed human performance. In three experiments, we measured orientation discrimination thresholds for edges defined by SBF as a function of element quantity (Exp. 1), element density (Exp. 2), and rate of element change (Exp. 3). In all three experiments, black circular elements on a white background disappeared whenever they came into contact with an illusory edge and reappeared when the edge moved past them. Human performance was inferior to that of the ideal model. We developed a more realistic model by incorporating two kinds of noise: noise in the registration of the relative positions of the elements, and noise in the velocity of the virtual object. These noise parameters were estimated by fitting the model to data from one condition. The improved model predicted average thresholds with high precision in the first two experiments but poorly in the third; we suspect this is due to minor but important differences in display generation across the experiments. The model produced estimates of edge orientation from the positions and rates of transformation of the elements in the display on each trial. The model's threshold estimates were then derived by submitting these results to the same staircase procedure used for human observers. These results offer a plausible account of how local element changes are used by the visual system to produce object boundaries, shape, and global motion in SBF.
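
For intuition, the geometric core of the recovery can be sketched as follows: if a straight illusory edge with unit normal n translates at constant normal speed s, its position at time t satisfies n·x = c + s·t, and each local event (p_i, t_i) triggered by the edge obeys n·p_i = c + s·t_i; three non-collinear events then fix the edge orientation and normal speed (up to the sign of the normal). The Python sketch below is an illustrative reconstruction under these assumptions only, not the authors' implementation; the function name and the noiseless example are hypothetical.

```python
import numpy as np

# Illustrative sketch (assumed geometry, not the authors' code): a straight
# illusory edge with unit normal n moves at constant normal speed s, so its
# position at time t satisfies n . x = c + s * t.  Each local event at
# position p_i and time t_i lies on the edge at that moment, giving
# n . p_i = c + s * t_i.  Three non-collinear events determine the edge
# orientation and normal speed up to a sign.

def edge_from_three_events(p1, t1, p2, t2, p3, t3):   # hypothetical helper
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    d12, d13 = p1 - p2, p1 - p3
    dt12, dt13 = t1 - t2, t1 - t3
    # Subtracting pairs of constraints eliminates c; eliminating s as well
    # leaves n . w = 0, so w points along the edge.
    w = dt13 * d12 - dt12 * d13
    orientation = np.degrees(np.arctan2(w[1], w[0])) % 180.0
    n = np.array([-w[1], w[0]]) / np.linalg.norm(w)    # one choice of normal
    # Normal speed from the event pair with the larger time separation; its
    # sign depends on the arbitrary choice of normal direction.
    speed = (n @ d12) / dt12 if abs(dt12) >= abs(dt13) else (n @ d13) / dt13
    return orientation, abs(speed)

# Noiseless example: a vertical edge at x = 2t erases elements at
# (2, 0), (4, 5), (6, -3) at times 1, 2, 3; recovers 90 deg and speed 2.
print(edge_from_three_events((2.0, 0.0), 1.0, (4.0, 5.0), 2.0, (6.0, -3.0), 3.0))
```

Adding registration noise to the event positions and to the inferred velocity, and feeding the resulting orientation estimates into a staircase, mirrors the fitting procedure described in the abstract.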

Meeting abstract presented at VSS 2012
