Vision Sciences Society Annual Meeting Abstract | August 2010
A model of figure-ground segregation from texture accretion and deletion in random dot motion displays
Author Affiliations
  • Timothy Barnes
    Department of Cognitive and Neural Systems, Boston University
  • Ennio Mingolla
    Department of Cognitive and Neural Systems, Boston University
Journal of Vision August 2010, Vol.10, 839. doi:https://doi.org/10.1167/10.7.839
Abstract

Accretion or deletion of texture unambiguously specifies occlusion and can produce a strong perception of depth segregation between two surfaces even in the absence of other cues. Given two abutting regions of uniform random texture with different motion velocities, one region will appear to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities (Kaplan, 1969, Perception & Psychophysics, 6(4), 193–198). Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary moves coherently with the moving texture. Computational studies demonstrate how V1, V2, MT, and MST can interact first to create a motion-defined boundary and then to signal texture accretion or deletion at that boundary. The model's motion system detects discontinuities in the optic flow field and modulates the strength of existing boundaries at those retinal locations. A weak speed-depth bias brings faster-moving texture regions forward in depth, consistent with percepts of displays containing shearing motion alone (i.e., where motion is parallel to the resulting emergent boundary between regions), in which the faster region appears closer (Royden et al., 1988, Perception, 17, 289–296). The model's form system completes this modulated boundary and tracks the motion of any boundaries defined by texture. The model also includes a simple predictive circuit that signals occlusion when texture-defined boundaries unexpectedly appear or disappear.
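
As a concrete illustration of the figure-ground rule summarized above, the sketch below labels a region as ground when its texture is accreted or deleted at the boundary and falls back on the weak speed-depth bias (faster region nearer) when the display contains shear alone. This is not the authors' implementation; the function name, the vertical-boundary assumption, and the pixels-per-frame velocity encoding are illustrative only.

# Minimal Python sketch of the figure-ground rule described in the abstract.
# Not the published model: it only restates the accretion/deletion rule and
# the speed-depth bias for a single vertical boundary between two regions.
# Velocities are (vx, vy) in pixels per frame.

def figure_ground_assignment(v_left, v_right, v_boundary, eps=1e-6):
    """Return ('figure', 'ground') labels for the (left, right) regions."""
    # Texture is accreted or deleted when its horizontal velocity differs
    # from the boundary's horizontal velocity, i.e., dots cross the edge.
    left_occluded = abs(v_left[0] - v_boundary[0]) > eps
    right_occluded = abs(v_right[0] - v_boundary[0]) > eps

    if left_occluded and not right_occluded:
        # Left-region texture is covered/revealed at the edge -> left is ground.
        return ("ground", "figure")
    if right_occluded and not left_occluded:
        return ("figure", "ground")

    # Shear-only or ambiguous case: weak speed-depth bias, faster = nearer.
    if (v_left[0] ** 2 + v_left[1] ** 2) >= (v_right[0] ** 2 + v_right[1] ** 2):
        return ("figure", "ground")
    return ("ground", "figure")

# Moving texture on the left, stationary boundary and right region:
# left-region dots vanish at the edge, so the left region is ground.
print(figure_ground_assignment((2.0, 0.0), (0.0, 0.0), (0.0, 0.0)))  # ('ground', 'figure')

# Same texture motion, but the boundary moves with it: the left region
# now appears as figure, matching the abstract's second example.
print(figure_ground_assignment((2.0, 0.0), (0.0, 0.0), (2.0, 0.0)))  # ('figure', 'ground')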

Barnes, T., & Mingolla, E. (2010). A model of figure-ground segregation from texture accretion and deletion in random dot motion displays [Abstract]. Journal of Vision, 10(7):839, 839a, http://www.journalofvision.org/content/10/7/839, doi:10.1167/10.7.839.
Footnotes
TB and EM were supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378), and by HRL Labs LLC (DARPA prime HR001-09-C-0011). EM was also supported in part by HP (DARPA prime HR001109-03-0001).