August 2010, Volume 10, Issue 7
Vision Sciences Society Annual Meeting Abstract
An improved model for contour completion in V1 using learned feature correlation statistics
Author Affiliations
  • Vadas Gintautas
    Los Alamos National Laboratory
  • Benjamin Kunsberg
    Yale University
  • Michael Ham
    Los Alamos National Laboratory
  • Shawn Barr
    New Mexico Consortium
  • Steven Zucker
    Yale University
  • Steven Brumby
    Los Alamos National Laboratory
  • Luis M A Bettencourt
    Los Alamos National Laboratory
  • Garrett T Kenyon
    Los Alamos National Laboratory
    New Mexico Consortium
Journal of Vision August 2010, Vol.10, 1162. doi:10.1167/10.7.1162
Abstract

How to establish standards for comparing human and cortically inspired computer-model performance on visual tasks remains largely an open question. Existing standard image-classification datasets have several critical shortcomings: 1) limited image resolution and a fixed number of images at the time of creation; 2) reliance on semantic knowledge, such as the definition of “animal”; and 3) complexity or difficulty that cannot be varied parametrically. To address these shortcomings, we developed a new synthetic dataset consisting of line segments that can form closed contours in 2D (“amoebas”).

An “amoeba” is a deformed, segmented circle in which the radius varies with polar angle. Small gaps between segments are preserved so that the contour is not strictly closed. To create a distractor “no-amoeba” image, an amoeba image is divided into boxes of random size, which are rotated through random angles so that the segments no longer form a smooth closed object. Randomly superimposed no-amoeba images serve as background clutter. This dataset is unlimited in size, relies on no explicit outside knowledge, has tunable parameters so that difficulty can be varied, and lends itself naturally to a binary object-classification task (“amoeba/no-amoeba”) designed to be pop-out for humans.
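The stimulus construction described above can be sketched in a few lines of NumPy. All parameter names and values below (number of harmonics, segment count, gap fraction) are illustrative assumptions, not the settings used in the original study; the distractor routine is also a per-segment simplification of the box-rotation procedure in the abstract.

```python
import numpy as np

def make_amoeba(n_harmonics=4, n_segments=24, gap_frac=0.15, rng=None):
    """Sketch of an 'amoeba' stimulus: a deformed circle whose radius
    varies smoothly with polar angle, drawn as line segments separated
    by small gaps so the contour is not strictly closed."""
    rng = np.random.default_rng() if rng is None else rng
    # Low-frequency radial modulation: r(theta) = 1 + sum_k a_k cos(k*theta + phi_k)
    amps = rng.uniform(0.05, 0.25, n_harmonics)
    phases = rng.uniform(0, 2 * np.pi, n_harmonics)
    segments = []
    for i in range(n_segments):
        # Each segment fills part of its angular slot, leaving a gap.
        t0 = 2 * np.pi * i / n_segments
        t1 = t0 + 2 * np.pi * (1 - gap_frac) / n_segments
        theta = np.linspace(t0, t1, 8)
        r = 1.0 + sum(a * np.cos((k + 1) * theta + p)
                      for k, (a, p) in enumerate(zip(amps, phases)))
        segments.append(np.column_stack([r * np.cos(theta), r * np.sin(theta)]))
    return segments

def make_no_amoeba(segments, rng=None):
    """Distractor sketch: rotate each segment by a random angle about its
    own centroid, destroying the smooth closed contour while preserving
    local segment statistics."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for seg in segments:
        ang = rng.uniform(0, 2 * np.pi)
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])
        centroid = seg.mean(axis=0)
        out.append((seg - centroid) @ R.T + centroid)
    return out
```

Superimposing several `make_no_amoeba` outputs at random offsets would then yield the background clutter described above.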

We show that humans display high accuracy (>90%) on this task in psychophysics experiments, even at a stimulus onset asynchrony as short as 50 ms. Existing feed-forward computer vision models such as HMAX perform close to chance (50–60%). We present a biologically motivated model of V1 lateral interactions that significantly improves performance. The model uses relaxation labeling, in which support between edge receptors is based on statistics of pairwise correlations learned from coherent objects, but not from incoherent segment noise. We compare the effectiveness of this approach to existing computer vision models as well as to human psychophysics performance, and explore its applicability to contour completion in natural images.
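The relaxation-labeling scheme mentioned above can be sketched with a generic Rosenfeld–Hummel–Zucker-style update. Here `p[i, l]` is the confidence that edge element `i` carries label `l` (e.g. "contour" vs. "clutter") and `compat[i, l, j, m]` encodes pairwise support; in the abstract's model that support would come from learned correlation statistics, whereas the values here are arbitrary placeholders.

```python
import numpy as np

def relaxation_labeling(p, compat, n_iter=10):
    """Minimal relaxation-labeling sketch.

    p      : (N, L) array of label confidences per edge element.
    compat : (N, L, N, L) pairwise compatibility (support) coefficients.
    Iteratively reinforces labels that receive support from neighboring
    elements, then renormalizes confidences over labels."""
    p = p.copy()
    for _ in range(n_iter):
        # Support received: q[i, l] = sum_{j, m} compat[i, l, j, m] * p[j, m]
        q = np.einsum('iljm,jm->il', compat, p)
        p = p * (1.0 + q)                   # reinforce supported labels
        p = np.clip(p, 1e-9, None)          # keep confidences positive
        p /= p.sum(axis=1, keepdims=True)   # renormalize over labels
    return p
```

With compatibilities that reward collinear, smoothly continuing edge pairs (as learned from coherent contours), iterating this update would raise the "contour" confidence of elements lying on a closed amoeba and suppress rotated clutter segments.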

Gintautas, V., Kunsberg, B., Ham, M., Barr, S., Zucker, S., Brumby, S., Bettencourt, L. M. A., & Kenyon, G. T. (2010). An improved model for contour completion in V1 using learned feature correlation statistics [Abstract]. Journal of Vision, 10(7):1162, 1162a, http://www.journalofvision.org/content/10/7/1162, doi:10.1167/10.7.1162.
Footnotes
 Supported by NSF and Los Alamos LDRD-DR.