Vision Sciences Society Annual Meeting Abstract  |   August 2009
Visual routines for sketches: A computational model
Author Affiliations
  • Andrew Lovett
    Qualitative Reasoning Group, Northwestern University
  • Kenneth Forbus
    Qualitative Reasoning Group, Northwestern University
Journal of Vision August 2009, Vol.9, 201. doi:10.1167/9.8.201

We present Visual Routines for Sketches (VRS), a system under development that computes symbolic, qualitative representations of sketches drawn by users. VRS models early- to mid-level human vision in a simple line-drawing environment. Its primary purpose is to provide the user with a set of elementary operations that can be combined to construct visual routines, i.e., programs that extract symbolic information about a sketch, as described in Ullman's (1984) seminal paper. Elementary operations include spreading covert attention through curve tracing and region coloring, and inhibiting or locating elements in a visual scene that contain a particular basic feature, such as a color or orientation.
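To make the idea concrete, one such elementary operation, region coloring, can be sketched as a flood fill that spreads activation from a seed location until it is blocked by curve pixels. This is a generic illustration of the Ullman-style operation, not VRS's actual API; the grid encoding and function name are assumptions for this sketch.

```python
from collections import deque

def color_region(grid, start, fill=2):
    """Region coloring: spread covert attention from a seed pixel to all
    4-connected pixels sharing the seed's value, marking them with `fill`.
    Curve pixels (any other value) block the spread.
    Illustrative sketch only -- not VRS's actual interface."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    target = grid[r0][c0]
    if target == fill:
        return grid
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == target:
            grid[r][c] = fill          # mark this pixel as attended
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid
```

Starting from a pixel inside a closed curve, the fill marks exactly the enclosed region, leaving the curve and exterior untouched.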

Unlike other visual routine implementations, which use a small fixed set of routines to perform a particular task, VRS provides an open-ended environment in which the user can combine operations to create any number of routines, depending on the information desired about the sketch. This approach has two key advantages: a) there is a great deal of flexibility in what information can be computed, so the system can produce representations that serve as input for many different visuospatial tasks; and b) the system can serve as a sandbox in which to evaluate and compare different computational models of how people compute visual features and spatial relations. In particular, we focus on two types of two-dimensional relations: positional relations and topological relations. We show how simple routines can be written in VRS to compute these relations, and how the output of VRS can be used to evaluate those routines as models of human perceptual processing.
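As a sketch of how elementary operations compose into a routine for a topological relation, consider the classic "inside/outside" test from Ullman's paper: spread activation from a probe point and check whether it ever escapes to the image border. If the spread is trapped by curve pixels, the point lies inside a closed curve. The encoding below (0 for background, nonzero for curve pixels) and the function name are assumptions for illustration; VRS composes its own operations rather than this exact code.

```python
from collections import deque

def is_inside(grid, point):
    """Composite routine for the topological relation 'inside':
    spread activation from `point` across background pixels (value 0);
    if the activation reaches the image border, the point is outside.
    Illustrative composition of elementary operations, not VRS itself."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    queue = deque([point])
    while queue:
        r, c = queue.popleft()
        if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
            continue
        if grid[r][c] != 0:
            continue                   # curve pixel: activation is blocked
        if r in (0, rows - 1) or c in (0, cols - 1):
            return False               # activation escaped to the border
        seen.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return True
```

The same spreading primitive used for region coloring here answers a qualitative, symbolic question about the scene, which is exactly the kind of reuse an open-ended routine environment affords.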

Lovett, A., & Forbus, K. (2009). Visual routines for sketches: A computational model [Abstract]. Journal of Vision, 9(8):201, 201a, doi:10.1167/9.8.201.
This work was supported by NSF SLC Grant SBE-0541957, the Spatial Intelligence and Learning Center (SILC).
