Vision Sciences Society Annual Meeting Abstract  |   August 2012
Quantifying boundary extension in scenes
Author Affiliations
  • Krista A. Ehinger
    Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
  • Ruth Rosenholtz
Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology
    Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
Journal of Vision August 2012, Vol. 12(9), 1072.
After viewing a picture of a scene, people remember having seen a wider-angle view than was originally presented. This phenomenon, known as "boundary extension" (Intraub & Richardson, 1989), is very robust across viewing contexts, but few studies have attempted to quantify its magnitude. In this study, we measure the boundary extension produced by images at various levels of zoom, and examine the roles of foreground objects and background textures in the effect. We ran two boundary-extension experiments using two different image sets. Training sets were created by cropping the images so that the central object filled a particular proportion of the image height (ranging from 45% to 90%). Participants viewed a stream of training images at a particular level of zoom and were asked to remember the images in as much detail as possible. At the end of the sequence, participants were shown the same images again in an interactive window and asked to zoom in or out on each test image to recreate the view they had previously seen. We obtained a consistent boundary extension effect: across image sets and conditions, people generated wider-angle views in which the central object was about 5% smaller than it had been in the original image. In a further experiment, rather than zooming the entire image to reproduce the remembered view, participants manipulated the central object or the background independently. When asked to resize the central object, participants mimicked boundary extension by making it about 6% smaller than in the original. However, they showed the opposite effect when manipulating the background: they chose a view in which the background texture was about 6% larger than it had been in the original.
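The size-change measure reported above can be sketched as a simple computation: score each reconstructed view by the percent change in the central object's on-screen size relative to the original, where a negative value means the object was remembered smaller, i.e., a wider-angle view. A minimal illustration (the function name and the example trial numbers are hypothetical, not taken from the study):

```python
def object_size_change(original_frac, remembered_frac):
    """Percent change in the central object's size between the original
    view and the participant's reconstructed view.

    original_frac:   object height as a fraction of image height in the
                     original view (the study's crops ranged from 0.45 to 0.90)
    remembered_frac: the same fraction in the reconstructed view

    A negative result means the object was remembered smaller, i.e., a
    wider-angle view -- boundary extension.
    """
    return 100.0 * (remembered_frac - original_frac) / original_frac


# Hypothetical trial: the object filled 60% of the image height; the
# participant zoomed out until it filled 57% -- a ~5% shrinkage, in line
# with the average effect reported in the abstract.
print(object_size_change(0.60, 0.57))
```

Averaging this quantity over trials and participants gives the per-condition effect sizes (about -5% for whole-image zooming and -6% for the central object alone; the background manipulation yields a positive value of about +6%).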

Meeting abstract presented at VSS 2012

