Vision Sciences Society Annual Meeting Abstract  |   August 2010
A Model of Illumination Direction Recovery Applied to Dynamic Three-Dimensional Scenes
Author Affiliations
  • Holly E. Gerhard
    Department of Psychology, New York University
  • Laurence T. Maloney
    Department of Psychology, New York University
    Center for Neural Science, New York University
Journal of Vision August 2010, Vol.10, 445. doi:10.1167/10.7.445
Abstract

Background: Gerhard & Maloney (ECVP 2009) measured the accuracy of human observers in judging the direction of movement of a collimated light source illuminating a simple rendered scene. The scene contained a smooth, randomly generated Gaussian bump landscape. The light source was not directly visible, so observers had to judge its direction of movement from the effect of the illumination change on the landscape alone. All viewing was binocular. The task is of particular interest because motion direction is ambiguous without depth information. Eight observers completed the experiment. Observers could accurately judge the direction of movements spanning as little as 10 degrees over 750 msec, but judgments were consistently less reliable for some scenes than others, and observers also varied in accuracy.

Goal: We present a computational model of the task intended to predict ideal performance, individual differences across observers, and differences in accuracy across scenes. The model recovers the change in illumination direction using (1) the luminance map of the stereo images across time and (2) the depth map of the scene, which we assume is available to the observer.

Model: The ideal model computes luminance gradients at Canny-defined luminance edges in the image and combines this information with a measure of local surface shape to recover the illumination direction. To compute motion direction, the recovered final and initial illumination directions are compared.

Results: We simulated the performance of the model on the stimuli previously viewed by observers in Gerhard & Maloney (ECVP 2009). The model's variability in direction estimation was 7.2 degrees. Maximum likelihood estimates of human variability based on performance were much larger, 27 to 46 degrees. We use the model to mimic the performance of each observer and to investigate the scene factors that enhance or diminish performance.
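The abstract does not spell out the model's equations, but the core recovery step, estimating a collimated light direction from luminance and known surface shape, can be sketched under a Lambertian shading assumption, where the luminance at a lit point is the dot product of the surface normal and the light direction. Everything below (the toy Gaussian-bump landscape, the least-squares fit, dense per-pixel normals in place of the paper's Canny-edge gradient measurements) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def estimate_light_direction(luminance, normals):
    """Least-squares estimate of a collimated light direction from a
    Lambertian rendering, where L(x) = max(0, n(x) . s).
    `normals` is an (H, W, 3) array of unit surface normals."""
    N = normals.reshape(-1, 3)
    L = luminance.ravel()
    lit = L > 0  # shading is linear in s only where the surface is lit
    s, *_ = np.linalg.lstsq(N[lit], L[lit], rcond=None)
    return s / np.linalg.norm(s)

# Toy test: render a random bump landscape under a known light direction,
# then recover that direction from luminance and shape alone.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
z = np.zeros_like(x)
for _ in range(5):  # smooth height field built from Gaussian bumps
    cx, cy = rng.uniform(-0.5, 0.5, 2)
    z += 0.2 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 0.05)

gy, gx = np.gradient(z)  # surface normals from the height-field gradients
n = np.dstack([-gx, -gy, np.ones_like(z)])
n /= np.linalg.norm(n, axis=2, keepdims=True)

true_s = np.array([0.3, 0.5, 0.8])
true_s /= np.linalg.norm(true_s)
lum = np.clip(n @ true_s, 0, None)  # Lambertian rendering

est = estimate_light_direction(lum, n)
err_deg = np.degrees(np.arccos(np.clip(est @ true_s, -1.0, 1.0)))
```

With noiseless Lambertian luminance the fit recovers the light direction essentially exactly; comparing the directions recovered at the first and last frames of a sequence would then give the movement direction the model reports.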

Gerhard, H. E. Maloney, L. T. (2010). A Model of Illumination Direction Recovery Applied to Dynamic Three-Dimensional Scenes [Abstract]. Journal of Vision, 10(7):445, 445a, http://www.journalofvision.org/content/10/7/445, doi:10.1167/10.7.445. [CrossRef]
Footnotes
 NIH EY08266.