Holly E. Gerhard, Laurence T. Maloney; A Model of Illumination Direction Recovery Applied to Dynamic Three-Dimensional Scenes. Journal of Vision 2010;10(7):445. doi: https://doi.org/10.1167/10.7.445.
Background: Gerhard & Maloney (ECVP 2009) measured the accuracy of human observers in judging the direction of movement of a collimated light source illuminating a simple rendered scene. The scene contained a smooth, randomly generated Gaussian bump landscape. The light source was not directly visible, and observers had to judge its direction of movement given only the effect of the illumination change on the landscape. All viewing was binocular; this task is of particular interest because motion direction is ambiguous without depth information. Eight observers completed the experiment. Observers could accurately judge the direction of movements spanning as little as 10 degrees over 750 msec, but judgments were consistently less reliable for some scenes than for others, and observers also varied in accuracy.

Goal: We present a computational model of the task intended to predict ideal performance, individual differences across observers, and differences in accuracy across scenes. The model recovers the change in illumination direction using (1) the luminance map of the stereo images across time and (2) the depth map of the scene, which we assume is available to the observer.

Model: The ideal model computes luminance gradients at Canny-defined luminance edges in the image and combines this information with a measure of local surface shape to recover illumination direction. Motion direction is then obtained by comparing the final and initial recovered illumination directions.

Results: We simulated the performance of the model on the stimuli previously viewed by the observers in Gerhard & Maloney. The model's variability in direction estimation was 7.2 degrees; maximum likelihood estimates of human variability based on performance were much larger, 27 to 46 degrees. We use the model to mimic the performance of each observer and to investigate the scene factors that enhance or diminish performance.
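The recovery step can be sketched under a strongly simplifying assumption not made explicit in the abstract: with Lambertian shading, intensity at a lit surface point is I = n·s for unit normal n and collimated source direction s, so s can be recovered by least squares from normals (a stand-in for the depth-map shape information) and intensities, and motion direction follows by comparing the two recovered source directions. All names and the synthetic data below are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares recovery of a collimated source direction s
    from Lambertian intensities I = normals @ s."""
    s, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return s / np.linalg.norm(s)

rng = np.random.default_rng(0)

# Synthetic unit surface normals (hypothetical stand-in for the
# shape information derived from the scene's depth map).
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)

true_s0 = np.array([0.0, 0.0, 1.0])                      # initial source direction
theta = np.deg2rad(10.0)                                 # 10-degree movement
true_s1 = np.array([np.sin(theta), 0.0, np.cos(theta)])  # final source direction

# Keep only points lit in both frames (shadowed points clamp to zero
# and would bias the least-squares fit).
lit = (n @ true_s0 > 0) & (n @ true_s1 > 0)
n_lit = n[lit]
I0 = n_lit @ true_s0   # rendered intensities, initial frame
I1 = n_lit @ true_s1   # rendered intensities, final frame

s0 = estimate_light_direction(n_lit, I0)
s1 = estimate_light_direction(n_lit, I1)

# Motion direction: compare final and initial recovered directions.
angle = np.degrees(np.arccos(np.clip(s0 @ s1, -1.0, 1.0)))  # ≈ 10 degrees
```

On noiseless lit points the fit is exact, so the recovered angular separation matches the simulated 10-degree movement; image noise and shape-estimate error would inflate it, in the spirit of the 7.2-degree model variability reported above.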