July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract | July 2013
Predicting the effects of illumination in shape from shading
Author Affiliations
  • Roland Fleming
    Experimental Psychology, University of Giessen, Germany
  • Romain Vergne
    Experimental Psychology, University of Giessen, Germany
    Maverick, INRIA Grenoble-Rhône-Alpes and LJK (University of Grenoble and CNRS), France
  • Steven Zucker
    Computer Science, Yale University, New Haven, CT
Journal of Vision July 2013, Vol.13, 611. doi:https://doi.org/10.1167/13.9.611

Citation: Roland Fleming, Romain Vergne, Steven Zucker; Predicting the effects of illumination in shape from shading. Journal of Vision 2013;13(9):611. https://doi.org/10.1167/13.9.611.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Shading depends on different interactions between surface geometry and lighting. Under collimated illumination, shading is dominated by the ‘direct’ term, in which image intensities vary with the angle between surface normals and light sources. Diffuse illumination, by contrast, is dominated by ‘vignetting effects’, in which image intensities vary with the degree of self-occlusion (the proportion of incoming directions that each surface point ‘sees’). These two types of shading thus lead to very different intensity patterns, which raises the question of whether shading inferences are based directly on image intensities. We show here that the visual system uses 2D orientation signals (‘orientation fields’) to estimate shape, rather than raw image intensities and an estimate of the illuminant. We rendered objects under varying illumination directions designed to maximize the effects of illumination on the image. We then passed these images through monotonic, non-linear intensity transfer functions to decouple luminance information from orientation information, thereby placing the two signals in conflict. In Task 1, subjects adjusted the 3D shape of match objects to report the illusory effects of changes of illumination direction on perceived shape. In Task 2, subjects reported which of a pair of points on the surface appeared nearer in depth. They also reported perceived illumination directions for all stimuli. We find that the substantial misperceptions of shape are well predicted by orientation fields, and poorly predicted by luminance-based shape from shading. For the untransformed images, illumination could be estimated accurately, but not for the transformed images. Thus shape perception was, for these examples, independent of the ability to estimate the lighting. Together these findings support neurophysiological estimates of shape from the responses of orientation-selective cell populations, irrespective of the illumination conditions.
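
The logic of the intensity manipulation can be illustrated with a minimal numerical sketch (hypothetical code, not the authors' stimuli or analysis; the Lambertian sphere, the light direction, the gamma exponent, and the helper gradient_orientation are all illustrative assumptions). The point is that a monotonic, non-linear transfer function g changes luminance values while leaving 2D orientation fields essentially intact: since grad g(I) = g'(I) grad I with g'(I) > 0, a pointwise transformation rescales gradient magnitudes but preserves gradient (and hence isophote) directions.

```python
import numpy as np

# Illustrative sketch only (hypothetical, not the authors' code):
# render a Lambertian-shaded sphere under collimated light, apply a monotonic
# non-linear intensity transfer, and compare luminances and local orientations.

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
mask = x**2 + y**2 < 1.0

# Surface normals of a unit sphere (z points toward the viewer).
z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
normals = np.dstack([x, y, z])

# 'Direct' shading term: intensity = max(0, N . L) for an assumed light L.
light = np.array([0.4, 0.4, 0.82])
light /= np.linalg.norm(light)
img = np.clip(normals @ light, 0.0, 1.0) * mask

# Monotonic, non-linear intensity transfer (here a gamma curve, g(I) = I^3).
transformed = img ** 3.0

def gradient_orientation(im):
    """Local orientation of the intensity gradient, in radians."""
    gy, gx = np.gradient(im)
    return np.arctan2(gy, gx)

ori_before = gradient_orientation(img)
ori_after = gradient_orientation(transformed)

# Compare away from the silhouette and from flat or clipped regions.
interior = mask & (img > 0.05) & (img < 0.95)
dtheta = np.abs(np.angle(np.exp(1j * (ori_after - ori_before))))

print("max orientation change (rad):   ", dtheta[interior].max())
print("mean absolute intensity change: ", np.abs(transformed - img)[interior].mean())
```

Under these assumptions the printed orientation change stays near zero (limited only by finite-difference discretization), whereas the luminance values change substantially; this is the sense in which such transformed images dissociate the predictions of luminance-based shape from shading from those of orientation fields.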

Meeting abstract presented at VSS 2013
