Abstract
To make sense of complex and ambiguous visual input, the visual system makes use of prior knowledge, or assumptions about the structure of the world. The use of these ‘priors’ is neatly incorporated into a Bayesian framework, which has been successfully employed to model many aspects of human visual perception. Priors are usually assumed to be based on observers' previous experience and the statistics of natural scenes. Little research, however, has examined how these priors are formed and adapted, or how general or context-specific they are. Here we consider the ‘light from above’ prior that the visual system uses to extract shape from shading. Observers viewed monocularly presented disks with shading gradients at various orientations. The reported shape (convex or concave) as a function of stimulus orientation was used to recover each observer's assumed light position. During a training phase, observers could also ‘touch’ the disks. The stimulus orientations that were presented as haptically convex were consistent with a light source ±30° from the observer's original assumed light position. Following training, observers again judged stimulus shape from purely visual information. In a control experiment, observers made lightness judgements of a Mach-card-type stimulus before and after haptic training with the concave/convex disk stimuli. First, our results confirm that observers assume a light position that is roughly overhead. Second, we found that haptic information can disambiguate the shading cue. Third, using haptic feedback, observers were trained to use a slightly shifted light direction in their prior. Finally, the shift in prior light-source direction was not specific to the trained task, but affected visual perception in a separate lightness-judgement task.
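To illustrate the recovery step described above, the following minimal sketch (hypothetical code, not the study's analysis; the sampling grid, the toy responses, and the ~10° offset are invented for the example) estimates an assumed light direction as the circular mean of the orientations judged convex:

import numpy as np

# Minimal sketch (hypothetical, not the study's analysis code): recover an
# observer's assumed light direction from convex/concave judgements of
# shaded disks. Orientation 0 deg = shading consistent with light from
# directly above; responses here are toy data for a prior shifted ~10 deg.
orientations = np.arange(0, 360, 15)                       # gradient orientations (deg)
judged_convex = np.cos(np.radians(orientations + 10)) > 0  # True = "convex" response

# A disk looks convex when its bright side faces the assumed light source,
# so the circular mean of the "convex" orientations estimates that direction.
# (Angles wrap at 360 deg, so an ordinary arithmetic mean would be wrong.)
theta = np.radians(orientations[judged_convex])
estimate = np.degrees(np.arctan2(np.sin(theta).sum(), np.cos(theta).sum()))
print(f"Estimated assumed light direction: {estimate:.1f} deg from vertical")
# The 15-deg sampling grid limits precision; finer sampling or a fitted
# psychometric boundary would sharpen the estimate.

The circular mean is used because orientations wrap around at 360°, where a simple arithmetic average of the "convex" orientations would be misleading.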
Supported by the Wellcome Trust (WJA & EWG), NSF (EWG)