Abstract
Perception is dominated by the sharper of two features when the features are superimposed spatially or combined neurally in dichoptic viewing. However, in natural viewing, blur often varies across the image because of factors such as limited depth of focus. We asked how the average blur is processed in images with spatial variations in blur. The images were either an ensemble of local edges with varying levels of Gaussian blur or a matrix of textures (pebbles) with subregions blurred or sharpened by varying the slope of the amplitude spectrum. With the local edges, subjects judged the average blur in the image by varying the blur level of a matching stimulus using a 2AFC staircase. Three ensembles with low, moderate, and high average blur levels were tested. In this task, subjects' estimates of the average blur reliably tracked the mean blur level in the array and did not differ significantly from the mean. With the textures, blur in a test image was adjusted with a staircase to estimate the level at which the image appeared in focus. These judgments were repeated after adapting either to different spatial ensembles of blur variations or to an image with the same average level of uniform blur. Four ensembles with sharp, in-focus, moderate, and high average blur levels were tested. Adaptation aftereffects were similar for uniform and spatially varying blur patterns with the same mean and, as in the matching task, reliably tracked the average blur level. Thus, participants can reliably estimate the average blur level in a scene composed of spatially varying blur. Moreover, adaptation to the blur is determined by the average. This suggests that blur perception is not biased toward sharper features when the component blur levels are spatially distinct.
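The amplitude-spectrum manipulation used for the texture stimuli can be sketched as follows. This is a minimal numpy illustration, not the study's actual stimulus-generation code; the function name and the `delta_slope` parameter are hypothetical. The idea is that steepening the slope of an image's amplitude spectrum (attenuating high spatial frequencies relative to low ones) makes it look blurrier, while shallowing the slope makes it look sharper.

```python
import numpy as np

def adjust_spectrum_slope(image, delta_slope):
    """Change the slope of an image's amplitude spectrum.

    delta_slope < 0 steepens the spectrum (perceptually blurs);
    delta_slope > 0 shallows it (perceptually sharpens).
    A sketch of the general technique, not the study's code.
    """
    h, w = image.shape
    # Radial spatial frequency of each FFT coefficient (cycles/sample).
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0  # leave the DC (mean luminance) term unchanged
    # Scale each coefficient's amplitude by f**delta_slope.
    spectrum = np.fft.fft2(image)
    filtered = spectrum * f ** delta_slope
    return np.real(np.fft.ifft2(filtered))
```

For example, `adjust_spectrum_slope(texture, -1.0)` multiplies the power spectrum by `f**-2`, boosting low frequencies relative to high ones, so the texture subregion appears out of focus while its mean luminance is preserved.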
Meeting abstract presented at VSS 2018