Abstract
Humans' sensitivity to orientation information has typically been measured in highly artificial tasks using very simple stimuli, e.g., by quantifying orientation discrimination with small Gabor patches. How well do such measures capture the efficiency with which orientation information is coded during object perception? We used a 2-AFC match-to-sample paradigm to determine the level of internal noise and the sampling efficiency with which orientation information is represented in the visual system while subjects viewed images of objects. Natural grayscale images of everyday objects were analyzed with a bank of multi-scale Gabor wavelets. Stimuli were created from a small subset of the Gabor filter coefficients, such that the resulting synthetic images retained the properties of the original image. A noiseless synthetic source image was presented for 1 s, followed by a reference image containing a fixed amount of orientation noise (σ) and a target image containing an additional amount of orientation noise (σ + Δ) under the control of a staircase procedure. There were six levels of fixed (pedestal) noise, from 1 to 32 degrees in logarithmic steps. Subjects were asked to identify which of the two simultaneously presented images was more similar to the source image. Results were analyzed by bootstrapping, and 95% confidence intervals were estimated. With increasing pedestal noise (σ), subjects' performance improved strongly up to 16 degrees and then deteriorated steeply, independently of the group of images used. These results are in sharp contrast to orientation-sensitivity results obtained with standard discrimination tasks. They also suggest that the visual system is surprisingly robust to orientation noise when processing structured inputs rather than simple sine-wave stimuli.
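The core stimulus manipulation described above can be illustrated with a short sketch: an image is approximated by a small set of Gabor elements, and orientation noise is applied by jittering each element's orientation by a zero-mean Gaussian (standard deviation σ, or σ + Δ for the target) before re-rendering. This is only a minimal sketch, not the authors' actual pipeline; the element parameters, the dictionary-style representation, and all numeric values below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of synthesizing a stimulus
# from a small set of Gabor elements and adding orientation noise.
import numpy as np

def gabor_patch(size, x0, y0, wavelength, theta, sigma, phase, amplitude):
    """Render a single Gabor element on a size x size canvas."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return amplitude * envelope * carrier

def synthesize(elements, size, orientation_noise_deg=0.0, rng=None):
    """Sum Gabor elements; optionally jitter each element's orientation by
    zero-mean Gaussian noise with the given standard deviation (degrees)."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size))
    for el in elements:
        theta = el["theta"] + np.deg2rad(rng.normal(0.0, orientation_noise_deg))
        img += gabor_patch(size, el["x"], el["y"], el["wavelength"],
                           theta, el["sigma"], el["phase"], el["amplitude"])
    return img

# Toy "object" made of a few elements (hypothetical parameter values).
rng = np.random.default_rng(0)
elements = [
    {"x": 40, "y": 60, "wavelength": 12, "theta": np.deg2rad(30),
     "sigma": 6, "phase": 0.0, "amplitude": 1.0},
    {"x": 80, "y": 50, "wavelength": 16, "theta": np.deg2rad(120),
     "sigma": 8, "phase": np.pi / 2, "amplitude": 0.7},
]
source    = synthesize(elements, size=128)                                         # noiseless source
reference = synthesize(elements, size=128, orientation_noise_deg=16, rng=rng)      # pedestal sigma
target    = synthesize(elements, size=128, orientation_noise_deg=16 + 4, rng=rng)  # sigma + delta
```

In a trial, the observer would see the noiseless source first, then the reference and target together, with Δ adjusted by a staircase; the sketch above only covers the stimulus-generation step.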