Abstract
Humans readily perceive structure in oriented textures such as Glass patterns. Here we seek to understand the computational factors that underlie their detection. In a series of experiments, random-dot patterns were displayed briefly (165 ms) within the central 14 deg of the visual field. Each stimulus consisted of 200 dipoles distributed over a discrete set of orientations. On noise-only trials, dipole orientations were distributed uniformly over this set. On signal-plus-noise trials, a proportion of the 200 dipoles shared a common orientation, and the remainder were oriented randomly. Detection thresholds were estimated by varying the proportion of signal dipoles using the QUEST procedure in a Yes/No task with feedback. We derived an ideal observer for this task and found that human efficiency for dipole texture detection is roughly 1%. We then considered a model observer that is ideal except for three types of inefficiency: 1) false matches between dipole dots (correspondence errors), 2) orientation uncertainty and bias, and 3) decline in sensitivity with eccentricity. By comparing detection performance for oriented textures composed of dipole dots with performance for textures composed of oriented line segments, we estimated that false matches reduce efficiency by a factor of approximately 3. Using a classification image technique in the orientation domain, we estimated observer bias and uncertainty in detecting elements at the signal orientation; incorporating these estimates into our model accounts for an additional factor of 5 reduction in efficiency. We used the same classification image technique to model the decline in sensitivity with eccentricity, which accounts for a further factor of 6 reduction in efficiency. Our results suggest that these three factors (correspondence errors, orientation uncertainty, and declining sensitivity with eccentricity) account for roughly 90% of the perceptual losses in the detection of oriented textures such as Glass patterns.
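To make the stimulus construction concrete, the sketch below generates dot positions for a single trial under the design described above. It is a minimal sketch, not the authors' actual code: the 200-dipole count, the 14 deg field, the discrete orientation set, and the signal/noise split follow the abstract, while the dipole length, the number of candidate orientations, and all names (glass_pattern_trial and its parameters) are illustrative assumptions.

```python
import numpy as np

def glass_pattern_trial(n_dipoles=200, signal_prop=0.1, signal_ori=0.0,
                        n_orientations=12, field_deg=14.0,
                        dipole_len_deg=0.2, rng=None):
    """Dot positions (in deg) for one signal-plus-noise trial.

    signal_prop = 0 yields a noise-only trial. The dipole length and
    the size of the discrete orientation set are assumed values.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_signal = int(round(signal_prop * n_dipoles))

    # Discrete candidate orientations spanning 180 deg (dipole
    # orientation is unsigned).
    oris = np.pi * np.arange(n_orientations) / n_orientations

    # Noise dipoles draw uniformly from the candidate orientations;
    # signal dipoles all take the common signal orientation.
    theta = rng.choice(oris, size=n_dipoles)
    theta[:n_signal] = signal_ori

    # Each dipole is a pair of dots offset symmetrically from a center
    # drawn uniformly over the square field.
    centers = rng.uniform(-field_deg / 2, field_deg / 2, size=(n_dipoles, 2))
    offsets = 0.5 * dipole_len_deg * np.column_stack((np.cos(theta),
                                                      np.sin(theta)))
    return np.vstack((centers + offsets, centers - offsets))
```

The abstract's numbers also combine in a simple way: the three estimated losses multiply to a total factor of 3 × 5 × 6 = 90, close to the factor-of-100 loss implied by a measured efficiency of roughly 1% (1/90 ≈ 1.1%). This is presumably the sense in which the three factors account for roughly 90% of the perceptual losses.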
This work was supported by NSERC, CIHR, and PREA.