Abstract
Camouflage is an impressive feat of biology in which an animal’s surface evolves to match the reflectance and texture of the backgrounds against which it typically appears. Equally impressive is the ability of visual systems to detect such camouflage. We present a principled theory of camouflage detection based on task-relevant cues and biologically plausible visual computations. The theory is informed by a series of psychophysical experiments in which we measured human ability to detect maximally camouflaged targets: targets whose texture is a random sample of the background texture. The amplitude spectra of natural images fall inversely with spatial frequency raised to an exponent that varies from approximately 0.7 to 1.5; this exponent reflects the degree of spatial correlation in the image. To see how this statistic influences camouflage detection, we measured detection on Gaussian noise textures with different falloff exponents. For 1-degree targets in the fovea, we find that humans are about 75% correct for an exponent of 0.7 and nearly 100% correct for exponents above 1. We also find that performance degrades substantially in the periphery: at 12 degrees of eccentricity, for example, humans reach 75% correct only when the exponent is 1.5. Interestingly, then, humans cannot detect maximally camouflaged targets for exponents just below the range that occurs in nature. In other experiments, we measured camouflage detection on a variety of naturalistic textures and as a function of the complexity of the target shape. As a starting point, we restrict our detection theory to the information available at or near the target-background edge, and therefore exclude textures with strong long-range patterns that provide additional cues. We find that a principled model incorporating edge-element detection at multiple scales, edge-element grouping, weak-signal suppression, and decision noise can account for many aspects of our parametric experimental measurements.
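To make the stimulus description concrete, the following is a minimal sketch, not the stimulus-generation code used in the experiments, of how a Gaussian noise texture with a 1/f^alpha amplitude spectrum and a maximally camouflaged target could be synthesized. The texture size, target location, target size, and square target shape are illustrative assumptions.

```python
import numpy as np

def noise_texture(size, alpha, rng):
    """Gaussian noise texture whose amplitude spectrum falls as 1/f**alpha."""
    white = rng.standard_normal((size, size))
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                       # avoid division by zero at the DC term
    filt = f ** (-alpha)
    filt[0, 0] = 0.0                    # zero the DC component (zero-mean texture)
    img = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
    return (img - img.mean()) / img.std()

rng = np.random.default_rng(0)
alpha = 1.0                              # falloff exponent (0.7 to 1.5 in the experiments)
background = noise_texture(256, alpha, rng)
target = noise_texture(256, alpha, rng)  # independent sample with identical statistics

# A maximally camouflaged target: paste an independent sample of the same
# texture process into a window, so the only cue is the target-background edge.
stimulus = background.copy()
r, c, w = 112, 112, 32                   # hypothetical target location and size (pixels)
stimulus[r:r + w, c:c + w] = target[r:r + w, c:c + w]
```

Because the pasted patch is drawn from the same statistical process as the background, its interior provides no detection cue; only the discontinuity at the target boundary does, which is the information the detection theory operates on.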