The question motivating our study is the following: Can one do rapid category detection when the image is degraded? By degradation, we mean any of various ways of reducing the information content of an image. Over the years, selected studies have investigated the perceptual effects of degrading images along a single image dimension. With respect to spatial resolution, Harmon and Julesz (1973) and Bachmann (1991) demonstrated that as few as 18 × 18 pixels per face suffice for robust recognition, and these findings were extended to the domains of faces, objects, and scenes by Torralba, Fergus, and Freeman (2008) and Torralba and Sinha (2001). Along the dimension of luminance depth, Mooney faces are a classic demonstration of visual processing succeeding under extreme luminance-depth degradation (Mooney, 1957).

Robustness to degradation falls under the larger umbrella of invariances. Invariance is a percept's tolerance to transformations such as scaling, lighting changes, or rotation. Invariance has been studied extensively in both psychology and computer science. Studies such as Tarr and Pinker (1989), Rock, Di Vita, and Barbeito (1981), and Yin (1969) investigate rotation invariance in humans. In physiology, Brainard (2004) investigates color constancy, and Ito, Tamura, Fujita, and Tanaka (1995) investigate size and position invariance. Many computational models have also been proposed to handle particular invariances, such as that of Olshausen, Anderson, and Van Essen (1993), which is scale and shift invariant, and that of Lowe (1999), which proposes the SIFT descriptor, a local descriptor robust to affine transformations and lighting variation.