Abstract
People can categorize the same object at different levels of abstraction (i.e., superordinate, basic, and subordinate). Of these, categorization is biased toward the basic level, but the origin of this basic-level bias remains unsettled. For categorization researchers, the bias is due to the organization of categories in memory, which produces faster access to the basic level (Murphy, 1991). For recognition researchers, the bias arises because the visual system primarily extracts parts, and these parts represent categories at the basic level in memory (Biederman, 1987). Here, we test a third alternative: a basic-level bias could arise naturally from a bias on the distribution of perceptually available visual cues. Specifically, basic-level categorizations could be invariant to scale, whereas subordinate-level categorizations could depend on scale.
In Experiment 1, 20 observers learned 2 exemplars of 8 species of 3D-rendered, shaded animals varying in size (6 sizes spanning 12, 6, 3, 1.5, 0.75, and 0.38 deg of visual angle on the screen). In a verification task, an animal name (either subordinate or basic) was followed by a low-contrast version of one of the animals in additive Gaussian white noise whose level was adjusted to maintain performance at 75% correct. Computing d′ for each size revealed a difference between basic- and subordinate-level slopes: d′ slopes for the basic level rose quickly to reach ceiling, indicating relative independence from scale, whereas subordinate-level slopes rose slowly and never reached ceiling, indicating a dependence on scale. To attribute this performance difference to specific scale cues, we adapted Bubbles (Gosselin & Schyns, 2001) to randomly destroy the phase information of a 2D Fourier transform of the animal stimuli while keeping the contrast energy constant. The resulting profile of scale-information use revealed the expected bias toward fine-scale cues at the subordinate level, in contrast to scale-invariant use at the basic level.
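As a rough illustration of the sensitivity analysis, the sketch below computes d′ from hit and false-alarm counts for one name by stimulus-size cell. The function name, the count-based inputs, and the log-linear correction for extreme rates are assumptions made for the example, not a description of the original analysis pipeline.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps the
    z-transform finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one name at one stimulus size:
print(d_prime(hits=42, misses=8, false_alarms=6, correct_rejections=44))
```

Computing this statistic separately at each of the six sizes, and regressing d′ on size, would yield the basic- and subordinate-level slopes described above.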
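The sketch below illustrates one way to randomly destroy Fourier phase while holding the amplitude spectrum, and hence the contrast energy, fixed. The sampling scheme (an independent Bernoulli mask over frequency components) and all names are assumptions for illustration; the actual Bubbles adaptation may sample scale information differently.

```python
import numpy as np

def partial_phase_scramble(image, destroy_fraction, seed=None):
    """Randomize the Fourier phase of a fraction of frequency components
    while keeping the amplitude spectrum (contrast energy) unchanged."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(image)
    amplitude, phase = np.abs(F), np.angle(F)

    # Random phases taken from the FFT of real white noise are
    # Hermitian-symmetric, so the inverse transform stays real-valued.
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))

    # Bernoulli mask over frequency components, made symmetric so each
    # component and its complex conjugate are scrambled together.
    mask = rng.random(image.shape) < destroy_fraction
    mask &= np.roll(mask[::-1, ::-1], shift=(1, 1), axis=(0, 1))
    mask[0, 0] = False  # leave the DC (mean luminance) component intact

    new_phase = np.where(mask, phase + random_phase, phase)
    return np.fft.ifft2(amplitude * np.exp(1j * new_phase)).real
```

Sweeping destroy_fraction, or restricting the mask to annuli of spatial frequency, would then trace how performance at each categorization level depends on the scale information that survives scrambling.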