Abstract
Novices generally classify objects faster at the basic level than at a subordinate level of abstraction. However, experts can categorize equally fast at both levels (Tanaka & Taylor, 1991). Familiar faces can be categorized as accurately at the basic level (“face”) as at the subordinate/identity level (“Tom Cruise”), even at short exposure durations (Tanaka, 2002). Are there sequential stages for categorizing objects first at the basic and then at the subordinate level, but parallel categorization of faces at both levels? Or are there simply differences in processing efficiency between objects and faces? We examined the precise time course of face and object categorization with a signal-to-respond procedure. In a category verification task, subjects first saw a basic- or subordinate-level label (e.g., “DOG” or “BEAGLE”) and then verified whether an object (a face, dog, or bird) matched the label. Objects were presented for a variable duration (13 ms to 1664 ms) and were pre- and post-masked. Subjects were required to respond immediately after a response signal at the onset of the post-mask. At long durations (e.g., 1664 ms), basic- and subordinate-level categorization reached ceiling. At intermediate exposure durations (e.g., 416 ms), categorization of dogs and birds (but not faces) at the subordinate level was worse than at the basic level. However, at short exposure durations (<104 ms), categorization of faces at the subordinate level was also worse than at the basic level. This suggests that, even for face experts, categorization at the subordinate level is not as efficient as categorization at the basic level, although face categorization is more efficient than categorization of other objects irrespective of the level of abstraction. Moreover, for all three object categories, the time at which categorization performance rose above chance was identical for the basic and subordinate levels. This finding argues against sequential stages for basic- and subordinate-level object categorization.
This research was supported by a grant from the James S. McDonnell Foundation to the Perceptual Expertise Network, and by NSF grant BCS-9910756 and NIH grant MH61370 to TJP.