Abstract
While objects seen under optimal viewing conditions are categorized rapidly, categorizing objects under impoverished viewing conditions (e.g., unusual views, fog, occlusion) requires more time and may depend more on top-down processing, as hypothesized by object model verification theory. Object categorization involves matching incoming perceptual information to a stored visuostructural representation in long-term memory. Functional magnetic resonance imaging (fMRI) work found evidence for model verification theory and implicated the ventrolateral prefrontal cortex (VLPFC) and parietal lobe in top-down modulation of posterior visual processes during the categorization of impoverished images of objects. We replicated the fMRI study with event-related potentials (ERPs) to determine the time course of model verification processes. The two-state interactive account of visual object knowledge predicts that top-down model verification processes modulate the object model selection processes supporting categorization during a frontopolar N350, and later categorization processes during a parietal late positive complex (LPC), but not the earlier feedforward processes of perceptual categorization during a P150-N170 complex. Twenty-four participants categorized fragmented line drawings of known objects. Impoverishment was defined by a median split of response times, with slower times defining more impoverished (MI) objects and faster times defining less impoverished (LI) objects. As predicted, after 200 ms, the N350 was larger for MI than for LI objects, whereas the LPC was smaller for MI than for LI objects. Consistent with the two-state interactive account, object model selection processes supporting categorization occur after 200 ms and can be modulated by top-down model verification processes implemented in VLPFC-parietal networks to facilitate object constancy under impoverished viewing conditions.
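To illustrate the median-split operationalization of impoverishment described above, here is a minimal sketch of a per-object split of response times into MI and LI conditions. It assumes a trial-level table with hypothetical column names ("item", "rt_ms") and per-item averaging before the split; these details are illustrative assumptions, not the authors' analysis code.

```python
import pandas as pd

# Hypothetical trial-level data: one row per categorization trial.
# Column names ("item", "rt_ms") are illustrative assumptions.
trials = pd.DataFrame({
    "item": ["dog", "dog", "chair", "chair", "car", "car", "lamp", "lamp"],
    "rt_ms": [612, 655, 890, 910, 540, 580, 1010, 975],
})

# Average response time per object (fragmented line drawing).
item_rt = trials.groupby("item")["rt_ms"].mean()

# Median split: objects slower than the median are "more impoverished" (MI),
# objects at or below the median are "less impoverished" (LI).
median_rt = item_rt.median()
condition = item_rt.apply(lambda rt: "MI" if rt > median_rt else "LI")

print(pd.DataFrame({"mean_rt_ms": item_rt, "condition": condition}))
```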