Abstract
Studies of object recognition have identified two distinct modes of object processing: part-based, which relies on discrete local features, and holistic, which relies on global shape and outline. However, visual perception research often fails to account for the contribution of highly informative local features to recognition achieved through either processing mode. In this experiment, 20 photographs of familiar and visually diverse real-world objects were divided into square segments of equal size. After providing a label for each whole-object stimulus, participants (N = 20) viewed stimuli that accumulated one segment at a time at the center of a computer monitor. Each segment configuration (500 ms) was preceded by a white fixation cross (2 s) and followed by a white-noise mask (300 ms). After each presentation, participants decided whether they could identify the object. If so, they were prompted to enter the identification label they had provided during the preliminary naming task; a new image cue (1.5 s) then appeared on the screen, and the presentation cycle began again with a single segment of a new object stimulus. If the participant could not confidently identify the object, the presentation cycle repeated, with the stimulus accumulating one additional segment per cycle until it was identified. Each object stimulus completed 6 randomized accumulation sequences, yielding a total of 120 trials per participant. Data were analyzed to determine how often each object segment appeared on-screen immediately preceding stimulus identification. Results revealed that certain segments appeared on-screen at identification more frequently than others, suggesting that these visual regions may contain local features or properties that are relatively more informative. Furthermore, the frequency patterns varied across objects, indicating that local-feature saliency may be greater in certain stimuli. We predict that these variations are related to holistic and part-based processing of intact objects.
Meeting abstract presented at VSS 2016