Abstract
Efficient visual recognition appears to be facilitated by the integration of top-down and bottom-up processes. According to a top-down facilitation model (Bar et al., 2006), low spatial frequencies (LSF) are rapidly extracted in early visual areas and projected to the orbitofrontal cortex to generate top-down predictions about the potential identity of an object. Because these top-down predictions are proposed to be derived from LSF, the same predictions may be activated for inputs that differ somewhat in appearance, such as objects viewed from different orientations or different exemplars from a category. Here we examined how LSF and high spatial frequencies (HSF) facilitate recognition by manipulating viewpoint and similarity in a repetition-priming paradigm. A briefly presented (30–150 ms) prime object, either LSF- or HSF-filtered, was followed by a mask and then an intact target object. Response times (RT) for target recognition were faster when the prime and target showed identical rather than different instances, in both the LSF and HSF conditions. Although the priming effects increased with longer prime exposure durations, their magnitude was comparable across depth rotations (up to 60°) at all time points, suggesting that multiple representations of objects across viewpoints may be triggered during early processing (Experiment 1). Experiment 2 revealed comparable LSF priming when the prime and target showed the same item or a visually similar exemplar from the same category, but not when the two exemplars had distinct visual appearances. For HSF, however, priming was stronger for the exact item than for a visually similar item.
Consistent with the top-down facilitation model, these results suggest that although both LSF and HSF may support viewpoint-general representations during initial processing, LSF is critical for activating a small set of probable interpretations of the input that may fit multiple similar objects and/or objects seen from multiple viewpoints.
This work was supported by NIH grant 1R01EY019477-01 and DARPA grant N10AP20036.