Vision Sciences Society Annual Meeting Abstract  |   September 2011
Viewpoint and Exemplar Generalization in Visual Prediction
Author Affiliations
  • Olivia Cheung
    Martinos Center, Massachusetts General Hospital, Harvard Medical School
  • Moshe Bar
    Martinos Center, Massachusetts General Hospital, Harvard Medical School
Journal of Vision September 2011, Vol.11, 858. doi:10.1167/11.11.858
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Efficient visual recognition appears to be facilitated by the integration of top-down and bottom-up processes. According to a top-down facilitation model (Bar et al., 2006), low spatial frequencies (LSF) are rapidly extracted in early visual areas and projected to the orbitofrontal cortex to generate top-down predictions about the likely identity of an object. Because these predictions are proposed to derive from LSF, it is hypothesized that the same predictions may be activated by inputs that differ somewhat in appearance, such as objects viewed from different orientations or different exemplars of a category. Here we examined how LSF and high spatial frequencies (HSF) facilitate recognition by manipulating viewpoint and exemplar similarity in a repetition-priming paradigm. A briefly presented (30-150 ms) prime object, either LSF- or HSF-filtered, was followed by a mask and then an intact target object. Response times for target recognition were faster when the prime and target depicted the identical rather than a different object, in both the LSF and HSF conditions. Although the priming effects increased with longer prime exposure durations, their magnitude was comparable across depth rotations (up to 60°) at all time points, suggesting that multiple viewpoint representations of an object may be triggered during early processing (Experiment 1). Experiment 2 revealed comparable LSF priming when the prime and target showed the same item or a visually similar exemplar from the same category, but not when the two exemplars differed markedly in appearance. For HSF, in contrast, priming was stronger for the identical item than for a visually similar one. Consistent with the top-down facilitation model, these results suggest that although both LSF and HSF may support viewpoint-general representations during initial processing, LSF is critical for activating a small set of probable interpretations of the input that can fit multiple similar objects and/or objects seen from multiple viewpoints.

This work was supported by NIH grant 1R01EY019477-01 and DARPA grant N10AP20036. 