Abstract
Efficient object recognition is facilitated by the integration of top-down and bottom-up processes. One proposed top-down mechanism is that coarse global shapes, conveyed by low spatial frequencies (LSF), are projected rapidly from early visual areas to the orbitofrontal cortex (OFC) to facilitate subsequent processing (Bar et al., 2006). To understand the nature of object representations in the OFC, we examined how this region may use visual and semantic object information to generate visual predictions. We hypothesized that LSF are useful for generating possible interpretations that facilitate recognition of exemplars sharing similar global shapes (e.g., collie/beagle), and less so for exemplars with dissimilar global shapes (e.g., collie/chihuahua). In an fMRI repetition-priming paradigm, we manipulated the spatial frequency content of the primes and the visual-semantic relations between the primes and targets. The primes could be unfiltered or contain either LSF or high spatial frequencies (HSF). The primes and targets could be identical or different objects, or visually similar or dissimilar exemplars from the same category. Participants (N=19) judged whether each unfiltered target was a real-world or nonsense object. Behavioral priming, as revealed by faster RTs for identical than for different objects, was observed with unfiltered and LSF primes, but not with HSF primes. The priming effect was also stronger for similar than for dissimilar exemplars. This pattern of priming results was reflected in activity in the ventral visual stream. Critically, the LSF-driven facilitation also revealed differential activations among the prime-target relations in the object-sensitive bilateral medial OFC (defined functionally in a separate one-back task). Specifically, significantly stronger activations were observed for similar than for dissimilar exemplars within a category.
These results add important evidence that the OFC generates possible interpretations of visual input based on LSF, and that similar global shapes effectively engage this region for top-down facilitation of object recognition.
Meeting abstract presented at VSS 2013