Abstract
How do humans use predictive contextual information to facilitate visual search? A neural model explains challenging psychophysical data on positive vs. negative, spatial vs. object, and local vs. global cueing effects during visual search. The model also clarifies data from neuroanatomy, neurophysiology, and neuroimaging concerning the role of subregions in prefrontal cortex, medial temporal lobe, and visual cortices during visual search. In particular, model cells in dorsolateral prefrontal cortex prime possible target locations in posterior parietal cortex based on bottom-up activation of a representation of scene gist in parahippocampal cortex. Model ventral prefrontal cortex cells prime possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. Through simulations, the proposed model illustrates the dynamic processes of evidence accumulation in visual search, which incrementally integrate available spatial and object constraints to limit the search space, and offers new insights into the complex interplay among What and Where cortical areas orchestrating scene perception and scene memory.
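To make the priming-and-accumulation idea concrete, the sketch below shows one simple way the two priming pathways described above could combine in an evidence-accumulation race over candidate locations: a spatial prior (standing in for the gist-driven dorsolateral prefrontal to posterior parietal pathway) multiplies an object-match signal (standing in for the identity-driven ventral prefrontal to inferior temporal pathway) to set each location's drift rate. This is a toy illustration under assumed leaky-accumulator dynamics, not the model's actual equations; all variable names, array shapes, and parameter values (leak, threshold, noise level) are hypothetical.

```python
import numpy as np

# Toy sketch of priming-based evidence accumulation in visual search.
# All dynamics and parameters here are illustrative assumptions,
# not the published model's equations.

rng = np.random.default_rng(0)

n_locations = 9          # candidate locations on an assumed 3x3 grid
target_location = 4

# Spatial prior: scene-gist-driven priming of likely target locations
# (the DLPFC -> PPC pathway in the abstract), here a fixed bias.
spatial_prior = np.full(n_locations, 0.05)
spatial_prior[[3, 4, 5]] = 0.25   # gist suggests the middle row

# Object prior: identity-driven priming (the VPFC -> IT pathway),
# modeled as a match score between the expected target features
# and the object actually present at each location.
object_match = rng.uniform(0.0, 0.3, n_locations)
object_match[target_location] = 0.9

def search(spatial_prior, object_match, threshold=3.0, leak=0.05,
           noise_sd=0.05, max_steps=500):
    """Leaky evidence accumulation over candidate locations.

    Each location accumulates evidence proportional to the product of
    its spatial prior and its object-match signal; the first location
    to cross `threshold` is reported as the selected target.
    """
    evidence = np.zeros(n_locations)
    for step in range(1, max_steps + 1):
        drift = spatial_prior * object_match
        evidence += drift - leak * evidence \
                    + rng.normal(0.0, noise_sd, n_locations)
        evidence = np.maximum(evidence, 0.0)  # rates stay nonnegative
        winner = int(np.argmax(evidence))
        if evidence[winner] >= threshold:
            return winner, step
    return None, max_steps

found, steps = search(spatial_prior, object_match)
print(f"selected location {found} after {steps} steps "
      f"(true target: {target_location})")
```

The multiplicative combination of the two priors is one design choice among several; because only the target location receives both strong spatial and strong object priming, its accumulator reaches threshold first, which is the sense in which the constraints "limit the search space."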
Both authors are supported in part by the National Science Foundation (NSF SBE-0354378).