Abstract
Finding a particular object, such as your car keys, requires your brain to compare visual information about the content of the currently viewed scene with working memory information about the identity of your target. This comparison is thought to be implemented via feedback to the ventral visual pathway such that working memory signals influence how visual information is processed, but these mechanisms are poorly understood. To investigate, we recorded the responses of neurons in inferotemporal cortex (IT) and a nearby projection area, perirhinal cortex (PRH), as subjects performed a task that required them to find targets in sequences of distractors. As documented by others, neurons in IT and PRH tended to reflect difficult-to-understand mixtures of task-relevant information. We thus turned to population-based approaches to probe the degree to which the combined population response in each brain area reflected the task solution, i.e., whether an image was a target or a distractor. We found that total information for this task, assessed by an ideal observer read-out, was matched in the two brain areas, whereas linearly separable information, assessed by a linear read-out, was higher in PRH. These results reveal that during target search, the comparison of visual and working memory information is implemented across at least two processing stages: one in which visual and working memory signals are combined to produce a nonlinearly separable IT target match representation, followed by computations in PRH that reformat this representation such that it can be accessed by a linear read-out.
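The distinction between total and linearly separable population information can be illustrated with a minimal sketch on synthetic data. The example below is purely conceptual, not the analysis pipeline used in the study: it simulates a population in which "target match" is encoded as a nonlinear (XOR-like) interaction between a working-memory-like variable and a visual-identity-like variable, then compares a linear read-out against a more flexible nonlinear read-out standing in for an ideal observer. All variable names, population sizes, and decoder choices (scikit-learn's linear and RBF support vector classifiers) are illustrative assumptions.

```python
# Conceptual sketch: linearly separable vs. total information in a population.
# Synthetic data only; decoders are generic stand-ins, not the study's read-outs.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Task variables: is the viewed image a target match, and which image is shown?
is_target = rng.integers(0, 2, n_trials)          # 1 = target match, 0 = distractor
image_id = rng.integers(0, 2, n_trials)           # nuisance: visual identity
xor_signal = (is_target ^ image_id).astype(float) # nonlinear mixture of the two

# Half the simulated neurons carry the XOR-like conjunction, half carry identity;
# the class means for target vs. distractor are identical, so the information
# about "target match" is present but not linearly separable.
responses = rng.normal(0.0, 1.0, (n_trials, n_neurons))
responses[:, :25] += 2.0 * xor_signal[:, None]
responses[:, 25:] += 2.0 * image_id[:, None]

linear_acc = cross_val_score(SVC(kernel="linear"), responses, is_target, cv=5).mean()
flexible_acc = cross_val_score(SVC(kernel="rbf"), responses, is_target, cv=5).mean()

print(f"linear read-out accuracy (linearly separable information): {linear_acc:.2f}")
print(f"nonlinear read-out accuracy (proxy for total information):  {flexible_acc:.2f}")
```

In this toy setting the linear decoder performs near chance while the flexible decoder recovers the target-match signal, mirroring the abstract's contrast: a population can contain ample total information about the task solution even when little of it is accessible to a linear read-out.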