Abstract
Modern image-based models of search prioritize fixation locations using target maps that capture visual evidence for a target goal. But although many such models are biologically plausible, none have looked to the oculomotor system for design inspiration or parameter specification. These models also focus disproportionately on specific target exemplars, ignoring the fact that many important targets are categories (e.g., weapons, tumors). We introduce MASC, a Model of Attention in the Superior Colliculus (SC). MASC differs from other image-based models in that it is grounded in the neurophysiology of the SC, a midbrain structure implicated in programming saccades, the very behaviors to be predicted. It first creates a target map in one of two ways: by comparing a target image to objects in a search display (exemplar search), or by using an SVM classifier trained on the target category to estimate the probability that search-display objects are members of the target category (categorical search). MASC then projects this target map into the foveally magnified space of the SC, where cascading operations average priority signals over visual and motor neural populations. The motor populations compete, and the vector average of the winning population determines the next saccade. We evaluated MASC against exemplar and categorical search datasets in which two groups of 15 subjects viewed identical search displays after presentation of exemplar or categorical target cues. MASC predicted the saccade distance traveled to the target and the proportion of immediate target fixations nearly as well as a Subject model created using the leave-one-out method. MASC's success stems from its incorporation of constraints from the saccade-programming literature. Whereas most models of search explore different algorithms and parameter spaces to minimize prediction error, MASC takes its parameters directly from the brain.
The brain has already found the optimal parameters for search, and it is to the brain that we should look for model inspiration.
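The final stage of the pipeline described above, in which motor populations compete and the winning population's vector average determines the saccade, can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition only, not MASC's actual implementation: the population radius `sigma` and the Gaussian weighting are assumptions, and the real model operates in the foveally magnified SC space rather than raw image coordinates.

```python
import numpy as np

def vector_average_saccade(priority_map, sigma=2.0):
    """Toy sketch of population-vector saccade selection.

    The winning motor population is taken to be a Gaussian neighborhood
    around the peak of the priority map; the saccade endpoint is the
    activity-weighted vector average of locations in that population.
    (sigma is an assumed population radius, not a value from the model.)
    """
    h, w = priority_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Competition resolved by the global peak of priority (winner-take-all).
    py, px = np.unravel_index(np.argmax(priority_map), priority_map.shape)
    # Restrict activity to the winning population's neighborhood.
    weights = priority_map * np.exp(
        -((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2)
    )
    # Vector average: activity-weighted mean location of the population.
    y = (weights * ys).sum() / weights.sum()
    x = (weights * xs).sum() / weights.sum()
    return y, x
```

With a priority map containing a single active location, the vector average lands exactly on that location; with distributed activity, the endpoint is pulled toward the population's center of mass, consistent with the averaging behavior the abstract describes.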
Meeting abstract presented at VSS 2016