Abstract
The brain is constantly confronted with a rich and noisy array of sensory inputs, and its limited sensing abilities require humans and animals to acquire sensory information actively, efficiently, and in a goal-directed manner. A prime example is active visual processing, in which the high-acuity fovea can be directed to only a single location at a time. In this work, we examine how humans combine top-down spatial knowledge about target location with bottom-up sensory inputs to optimize performance in an active visual search task. Specifically, we impose spatial regularity on the target location and examine whether subjects internalize this information and, if so, how it influences motor planning and/or sensory processing during search. We further investigate how subjects' search strategies adapt to the target's spatial distribution across different configurations, revealing how they integrate and extract the relevant statistics over trials. In terms of motor planning, we find that subjects fixate locations sequentially from most likely to least likely. In terms of sensory processing, we find that subjects show a higher false-alarm rate at the more likely target locations; they also take longer to reject a high-probability location when it does not contain the target, and less time to confirm such a location when it does. In terms of learning and adaptation, subjects' performance improves at multiple timescales. In addition, we examine whether motor modality affects performance and find no significant difference between saccadic and manual search conditions. Altogether, our results suggest that subjects can learn an abstract representation of the spatial statistics of the environment and exploit this knowledge to optimize both action planning and sensory processing.