Abstract
Introduction: Feature-based visual attention can be quantified by deriving "attention filters" for any continuously variable feature (e.g., contrast) using the centroid paradigm (Sun et al., 2016, AP&P, 78: 474–515). In this paradigm, subjects attempt to estimate the centroid of items with a certain feature value (e.g., the next-to-lowest contrast) among distractor items with other feature values (e.g., higher and lower contrasts). In this work, we introduce a dynamic version of the centroid paradigm and use it to derive attention filters for motion statistics.

Methods: Subjects (15 naïve, 2 authors) were instructed to follow, using a trackpad, the centroid of a target subset of small elements (11 × 11 arcmin, light gray against a darker background) that moved with a certain distribution of temporal frequencies amongst two distractor subsets drawn from different temporal-frequency distributions.

Results: Attention filters were near-optimal when only a single target was tracked, but departed increasingly from optimality as the number of targets increased. Subjects performed the task well when asked to track targets with the highest or lowest average speeds, weighting the targets more heavily in their centroid estimates than the distractors. However, subjects performed poorly when asked to track targets with the middle average speed: in most cases they weighted the slow-moving distractors more heavily in their centroid estimates than the actual targets.

Discussion: The continuous-tracking version of the centroid task allows motion statistics to be studied as bases for attention or stimulus segregation. It also adds tremendous statistical power: because response data are collected at 60 Hz, very stable filter estimates can be obtained in a very short amount of experimental time. Further, naïve subjects found the task intuitive; with no explicit training, their data were not only very stable but also indistinguishable from those of experienced observers.
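To make the filter-derivation step concrete, the following is a minimal sketch (in Python) of how attention-filter weights could be estimated from continuous-tracking data of the kind described above. It is illustrative, not the authors' actual analysis pipeline: it assumes the cursor position at each 60 Hz sample is modeled as a weighted sum of the instantaneous centroids of the target subset and the two distractor subsets, with weights recovered by least squares; the function name, array shapes, and the omission of any stimulus–response lag correction are assumptions made for the sketch.

    import numpy as np

    def estimate_attention_filter(cursor_xy, subset_centroids_xy):
        # cursor_xy: (T, 2) array of cursor positions sampled at 60 Hz.
        # subset_centroids_xy: (T, K, 2) array giving the instantaneous
        # centroid of each of K stimulus subsets (here K = 3: one target
        # subset and two distractor subsets).
        # Model: cursor(t) ~ sum_k w[k] * centroid_k(t). The weights w are
        # recovered by least squares, pooling x and y coordinates into a
        # single regression. (A fuller analysis would also estimate and
        # remove the temporal lag between stimulus and response; that step
        # is omitted here.)
        T, K, _ = subset_centroids_xy.shape
        X = subset_centroids_xy.transpose(0, 2, 1).reshape(2 * T, K)
        y = cursor_xy.reshape(2 * T)
        weights, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        # Normalize the weights to sum to 1, reflecting the idea that a
        # centroid response is a weighted average of element positions.
        # An ideal observer would then place weight 1 on the target subset
        # and 0 on each distractor subset.
        return weights / weights.sum()

In practice, estimates from many trials would be pooled, and the regression could include an intercept or an explicit motor-lag term; the sum-to-one normalization is one common convention for expressing attention filters as relative weights.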
Acknowledgement: NIH grants R01 EY-020592 and U01 NS094330.