Abstract
Hulleman (2010) reported equal performance for easy visual search amongst static items and amongst moving items. We investigated a possible role of motion filtering (McLeod et al., 1988) in achieving this robustness. Method: Participants searched for a T amongst L's. Display sizes were 12 and 18 items. In Experiment 1, the target and half of the distractors always moved at 7.2 deg/s. The remaining distractors all moved at 7.2, 3.6, 1.8, 0.9, 0.45, or 0 deg/s. Half of the participants knew the target velocity. Experiment 2 contained four motion conditions (deg/s): target 7.2, all distractors 7.2; target 0, all distractors 0; target 7.2, half of the distractors 0 and half 7.2; and target 0, half of the distractors 0 and half 7.2. All participants knew the target velocity. Eye movements were recorded. Results: In Experiment 1, knowing the target velocity did not improve search performance. For both groups, reaction times on target-present trials became only marginally faster (around 40 ms) as the velocity of half of the distractors was reduced to 0 deg/s, whereas reaction times on target-absent trials decreased by around 220 ms. Experiment 2 yielded similar results. The larger decrease in reaction times on absent trials was caused by an increased willingness to terminate search, rather than by improved search efficiency due to a halving of the number of potential targets. Importantly, participants did use the velocity instruction: the average gaze distance to items moving at the target velocity was always smaller than to items moving at a different velocity. Conclusion: Neither the velocity of the search items nor velocity differences between them had a large influence on search efficiency. Hence, motion filtering plays only a very limited role when targets are not motion conjunctions. This suggests that previous results taken as support for motion filtering in visual search might be better interpreted as evidence for motion-based depth stratification.
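For illustration, the sketch below shows one way the gaze-distance measure could be computed from eye-tracking samples: the mean distance from each gaze sample to items that share the target velocity versus items that do not. This is a minimal, hypothetical Python sketch; the function name, array shapes, and toy data are assumptions for exposition and do not reproduce the authors' analysis code.

import numpy as np

def mean_gaze_distance(gaze_xy, item_xy, item_speeds, target_speed):
    """Mean Euclidean distance (deg) from gaze samples to items that move
    at the target velocity vs. items that move at a different velocity.

    gaze_xy      : (n_samples, 2) gaze positions
    item_xy      : (n_samples, n_items, 2) item positions at each sample
    item_speeds  : (n_items,) item speeds in deg/s
    target_speed : target speed in deg/s
    """
    # Distance from every gaze sample to every item at that sample.
    dists = np.linalg.norm(item_xy - gaze_xy[:, None, :], axis=2)
    same = item_speeds == target_speed  # items sharing the target velocity
    return dists[:, same].mean(), dists[:, ~same].mean()

# Toy data: 100 gaze samples, 12 items, half moving at 7.2 deg/s and half static.
rng = np.random.default_rng(0)
gaze = rng.uniform(0, 20, size=(100, 2))
items = rng.uniform(0, 20, size=(100, 12, 2))
speeds = np.array([7.2] * 6 + [0.0] * 6)

d_same, d_other = mean_gaze_distance(gaze, items, speeds, target_speed=7.2)
print(f"mean distance to target-velocity items: {d_same:.2f} deg")
print(f"mean distance to other-velocity items:  {d_other:.2f} deg")

Under this measure, a smaller mean distance to target-velocity items than to other-velocity items would indicate that gaze was biased towards items sharing the instructed target velocity, which is the pattern reported in the abstract.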
Supported by The Leverhulme Trust (F/00 181/T).