Abstract
Experiments exploring visual search in familiar real-world scenes usually consider target detection in individual scenes; they rarely consider how the relationship between successive scenes influences detection. In particular, they do not address how the ability to predict the form and structure of a forthcoming scene influences target detection. In this study we explored the effect of the temporal order of scenes on the detection of targets placed within photographs of residential areas. In one condition the temporal order was consistent with a route being followed by the participant; in a second condition the temporal order was randomized. Targets were tools, with twenty different tools used. All tools fitted naturally within the context of every scene and were placed in positions where they would ordinarily occur. Forty scenes were used in each condition. Participants made target present/absent decisions by button press across eight iterations of the forty scenes, with target position changing across iterations in a pseudorandom manner. Following responses in the route condition, each scene remained on the screen for one second so that a cue could be presented indicating the position of the next scene; no cue was presented in the randomized condition. Response latency, accuracy, and eye movements were recorded. The results showed a learning effect across the first three blocks, with improvements in the speed and accuracy of target detection and a numerical reduction in the number of fixations. Surprisingly, we found no evidence that presenting the images in a predictable order influenced response or eye movement measures. The results suggest that viewing scenes in a fixed temporal order, as if following a route, is of limited benefit to visual search when target position is unpredictable.
Meeting abstract presented at VSS 2015