Abstract
Models of visual search usually content themselves with predicting average results (e.g., the mean and, perhaps, the distribution of response times). When they do make precise predictions about single trials, it is under highly reduced conditions (e.g., a few items, only one or two of which are salient). Using a hybrid foraging search paradigm, we have attempted to predict the specific targets that will be selected in a complex display over the course of many seconds. In hybrid foraging, observers search for multiple instances of several types of target. Twenty-two participants performed three search tasks, always foraging for two target types: Feature search (e.g., blue and green squares among yellow and red squares), Conjunction search (e.g., green circles and blue squares among green squares and blue circles), and Spatial Configuration search (e.g., "p" and "d" among "b" and "q"). Each display held 80-140 moving items, 20-30 of which were targets. Observers were instructed to maximize the rate at which they clicked on targets. Targets disappeared when clicked, and observers could switch to a new display at any time. A version of Guided Search was developed that could predict the identity and location of the next item to be selected. It had two basic rules (sketched below): if there was another target of the type just collected within 400 pixels, the closest of those was selected; if not, the closest target of another type was selected. With these rules, we could predict the specific next item found for 60% of target collections in the Feature search, 46% in the Conjunction search, and 39% in the Spatial Configuration search. This outperforms a baseline model that simply selects the closest target of any type, suggesting that search follows highly stereotyped behavior, especially when feature guidance is strong.
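For concreteness, the two-rule selection policy can be expressed as a minimal Python sketch. This is our illustration, not the authors' implementation; the function name `predict_next_target`, the tuple representation of items, and the absence of any tie-breaking rule are assumptions.

```python
import math

def predict_next_target(last_pos, last_type, remaining_targets, radius=400):
    """Predict the next item to be collected under the two-rule model.

    last_pos          -- (x, y) of the target just collected, in pixels
    last_type         -- type label of the target just collected
    remaining_targets -- list of (x, y, type) tuples still on screen
    radius            -- same-type search radius (400 pixels in the abstract)
    """
    def dist(item):
        return math.hypot(item[0] - last_pos[0], item[1] - last_pos[1])

    # Rule 1: if another target of the just-collected type lies within
    # the radius, select the closest of those.
    same_type = [t for t in remaining_targets
                 if t[2] == last_type and dist(t) <= radius]
    if same_type:
        return min(same_type, key=dist)

    # Rule 2: otherwise, select the closest target of another type.
    other_type = [t for t in remaining_targets if t[2] != last_type]
    if other_type:
        return min(other_type, key=dist)

    return None  # no targets remain; the observer may switch displays
```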
Meeting abstract presented at VSS 2016