Abstract
Learned associations between objects and sounds influence how we search through the real world (e.g., the siren of an ambulance). Previous studies have demonstrated that non-spatial sounds facilitate visual search performance in artificial displays but not in scene search (Seidel, Draschkow & Võ, 2017). In two experiments, we tested how sounds facilitate search in a setup that mimicked prior hybrid search experiments (Wolfe, 2012). In Experiment 1, participants memorized eight objects along with their characteristic sounds. Subsequently, participants searched for the target objects held in memory. Before the onset of the search display, one of four sound conditions (natural, distractor, scrambled, or no sound) was played. In the natural sound condition, observers heard the sound associated with the target, whereas in the distractor condition they heard a sound associated with another memorized item that was not present in the display. In the scrambled condition, a distorted, unrecognizable sound was played, and in the no-sound condition observers did not hear anything before beginning their search. In Experiment 2, we varied the number of items held in memory across blocks (4, 8, 16) and the number of objects presented in the search display across trials (4, 8, 16). Results from both experiments showed that reaction times were faster for searches accompanied by natural sounds and slower in the distractor condition compared to the neutral condition. Experiment 2 demonstrated that this benefit can be attributed to a change in memory search efficiency, presumably due to a prioritization of the target; we found no change in visual search efficiency. Our results suggest that several factors, such as stimulus material, task relevance, SOA, memorability, and the number of objects held in memory or present in the search display, jointly determine whether the bzzzzzzzz facilitates search.
Acknowledgement: This work was supported by DFG grant VO 1683/2-1 and by SFB/TRR 135 project C7 to MLV. We also thank Nina Jaeschke, Evelyn Richter, Charlotte Habers, Kim-Veronique Knöpfel, Sarah Peetz, and Valerie Tanzberger for assistance with data acquisition.