Abstract
Visual search, an everyday task, becomes particularly challenging when the time available to find the target is limited. In a pilot study, we investigate whether a subtle modulation of visual content, guided by the scene's saliency map, can improve visual search performance in terms of search time and the proportion of trials in which the target is not found within a time limit. To do so, a set of naturalistic omnidirectional images was displayed in virtual reality (VR), with a search target overlaid on the visual scene at a random location. During the experiment, each of five participants performed a visual search task in this virtual environment, aiming to find the search target as quickly as possible within 20 seconds. By subtly modulating the visual content, we intended to redirect the observer's attention away from salient regions of the scene and thereby enable the participant to find the search target faster. The scenes were modified by applying blur whose strength varied spatially according to saliency maps computed for the displayed omnidirectional images before the experiment. Specifically, the most salient regions of the scenes were blurred by convolving the original image with a Gaussian kernel; the maximal blur strength was controlled via the standard deviation of the Gaussian filter, defining three experimental conditions. The mean search time, as well as the proportion of trials in which participants failed to find the target, were compared across blur strengths. A linear mixed-effects model analysis revealed a significant decrease in search time and a significant reduction in the proportion of failed trials as a function of blur strength. Thus, this pilot study demonstrates that visual search performance in realistic 3D scenes can be improved by applying a subtle, saliency-aware scene modulation.
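The saliency-weighted blur described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pixel-wise blending rule, the `sigma_max` parameter name, and the min-max normalization of the saliency map are assumptions introduced here for clarity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_aware_blur(image, saliency, sigma_max=8.0):
    """Blur the most salient regions of an image (sketch, assumptions noted above).

    image:    H x W x C float array in [0, 1]
    saliency: H x W float array; higher values mean more salient
    sigma_max: standard deviation of the Gaussian kernel at full saliency
    """
    # Normalize saliency to [0, 1] so it can act as a blending weight
    # (assumed normalization; the paper does not specify one).
    s = saliency.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)

    # Gaussian blur applied spatially only (sigma 0 along the channel axis).
    blurred = gaussian_filter(image, sigma=(sigma_max, sigma_max, 0))

    # Blend original and blurred images pixel-wise: salient regions receive
    # the blurred version, non-salient regions stay sharp.
    w = s[..., None]
    return (1.0 - w) * image + w * blurred
```

Varying `sigma_max` across conditions corresponds to the three blur-strength conditions described in the study; at `sigma_max = 0` the scene is unmodified.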