Abstract
Does reading a semantically similar sentence description of a scene make you faster at subsequently detecting that scene in a visual search task? How does this vary across individuals when everyone has different conceptual knowledge? To investigate the former, 95 subjects were asked to identify a target natural scene in a visual search experiment in which the display of a target scene and five semantically related distractors was preceded by a sentence prime. Every scene from our stimulus set was sampled as a target under three conditions: the prime was either identical to the target, semantically halfway between the target and the farthest item from it, or the farthest item itself. The semantic distance between each pair of items was the averaged Euclidean distance from the multi-arrangement (MA) task collected as part of the Natural Scenes Dataset (NSD). A subset of 27 subjects also completed the MA task on both the linguistic primes and the scene targets, allowing us to use these idiosyncratic distances as predictors of each subject’s response times (RTs). A generalised linear mixed-effects model (GLMM) revealed that, after removing the zero-distance condition (prime identical to the target), the averaged NSD distances (n = 84, slope = .953, t = 5.229, p < .001) did indeed predict RTs. Strikingly, the strongest effect emerged when the idiosyncratic distances from the caption MA (n = 26, slope = 1.165, t = 3.105, p < .01) were used as predictors. These findings are significant in at least two major respects: firstly, linguistic primes facilitate detection of visual targets more strongly the more semantically related they are to those targets; secondly, this facilitation appears to be closely tied to subject-specific similarity judgements, which extends previous accounts of semantic priming and opens a discussion of the level at which linguistic input intertwines with one's visual representations.
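As a rough illustration of the kind of analysis reported above (not the authors' actual code), the sketch below fits a mixed-effects model relating prime-target semantic distance to log-transformed RTs, with by-subject random intercepts and random distance slopes. The column names, the synthetic data, and the use of a linear mixed model on log RTs as a stand-in for the GLMM are all illustrative assumptions.

```python
# Minimal sketch, assuming a linear mixed model on log RTs as a stand-in for the
# GLMM described in the abstract; fit with statsmodels. All column names and the
# synthetic data below are illustrative assumptions, not the study's materials.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_items = 95, 84

rows = []
for subj in range(n_subjects):
    subj_intercept = rng.normal(0.0, 0.05)   # subject-specific baseline shift
    subj_slope = rng.normal(0.0, 0.05)       # subject-specific sensitivity to distance
    for item in range(n_items):
        distance = rng.uniform(0.1, 1.0)     # prime-target semantic distance (arbitrary units)
        log_rt = 6.5 + subj_intercept + (0.3 + subj_slope) * distance + rng.normal(0.0, 0.2)
        rows.append({"subject": subj, "item": item,
                     "distance": distance, "log_rt": log_rt})
df = pd.DataFrame(rows)

# Fixed effect of semantic distance (analogous to the slopes reported above),
# with by-subject random intercepts and random slopes for distance.
model = smf.mixedlm("log_rt ~ distance", data=df,
                    groups=df["subject"], re_formula="~distance")
result = model.fit()
print(result.summary())
```

A positive fixed-effect coefficient on `distance` in such a model corresponds to the reported pattern: the semantically farther the prime is from the target, the slower the detection.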