Vision Sciences Society Annual Meeting Abstract  |  August 2023
Volume 23, Issue 9
Open Access
The semantic distance between a linguistic prime and a natural scene target predicts reaction times in a visual search experiment
Author Affiliations
  • Katerina Marie Simkova
    CHBH, School of Psychology, University of Birmingham
  • Jasper JF van den Bosch
    CHBH, School of Psychology, University of Birmingham
  • Damiano Grignolio
    CHBH, School of Psychology, University of Birmingham
  • Clayton Hickey
    CHBH, School of Psychology, University of Birmingham
  • Ian Charest
    cerebrUM, Département de Psychologie, Université de Montréal
Journal of Vision August 2023, Vol.23, 5055. doi:https://doi.org/10.1167/jov.23.9.5055
Abstract

Does reading a semantically similar sentence description of a scene make you faster at subsequently detecting that scene in a visual search task? And how does this vary across individuals, given that everyone has different conceptual knowledge? To address the first question, 95 subjects were asked to identify a target natural scene in a visual search experiment in which a display containing the target scene and five semantically related distractors was preceded by a sentence prime. Every scene in our stimulus set was sampled as a target under three conditions: the prime was either identical to the target, halfway between the target and the item semantically farthest from the target, or that farthest item itself. The semantic distances between each pair of items were the averaged Euclidean distances from the multi-arrangement (MA) task collected as part of the Natural Scenes Dataset (NSD). A subset of 27 subjects also completed the MA task on both the linguistic primes and the scene targets, allowing us to use these idiosyncratic distances as predictors of each subject's RTs. A generalised linear mixed-effects model (GLMM) revealed that, after removing the zero-distance condition (prime identical to the target), the averaged NSD distances (n = 84, slope = .953, t = 5.229, p < .001) do indeed predict RTs. Strikingly, the strongest effect emerged when the idiosyncratic distances from the caption MA task (n = 26, slope = 1.165, t = 3.105, p < .01) were used as predictors. These findings are significant in at least two respects: first, a linguistic prime facilitates detection of a visual target in proportion to their semantic relatedness; second, this facilitation is closely tied to subject-specific similarity judgements, which extends previous knowledge of semantic priming and raises the question of at what level linguistic input intertwines with one's visual representations.
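
The following is a minimal sketch, not the authors' code, of the two analysis steps described in the abstract: choosing the three prime conditions for each target from a pairwise semantic-distance matrix, and modelling RTs as a function of prime-target semantic distance with a random intercept per subject. Variable names, data shapes, and the column names 'subject', 'rt', and 'distance' are assumptions, and a Gaussian linear mixed model (statsmodels MixedLM) stands in for the GLMM reported in the abstract.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def prime_conditions(dist: np.ndarray, target: int) -> dict:
    """For one target scene, return indices of the three primes:
    the target itself (zero distance), the item roughly halfway between
    the target and its semantically farthest item, and the farthest item.
    `dist` is a square matrix of pairwise semantic distances (assumed)."""
    d = dist[target]
    farthest = int(np.argmax(d))
    # item whose distance from the target is closest to half the maximum
    halfway = int(np.argmin(np.abs(d - d[farthest] / 2.0)))
    return {"same": target, "halfway": halfway, "farthest": farthest}

def fit_rt_model(trials: pd.DataFrame):
    """Mixed model of RT on semantic distance, excluding zero-distance
    (identical-prime) trials, with a random intercept per subject."""
    nonzero = trials[trials["distance"] > 0]
    model = smf.mixedlm("rt ~ distance", nonzero, groups=nonzero["subject"])
    return model.fit()

In this sketch the 'distance' column could be filled either with the group-averaged NSD multi-arrangement distances or with each subject's own arrangement distances, mirroring the two analyses reported above.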
