August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2023
Mainly the actions: Functional knowledge has a primary role in understanding real-world scenes portrayed by either fine or coarse visual information
Author Affiliations
  • Krystian Ciesielski
    School of Psychology, Keele University, UK
  • Andrew Webb
    Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
  • Sara Spotorno
    Psychology Department, Durham University, UK
Journal of Vision August 2023, Vol.23, 5689. doi:https://doi.org/10.1167/jov.23.9.5689
Abstract

Studies on how individuals understand real-world scenes, and form predictions about them, have traditionally focused on taxonomic knowledge about the scene’s content (its structure and the objects it contains). Recently, functional knowledge, which represents the actions afforded by a scene, has been proposed as a fundamental dimension of scene processing. However, it is unclear how these two kinds of knowledge are related, and in particular whether functional scene understanding requires the mediation of object information. We examined how taxonomic (specifically object-based) and functional (action-based) rapid scene understanding use visual information about fine, local features and objects, conveyed by high spatial frequencies (HSF), and coarse, contextual features, conveyed by low spatial frequencies (LSF). In each trial across four experiments, we presented an HSF- or LSF-filtered scene and two object or action words, one highly consistent with the scene and the other inconsistent. Participants reported which word was consistent. In the first two experiments, the words were shown simultaneously as primes, followed by the scene image, which remained on screen until response (Exp.1, online) or for 150 ms followed by a 100 ms frequency-matched pink-noise mask (Exp.2, online). Exp.3 (online) reversed the word-scene sequence, using the visual scene as a prime, with the same presentation times as in Exp.2. Exp.4 (lab-based) used the paradigm of Exp.2, but added a bandstop-filtered condition that retained both HSF and LSF. Responses were faster for action than for object words in all experiments, except in the bandstop condition, which showed no difference. We also found greater accuracy for action words across most experiments. These results suggest that functional knowledge has a primary role in scene understanding when only fine or coarse visual features are provided, regardless of whether it has been activated before the scene instance is encountered, and that it does not require the mediation of object information.
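
For readers unfamiliar with spatial-frequency filtering of scene stimuli, the sketch below illustrates the general technique in Python. It is not the authors' stimulus pipeline: the abstract does not specify the filter type or cutoff frequencies, so the Gaussian Fourier-domain filter, the helper name sf_filter, and the cutoff values are illustrative assumptions only.

```python
# Minimal sketch (assumed implementation, not the authors' method):
# spatial-frequency filtering of a greyscale scene image using Gaussian
# masks in the Fourier domain. Cutoffs are in cycles/image and are
# illustrative values, not taken from the abstract.
import numpy as np

def sf_filter(img, low_cut=None, high_cut=None):
    """Keep frequencies above low_cut (high-pass) and/or below high_cut (low-pass)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h          # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w)[None, :] * w          # horizontal frequency, cycles/image
    radius = np.sqrt(fx ** 2 + fy ** 2)          # radial spatial frequency
    mask = np.ones_like(radius)
    if high_cut is not None:                     # low-pass: keep coarse (LSF) content
        mask *= np.exp(-(radius / high_cut) ** 2)
    if low_cut is not None:                      # high-pass: keep fine (HSF) content
        mask *= 1 - np.exp(-(radius / low_cut) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# Illustrative usage with assumed cutoffs:
# lsf_scene = sf_filter(scene, high_cut=8)    # coarse, contextual information
# hsf_scene = sf_filter(scene, low_cut=24)    # fine, local information
# bandstop  = sf_filter(scene, high_cut=8) + sf_filter(scene, low_cut=24)
#             # approximates a bandstop image retaining both LSF and HSF
```

A frequency-matched noise mask, as used in Exp.2-4, could be produced along the same lines by passing a pink-noise image through the same filter as the preceding scene; again, this is an assumption about a plausible implementation, not a description of the authors' procedure.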
