October 2020, Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
The Interactive Effects of Scenes and Actions During Mental Model Construction
Author Affiliations
  • Adam Larson
    The University of Findlay
  • Carrigan Milner
    The University of Findlay
  • Bailey Rader
    The University of Findlay
  • Dalton Shevlin
    The University of Findlay
Journal of Vision October 2020, Vol.20, 1672. doi:https://doi.org/10.1167/jov.20.11.1672
      © ARVO (1962-2015); The Authors (2016-present)


When we observe situations, in film or in real life, we begin to build a mental model of the situation in working memory. The model encodes semantic concepts such as the situation’s scene category (e.g., ‘Kitchen’) and people’s actions (e.g., ‘Cooking’). Our previous research showed that the time course of mental model construction begins with recognizing the scene category, followed by the action. This suggests that previously stored scene content could facilitate subsequent action recognition. The current study examined whether the scene category can facilitate action categorization, and whether any such facilitation is due to low-level scene information. Our experiment presented actions while the image background was manipulated to contain the original scene background, a gray background, or a texture background. The texture condition was created by generating a texture from each original scene image; the action was then cropped from the original scene image and placed onto its corresponding texture. Critically, low-level scene information was similar between the scene and texture background conditions, while the two conditions differed in scene category recognizability. Texture masks were also used to manipulate image processing time, from 24 to 660 ms SOA. After the mask, an action category post-cue was presented, and participants made a Yes/No response to the validity of the cue. The results showed that action sensitivity was greater for the gray background condition than for the scene background at 24 ms. Conversely, at 330 ms this effect reversed, indicating an interference effect during the earliest stage of mental model construction and facilitation at a later stage. Furthermore, our data show that action sensitivity was greater for the scene background condition than for the texture background, indicating that the facilitation effect was not due to the low-level scene information contained in the texture. Instead, action facilitation may be due to recognizing the scene.
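The abstract does not specify how action sensitivity was computed; in Yes/No post-cue paradigms, sensitivity is commonly indexed by signal-detection d′ from hit and false-alarm rates. The sketch below is illustrative only, not the authors' analysis code; the function name and the log-linear correction are our assumptions.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') from raw response counts.

    Uses a log-linear correction (add 0.5 to each cell count) so that
    rates of exactly 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one background condition at one SOA:
# 45 hits, 5 misses, 5 false alarms, 45 correct rejections.
sensitivity = d_prime(45, 5, 5, 45)
```

Comparing d′ across the gray, scene, and texture background conditions at each SOA would then reveal the interference-then-facilitation pattern the abstract reports.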

