Vision Sciences Society Annual Meeting Abstract | September 2015
When Does Scene Categorization Inform Action Recognition?
Author Affiliations
  • Adam Larson
    Psychology Department, University of Findlay
  • Melinda Lee
    Psychology Department, University of Findlay
Journal of Vision September 2015, Vol.15, 118. doi:https://doi.org/10.1167/15.12.118
Abstract

When comprehending a film, viewers rapidly construct a working memory representation of the narrative called an event model. These models encode the story's location first (e.g., kitchen vs. park), followed by the character's action (e.g., cooking vs. washing dishes) (Larson, Hendry, & Loschky, 2012). This time course for scene and action categorization is also supported by recent research showing that action recognition is better when actions are embedded in real scenes than when they are presented on a gray background. However, this benefit was not present at early processing times (< 50 ms SOA) (Larson et al., 2013), which suggests that scene and action recognition are functionally isolated processes at early processing times. Yet this conclusion may be an artifact of the design used: actions from the same scene category were presented in blocks, allowing participants in the gray background condition to predict the upcoming scene category without relying on the scene's perceptual information. If so, then presenting actions in a random sequence should eliminate this advantage. Participants were assigned to one of three viewing conditions: actions were presented either in their original scene background, on a neutral gray background, or on a texture background generated from the original scene (Portilla & Simoncelli, 2000). Visual masking was used to control processing time, which varied from 24 to 365 ms SOA. Afterwards, a valid or invalid action-category post-cue was presented, requiring participants to make a Yes/No response. The results show no difference between the original and gray background conditions at early processing times (< 50 ms SOA), but both conditions were better than the texture background. After 50 ms SOA, performance for the original background was greater than for the gray and texture conditions. The data indicate that sufficient scene categorization processing (~50 ms SOA) is required before scene information can inform action categorization.

Meeting abstract presented at VSS 2015
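
As a concrete illustration of the paradigm described in the abstract, the sketch below lays out the trial structure in Python: an action on its assigned background is masked at a variable SOA and followed by a valid or invalid action-category post-cue requiring a Yes/No response. This is not the authors' code; the intermediate SOA values, condition labels, and the placeholder response logic are assumptions for illustration only.

```python
# Hypothetical sketch of the masking paradigm described in the abstract (not the authors' code).
# Intermediate SOA values and the placeholder response are assumptions for illustration.
import itertools
import random

BACKGROUNDS = ["original_scene", "gray", "texture"]   # between-subjects viewing conditions
SOAS_MS = [24, 47, 82, 153, 365]                       # example masking SOAs spanning 24-365 ms
CUE_VALIDITY = ["valid", "invalid"]                    # post-cue matches the presented action or not

def build_trials(background, n_repeats=2, seed=0):
    """Cross SOA and cue validity, then shuffle so the upcoming scene/action is unpredictable."""
    trials = [{"background": background, "soa_ms": soa, "cue": cue}
              for soa, cue in itertools.product(SOAS_MS, CUE_VALIDITY)] * n_repeats
    random.Random(seed).shuffle(trials)
    return trials

def run_trial(trial):
    """One masked-action trial: action on background -> mask at SOA -> post-cue -> Yes/No."""
    # 1. Present the action on the assigned background for trial["soa_ms"] ms.
    # 2. Present a visual mask to cut off further stimulus processing.
    # 3. Present a valid or invalid action-category post-cue (trial["cue"]).
    # 4. Record the Yes/No response; "yes" is correct only when the cue is valid.
    response = "yes"  # placeholder response; a real experiment would collect a key press here
    return response == ("yes" if trial["cue"] == "valid" else "no")

if __name__ == "__main__":
    for background in BACKGROUNDS:
        trials = build_trials(background)
        accuracy = sum(run_trial(t) for t in trials) / len(trials)
        print(f"{background}: {len(trials)} trials, placeholder accuracy = {accuracy:.2f}")
```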
