August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
The Time-Course of Scene and Action Categorization in Dynamic Videos
Author Affiliations
  • Adam Larson
    Psychology Department, University of Findlay
  • Hope Tebbe
    Psychology Department, University of Findlay
  • Lester Loschky
    Department of Psychological Sciences, Kansas State University
Journal of Vision August 2014, Vol. 14, 374.

Adam Larson, Hope Tebbe, Lester Loschky; The Time-Course of Scene and Action Categorization in Dynamic Videos. Journal of Vision 2014;14(10):374.


When watching movies, viewers' comprehension begins with the construction of a working memory representation called an event model (Zacks et al., 2007). Our previous research examined how these models are constructed by measuring the time-course of scene and action categorization (Larson, Hendry, & Loschky, 2012). The results supported a coarse-to-fine order of categorization: at early processing times, superordinate scene categorization (e.g., Indoor vs. Outdoor) was better than basic-level scene categorization (e.g., Yard vs. Park), which in turn was better than basic-level action categorization (e.g., Raking vs. Mowing), suggesting that event model construction begins with the superordinate scene category. However, action categorization performance could have been reduced by the lack of biological motion information in static scene images. Thus, the current study examined the time-course of scene and action categorization in dynamic scene videos. Hypotheses: The onset of motion often captures attention (Abrams & Christ, 2003), which, together with biological motion information, could produce an early advantage for action categorization. Conversely, categorization might proceed in a coarse-to-fine fashion, consistent with our previous research (Loschky & Larson, 2010). We tested these competing hypotheses by randomly assigning participants to one of three categorization conditions (superordinate scene, basic scene, or basic action). Eye tracking and visual masking were used to manipulate processing time, which varied from sub-fixation durations (33 ms and 200 ms SOAs) to multiple fixations (1, 2, 3, and 4 fixations). A valid or invalid post-cue (a category label) was then presented, requiring a "Yes" or "No" response. The results showed a coarse-to-fine categorization order: at sub-fixation processing times, performance was best for superordinate scene categorization and worst for basic-level action categorization. However, all categorization tasks reached ceiling performance within one fixation. These results are consistent with the hypothesis that the superordinate scene category is used first to construct an event model.

Meeting abstract presented at VSS 2014
