Vision Sciences Society Annual Meeting Abstract | July 2013
Oddness at a glance: Unraveling the time course of typical and atypical scene perception
Author Affiliations
  • Abraham Botros
    Computer Science Department, Stanford University
  • Michelle Greene
    Computer Science Department, Stanford University
  • Li Fei-Fei
    Computer Science Department, Stanford University
Journal of Vision July 2013, Vol. 13, 1048. https://doi.org/10.1167/13.9.1048
Abstract

Our ability to quickly recognize the "gist" of a scene is nothing short of remarkable. However, little is known about the content of the mental representations built during brief glances. To what extent does scene gist perception rely on prior experience and expectations? In the face of atypical input, is additional processing necessary for recognition? Here, we examined the perceptual time course of both typical and atypical scene stimuli. We used a carefully selected collection of real-world scene images, consisting of 50 "odd" and 50 "doppelganger" images. "Odd" images contained improbable real-world situations, such as divers signing papers underwater or a wild animal on a couch. "Doppelgangers" were visually similar to their "odd" counterparts except for the root "oddness." We assessed scene perception using free responses coupled with variable presentation times. Ten participants viewed odd and doppelganger images at counterbalanced, masked presentation times (20 ms, 40 ms, 80 ms, 150 ms, and 500 ms); participants were instructed to type descriptions of what they saw in as much detail as possible. Responses were analyzed using a concept tree in an Amazon Mechanical Turk (AMT) interface: AMT workers evaluated each description's general correctness and detail, the number and specificity of objects and scene details mentioned, and the demonstrated understanding of the oddness in the picture. All of these measures increased steadily with presentation time, and all showed poorer performance for odd images than for doppelgangers. In particular, participants required between 150 ms and 500 ms to correctly describe odd images. At shorter presentation times, participants showed a marked tendency to rationalize impoverished visual input into sensible explanations more akin to normal visual experience. Overall, these results point to top-down constraints imposed on early sensory input to maximize hypothesis likelihood, especially for atypical real-world scenes.
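As a minimal sketch of the design described above (not the authors' code: the file-naming scheme, the Latin-square-style rotation, and all identifiers are assumptions for illustration), the Python below shows one way to counterbalance masked presentation times across the 50 odd/doppelganger image pairs and ten participants:

```python
# Illustrative sketch of the counterbalanced trial design; all names are
# hypothetical, not taken from the authors' actual experiment code.
import random

DURATIONS_MS = [20, 40, 80, 150, 500]  # presentation times from the abstract

def build_trial_list(participant_idx, n_pairs=50, seed=0):
    """Build one participant's trials: 50 odd + 50 doppelganger images,
    each shown once at a single masked duration. The duration assignment
    rotates with the participant index (Latin-square style), so across
    the ten participants each image appears at every duration twice."""
    rng = random.Random(seed + participant_idx)
    trials = []
    for pair in range(n_pairs):
        for cond_idx, condition in enumerate(("odd", "doppelganger")):
            # Rotate the duration assignment by participant to counterbalance.
            duration = DURATIONS_MS[
                (pair + cond_idx + participant_idx) % len(DURATIONS_MS)
            ]
            trials.append({
                "image": f"{condition}_{pair:02d}",  # hypothetical file naming
                "condition": condition,
                "duration_ms": duration,
                "masked": True,  # each presentation was followed by a mask
            })
    rng.shuffle(trials)  # randomize presentation order per participant
    return trials

# Example: the first two trials for the first participant.
print(build_trial_list(participant_idx=0)[:2])
```

Rotating the assignment rather than fully crossing images with durations keeps each viewing a first exposure (no image is seen twice by the same participant) while still balancing durations across the group.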

Meeting abstract presented at VSS 2013
