September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Seeing the road in the blink of an eye - rapid perception of the driver's visual environment
Author Affiliations
  • Benjamin Wolfe
    CSAIL, Massachusetts Institute of Technology
    AgeLab, Massachusetts Institute of Technology
  • Lex Fridman
    AgeLab, Massachusetts Institute of Technology
  • Anna Kosovicheva
    Department of Psychology, Northeastern University
  • Bryan Reimer
    AgeLab, Massachusetts Institute of Technology
  • Ruth Rosenholtz
    Brain and Cognitive Sciences, Massachusetts Institute of Technology
Journal of Vision August 2017, Vol.17, 561. doi:10.1167/17.10.561
Abstract

Natural scenes can be perceived in considerable detail from brief stimulus durations. However, research on rapid scene perception has focused on static images rather than dynamic video of the natural world. The ability to perceive the world quickly is particularly important in the context of driving, since in 400 ms a car traveling 65 mph (105 km/h) moves 40 feet (12 meters). To assess how quickly subjects can get the gist of a road scene, we used a scene prediction task: subjects viewed a short clip, followed by two still images drawn from later in the same video, and judged which of the two stills came first. The clips ranged in length from 100 to 4000 ms. The still images were taken from at least 500 ms after the end of the clip and were separated from each other by 100 to 4000 ms. In addition, we performed a control experiment in which subjects made the same temporal-order judgment without first viewing any video. In this control task, subjects achieved 70% accuracy, which did not change as a function of the separation between the still images. When subjects were shown a short clip before making their judgment, accuracy improved modestly (to 80%). Moreover, even brief clips facilitated discrimination of the still images, with the effect most pronounced for widely separated stills, indicating that the dynamic information was useful even at short clip durations. While one would not want to rely on such brief views of the world for driving, these results indicate that information about dynamic scenes – their gist – is available on a time course similar to that of the gist of static scenes.
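The distance figure quoted above follows from a simple unit conversion; a minimal sketch in Python checking the arithmetic (illustrative only, not part of the study's methods):

```python
# Sanity-check of the quoted figure: at 65 mph, how far does a car
# travel in 400 ms? The abstract states roughly 40 feet (12 meters).
MPH_TO_MPS = 1609.344 / 3600  # metres per second per mph (1 mile = 1609.344 m)

def distance_travelled_m(speed_mph: float, duration_ms: float) -> float:
    """Distance in metres covered at speed_mph over duration_ms."""
    return speed_mph * MPH_TO_MPS * (duration_ms / 1000)

d = distance_travelled_m(65, 400)
print(round(d, 1))            # → 11.6 (metres)
print(round(d * 3.28084, 1))  # → 38.1 (feet)
```

The exact value (~11.6 m, ~38 ft) is slightly under the rounded 40 ft / 12 m cited in the abstract, consistent with the figures being approximate.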

Meeting abstract presented at VSS 2017