Benjamin Wolfe, Lex Fridman, Anna Kosovicheva, Bryan Reimer, Ruth Rosenholtz; Seeing the road in the blink of an eye - rapid perception of the driver's visual environment. Journal of Vision 2017;17(10):561. doi: 10.1167/17.10.561.
© ARVO (1962-2015); The Authors (2016-present)
Natural scenes can be perceived in considerable detail at short stimulus durations. However, research in this area has focused on static images rather than dynamic video of the natural world. The ability to perceive the world quickly is particularly important in the context of driving, since in 400 ms a car traveling at 65 mph (105 kph) moves 40 feet (12 meters). To assess how quickly subjects can get the gist of a road scene, we used a scene prediction task: subjects viewed a short clip, followed by two still images taken from the video after the clip ended, and were asked which of the two stills came first. The clips ranged in length from 100 to 4000 ms. The still images were taken from at least 500 ms after the end of the clip and were separated by 100 to 4000 ms. In addition, we performed a control experiment in which subjects discriminated which still frame came first without first viewing any video. In this control task, subjects achieved 70% accuracy, and accuracy did not change as a function of the separation between the still images. When subjects were shown a short clip prior to making their judgment, performance improved modestly (to 80%). However, a pattern emerged in which even brief clips facilitated discrimination of the still images, with the effect most pronounced for widely separated stills, indicating that the dynamic information was useful even at short clip durations. While one would not want to rely on such brief views of the world for driving, these results indicate that information about dynamic scenes, their gist, is available on a time course similar to that of the gist of static scenes.
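The speed-and-distance figure motivating the study can be sanity-checked with a few lines of arithmetic; the conversion constants below are standard values, not taken from the abstract, and the exact result (about 11.6 m, or roughly 38 ft) rounds to the abstract's quoted 12 meters / 40 feet:

```python
# Sanity check: how far does a car at 65 mph travel in 400 ms?
MPH_TO_MPS = 1609.344 / 3600   # meters per second per mile-per-hour
FT_PER_M = 1 / 0.3048          # feet per meter

speed_mps = 65 * MPH_TO_MPS    # ~29.06 m/s (~105 kph)
distance_m = speed_mps * 0.4   # distance covered in 400 ms
distance_ft = distance_m * FT_PER_M

print(f"{distance_m:.1f} m ({distance_ft:.1f} ft)")  # ~11.6 m (~38.1 ft)
```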
Meeting abstract presented at VSS 2017