Abstract
Human observers can detect animals within novel natural scenes with remarkable speed and accuracy. Recent studies found human response times as fast as 120 ms in a dual-presentation (2-AFC) paradigm (Kirchner & Thorpe, 2006). In most previous experiments, pairs of randomly chosen images were presented, frequently from very different contexts (e.g., a zebra in Africa vs. the New York skyline). Here, we tested the effect of context on performance by using a new, contiguous-context image set.
Each image contained a single animal surrounded by a large, animal-free image area. The images were positioned and cropped such that the animal could appear at one of eight evenly spaced positions on an imaginary circle (radius 10 deg of visual angle). In the first (8-way) experiment, all eight positions were used, whereas in the second (2-way) and third (2-AFC) experiments the animals appeared only at the two positions to the left and right of the screen center. In the third experiment, additional rectangular frames were used to mimic the conditions of earlier studies.
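To make the stimulus geometry concrete, the following Python sketch computes the candidate animal positions as offsets from the screen center (in degrees of visual angle). The starting angle and coordinate convention are assumptions; the abstract only specifies eight evenly spaced positions on a circle of 10 deg radius.

import math

RADIUS_DEG = 10.0  # radius of the imaginary circle, in degrees of visual angle

def candidate_positions(n_positions=8, radius=RADIUS_DEG, start_angle=0.0):
    """Return (x, y) offsets for n evenly spaced positions on a circle.

    start_angle is an assumed convention (0 deg = rightward, counterclockwise).
    """
    positions = []
    for i in range(n_positions):
        theta = math.radians(start_angle + i * 360.0 / n_positions)
        positions.append((radius * math.cos(theta), radius * math.sin(theta)))
    return positions

eight_way = candidate_positions(8)   # 8-way experiment: eight possible positions
two_way = candidate_positions(2)     # 2-way / 2-AFC: right (+10, 0) and left (-10, 0)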
Absolute hit ratios were on average slightly lower in the 8-way condition than in the other two (8-way: 81%, 2-way: 88%, 2-AFC: 87%), yet the range-normalized results show a slight performance advantage for the 8-way condition (8-way: 78%, 2-way: 75%, 2-AFC: 73%). Average latencies on successful trials were similar in all three conditions (8-way: 207 ms, 2-way: 198 ms, 2-AFC: 203 ms), indicating that the number of possible animal locations within the display does not affect decision latency.
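One plausible reading of the range normalization is a chance-corrected score that maps chance-level performance (1/8 for 8-way, 1/2 for 2-way and 2-AFC) to 0% and perfect performance to 100%. The sketch below illustrates this assumption and approximately reproduces the reported values; small deviations presumably reflect rounding of the raw hit ratios.

# Assumed chance correction: rescale the raw hit ratio so that chance level
# maps to 0 and perfect performance maps to 1.
def range_normalize(hit_ratio, chance):
    return (hit_ratio - chance) / (1.0 - chance)

conditions = {
    "8-way": (0.81, 1 / 8),  # eight possible animal positions
    "2-way": (0.88, 1 / 2),  # two possible positions
    "2-AFC": (0.87, 1 / 2),  # two-alternative forced choice
}

for name, (hit, chance) in conditions.items():
    print(f"{name}: {range_normalize(hit, chance):.0%}")
# Prints roughly 78%, 76%, 74%, close to the 78% / 75% / 73% reported above.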
These results illustrate that animal detection is fast and efficient even when the animals are embedded in their natural backgrounds and may appear at arbitrary locations within the image.
Supported by DFG grant TR 528 / 1–4 (J. Drewes and J. Trommershaeuser)