William Hayward, Achille Pasqualotto; 2D images are not sufficient for testing 3D object recognition. Journal of Vision 2008;8(6):514. doi: 10.1167/8.6.514.
Most current models of object recognition assume that the initial input is two-dimensional (2D). We tested this assumption by displaying familiar objects in two conditions: mono, in which stimuli were displayed as flat 2D images, and stereo, in which objects were displayed with stereoscopic depth information. In a series of experiments, participants performed a sequential matching task in which an object was rotated by up to 180° between presentations. The pattern of viewpoint costs differed markedly between the two conditions. In the mono condition, performance costs due to rotation were highest at rotations of 60° or 120°, a finding attributed to these views having an outline shape that was maximally dissimilar from that at 0°. In the stereo condition, however, performance costs increased monotonically with rotation size, with the highest viewpoint costs at 180°. The only exception to this pattern came from an experiment in which the initial viewpoint showed the object with its axis of elongation running parallel to the viewing plane, that is, a side-on view. Here the mono and stereo depictions were very similar (as all components of the object were roughly the same distance from the viewer), and in both conditions recognition performance was better following rotations of 180° than following smaller rotations. These results suggest that 3D objects are encoded with a mixture of 2D and 3D information. As the features of an object vary more in depth, 3D cues appear to become more salient, and recognition performance may deviate from that predicted by 2D information alone. These results therefore pose a challenge for experiments that test theoretical models of 3D object recognition with 2D images; such images may provide an appropriate test of these models under only some viewing conditions.