James C. Christensen, James T. Todd; What image measures are best correlated with the discriminability of 3D objects?. Journal of Vision 2006;6(6):321. doi: https://doi.org/10.1167/6.6.321.
A sequential shape-matching task was performed to measure the relative discriminability of 3D objects. Each trial began with a brief presentation of a “sample” object, followed by a pattern mask, and then the image of a “test” object at the same or a different orientation in depth. Observers indicated whether the two depicted objects had the same shape by pressing an appropriate response key. For each standard object, eight possible shape differences were created by altering the relative length, curvature, or orientation of its component parts. Half of these shape changes involved metric properties of the objects, whereas the remaining changes involved properties that are viewpoint invariant. Several different measures were used to calculate the similarity between each pair of images. These included Euclidean distances defined with respect to pixel intensity values or the outputs of Gabor filters at multiple scales and orientations. These low-level measures were poor predictors of human discrimination, accounting for only 39% of the variance among different object pairs. Image similarity was also measured by the relative magnitude of the parameter change that had been used to produce each of the depicted shape differences. This latter measure was a much better predictor of human performance, accounting for over 78% of the variance. Viewpoint-invariant changes were easier to detect than metric changes when the sample and test objects were presented at different orientations in depth, but both types of change were equally detectable when the objects were presented at the same orientation.
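The abstract's analysis code is not published, but the two low-level image-similarity measures it names (Euclidean distance over raw pixel intensities, and Euclidean distance over the outputs of a multi-scale, multi-orientation Gabor filter bank) can be sketched in plain NumPy. The kernel size, scales, and orientation counts below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a cosine carrier under a Gaussian window."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def filter_response(image, kernel):
    """Valid-mode 2D correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def pixel_distance(a, b):
    """Euclidean distance between two images in raw pixel space."""
    return float(np.linalg.norm(a - b))

def gabor_distance(a, b, wavelengths=(4, 8), n_orientations=4, ksize=11):
    """Euclidean distance between concatenated Gabor-bank responses.
    Scales (wavelengths) and orientation count are illustrative choices."""
    features = []
    for img in (a, b):
        responses = []
        for wl in wavelengths:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                kern = gabor_kernel(ksize, wl, theta, sigma=wl / 2.0)
                responses.append(filter_response(img, kern).ravel())
        features.append(np.concatenate(responses))
    return float(np.linalg.norm(features[0] - features[1]))
```

Either distance is zero for identical images and grows with image dissimilarity; as the abstract reports, such measures track image differences rather than the perceived shape differences that drove human judgments.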