Abstract
Our world is filled with complex objects. To make perceptual inferences, decisions, and action plans, humans must often judge the three-dimensional shape and orientation of the objects they encounter. Despite the importance of view-invariant object perception, the computations underlying the 3D rotational discriminability of objects remain unclear. Here, we aimed to devise a metric that predicts viewpoint similarity for rotated novel objects, based on the projected position shifts of surface points as the object rotates. We created six 3D Shepard-Metzler-like objects of varying complexity and rendered each from nineteen equally spaced viewpoints, rotated independently around the horizontal or vertical axis. For each object and viewpoint, participants were shown a standard pair of views of the object along with a test pair, and used a mouse to rotate one of the test views until the test pair appeared to be the same perceived rotational distance apart as the standard pair. If the reported distance between the test views was smaller than that of the standard pair, the viewpoint was taken to be more rotationally discriminable, and vice versa. Across n = 29 participants, we found substantial and consistent variation in perceived viewpoint similarity across object orientations. We then developed a metric that predicted these variations, based on the sum of the projected displacement vectors of visible surface points as the object incrementally rotated from one viewpoint to the next. This metric predicted human rotational discriminability for both horizontal and vertical rotations (R = 0.60). This suggests that, to judge viewpoint similarity, observers mentally rotate the object and estimate the projected position shifts of points in the imagined view relative to the seen view. The metric provides a computational correlate linking theories of viewpoint discrimination and mental rotation.
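As a rough formalization of the metric sketched above (the notation here is an illustrative assumption, not the paper's own definition): let $\mathbf{p}_i(\theta)$ denote the image-plane projection of surface point $i$ when the object is at orientation $\theta$, and let $V(\theta)$ denote the set of surface points visible at that orientation. One plausible reading of "the sum of the projected displacement vectors of visible surface points as the object incrementally rotates" from view $\theta_A$ to view $\theta_B$ is

$$
D(\theta_A, \theta_B) \;=\; \sum_{k=0}^{K-1} \; \sum_{i \in V(\theta_k)} \bigl\lVert \mathbf{p}_i(\theta_{k+1}) - \mathbf{p}_i(\theta_k) \bigr\rVert,
$$

where $\theta_A = \theta_0, \theta_1, \ldots, \theta_K = \theta_B$ are small intermediate rotation steps. Whether displacements are accumulated as magnitudes (as written here) or summed as vectors before taking a norm is left open by the abstract.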