Abstract
Humans can recover the three-dimensional (3D) structure of a rotating object from its projected two-dimensional (2D) motion field, a phenomenon known as structure from motion (SFM). Current models of SFM perception are limited to objects rotating about a frontoparallel axis. However, as our recent psychophysical studies showed, frontoparallel rotation axes are not representative of the general case. Here we present the first model to address SFM perception for the general case of rotation about an arbitrary axis. The SFM computation is cast as a two-stage process. In the first stage, the structure perpendicular to the rotation axis is computed from the components of the retinal speeds perpendicular to that axis. In the second stage, the depth structure recovered in the first stage is corrected for the slant of the rotation axis. This computation yields an object shape that is invariant with respect to the observer's viewpoint. The model provides quantitative predictions that agree well with current psychophysical data for both frontoparallel and non-frontoparallel rotations, and it challenges previous claims about depth-order violations and inconsistencies in the recovered object structure.
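The two-stage computation outlined above can be illustrated with a toy sketch. This is not the paper's model: it assumes orthographic projection, a known angular speed, and a known axis slant, and the stage-2 correction here is a simple illustrative 1/cos(slant) rescaling rather than the paper's actual formula. All function and variable names are hypothetical.

```python
import numpy as np

def sfm_two_stage(velocities, axis_dir, omega, slant):
    """Toy two-stage SFM sketch (orthographic projection, all inputs assumed known).

    velocities : (N, 2) image-plane velocities of tracked points
    axis_dir   : 2-vector, image-plane projection of the rotation axis
    omega      : angular speed in rad/s (assumed known)
    slant      : slant of the rotation axis away from frontoparallel, in rad
    """
    axis = np.asarray(axis_dir, dtype=float)
    axis /= np.linalg.norm(axis)
    # Image-plane direction perpendicular to the projected rotation axis.
    perp = np.array([axis[1], -axis[0]])
    # Stage 1: relative depth from the velocity component perpendicular to
    # the rotation axis (for frontoparallel rotation, v_perp = omega * z).
    v_perp = velocities @ perp
    depth_stage1 = v_perp / omega
    # Stage 2: correct the stage-1 depth for the axis slant
    # (illustrative 1/cos(slant) scaling, standing in for the model's correction).
    return depth_stage1 / np.cos(slant)

# Usage: points rotating about a frontoparallel vertical axis (slant = 0),
# where the horizontal image speed of each point is omega * z.
omega = 0.5
z_true = np.array([1.0, -2.0, 0.5])
vel = np.stack([omega * z_true, np.zeros(3)], axis=1)
print(sfm_two_stage(vel, (0.0, 1.0), omega, 0.0))
```

Note the familiar sign/reflection ambiguity of SFM: flipping the sign of `perp` (or of `omega`) inverts the recovered depths, so only relative depth structure up to reflection is meaningful here.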