Tadamasa Sawada, Yunfeng Li, Zygmunt Pizlo; Solving the correspondence problem between two views using a priori constraints. Journal of Vision 2011;11(11):343. https://doi.org/10.1167/11.11.343.
When a 3D scene is viewed with two eyes (or with one eye before and after a head translation), 3D features in the scene are projected to different positions on the two retinas. This difference is called binocular or motion parallax. Before the parallax can be used to solve the 3D shape and scene recovery problems, the correspondence between the projections of the same 3D points in the two retinal images must be established. Everyday experience suggests that the correspondence problem is almost always solved by our visual system quickly and correctly. This observation contrasts with the computational difficulty of the problem: real 3D scenes contain very many features, and a brute-force search for correct correspondences in the retinal images leads to a combinatorial explosion. As in every ill-posed inverse problem, a priori constraints are required. We propose a computational model that solves the correspondence problem using the constraint that the objects in the scene, as well as the observer (a robot), rest on a common horizontal floor. The robot acquires images from its two cameras, whose lines of sight are parallel. The robot knows its own height, and the orientation of its cameras relative to gravity is measured by an inclinometer (the inclinometer's accuracy is similar to that of the human vestibular system). We show that the left image of the floor is a shear transformation of the right image. This makes the correspondence problem for the floor texture trivial. The same transformation applies to the bottom parts of the objects resting on the floor. Finally, the correspondences of the remaining features of the objects are established by proceeding from the bottom parts of the objects towards their tops. We will show results of the robot's performance with pairs of images of real scenes.
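The shear relation for the floor can be illustrated with a minimal numerical sketch. This is not the authors' implementation; the camera parameters (focal length `f`, camera height `h`, baseline `b`) are assumed values. With two pinhole cameras at height h above a horizontal floor, lines of sight parallel and horizontal, a floor point at depth Z projects to image height y = f·h/Z and has disparity d = f·b/Z = (b/h)·y, so the horizontal offset between the two images grows linearly with image height, which is exactly a shear:

```python
import numpy as np

# Assumed camera parameters (illustrative only, not from the paper).
f = 500.0   # focal length, in pixels
h = 1.2     # camera height above the floor, in meters
b = 0.1     # baseline between the two cameras, in meters

# Random points on the floor plane in front of the cameras.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 20)   # lateral position, meters
Z = rng.uniform(1.0, 5.0, 20)    # depth, meters

def project(Xc, Z):
    """Pinhole projection; image y increases downward toward the floor."""
    return f * Xc / Z, f * h / Z

# Left camera sits at world x = -b/2, right camera at x = +b/2,
# so a world point has camera-frame x of (X + b/2) and (X - b/2).
xl, yl = project(X + b / 2, Z)
xr, yr = project(X - b / 2, Z)

# The left floor image is a horizontal shear of the right image:
#   x_left = x_right + (b / h) * y,   y_left = y_right.
assert np.allclose(xl, xr + (b / h) * yr)
assert np.allclose(yl, yr)
print("floor shear verified: x_left = x_right + (b/h) * y")
```

Because the shear coefficient b/h depends only on the known baseline and camera height, matching floor texture reduces to applying one global transformation rather than searching per feature.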