Chin-Kai Chang, Christian Siagian, Laurent Itti; Mobile Robot Vision Navigation Based on Road Segmentation and Boundary Extraction Algorithms. Journal of Vision 2012;12(9):200. doi: https://doi.org/10.1167/12.9.200.
To navigate safely in an unknown environment, a robot must be able to recognize the road and stay on it.
We present a monocular vision-based road recognition system that utilizes two opposing vision approaches based on region appearance similarity as well as road boundary detection. Our algorithm works in a general road setting and requires no training or camera calibration to maximize its adaptability to any environment.
The appearance similarity approach segments the image into multiple regions and selects the road area by comparing each region's appearance with prior knowledge of the current road, extracted from previous frames (Chang et al., VSS 2011).
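The region-selection step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes segmented regions are given as arrays of RGB pixels, and uses joint color histograms compared by histogram intersection as the appearance measure (both are assumptions; the paper does not specify the features).

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels (N x 3, values 0-255) into a normalized joint histogram."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())

def select_road_region(regions, road_model):
    """Pick the segmented region whose appearance best matches the prior
    road model (a histogram accumulated from previous frames)."""
    scores = [histogram_intersection(color_histogram(r), road_model)
              for r in regions]
    return int(np.argmax(scores)), scores
```

The road model would then be updated with the winning region's histogram, so the prior tracks gradual appearance changes along the route.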
On the other hand, the road boundary detector estimates the road structure by using the vanishing point in the image. To find the vanishing point, we first compute dyadic Gabor pyramids to generate edge maps, which are then used to vote for the most likely vanishing point. From there, we extend a set of rays, spaced every five degrees, and we choose the one most consistent with the underlying edge map orientation. In addition, we also take into account the color difference between the two sides of the rays, which would indicate a likely road boundary. We repeat the process to obtain the other side of the road.
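The voting step can be sketched as below. This is a simplified illustration under stated assumptions: edge pixels and their local orientations are taken as given (in the abstract they come from the Gabor pyramid), and candidate vanishing points lie on a small grid. Each edge pixel votes for the candidates whose direction agrees with its edge orientation.

```python
import numpy as np

def vote_vanishing_point(edge_pts, orientations, candidates, tol=0.1):
    """Accumulate votes for candidate vanishing points: an edge pixel at
    (x, y) with orientation theta votes for a candidate (cx, cy) when the
    direction from the pixel to the candidate agrees with theta within
    `tol` radians. Returns the winning candidate and the vote tallies."""
    votes = np.zeros(len(candidates))
    for (x, y), theta in zip(edge_pts, orientations):
        for i, (cx, cy) in enumerate(candidates):
            angle_to_c = np.arctan2(cy - y, cx - x)
            # Compare modulo pi: an edge orientation has no direction.
            diff = abs((angle_to_c - theta + np.pi / 2) % np.pi - np.pi / 2)
            if diff < tol:
                votes[i] += 1
    return candidates[int(np.argmax(votes))], votes
```

The subsequent ray-scoring step would extend rays every five degrees from the winning candidate and score each ray by edge-orientation consistency plus the color difference across it.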
We then fuse the region appearance and road boundary information using a Kalman filter to obtain a robust road estimate, which drives robot vision navigation by aligning the estimated road center with the robot's center.
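The fusion and alignment can be sketched with a one-dimensional Kalman filter over the lateral road-center position. This is a hedged illustration: the state, noise variances, and the reduction to a scalar filter are all assumptions, not the paper's actual filter design.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter fusing noisy road-center measurements
    (e.g. one from the appearance segmenter, one from the boundary
    detector) into a single estimate."""

    def __init__(self, x0=0.0, p0=1.0, process_var=0.01):
        self.x = x0          # estimated road center (normalized image coords)
        self.p = p0          # estimate variance
        self.q = process_var # how fast the road center may drift per step

    def update(self, z, r):
        """Fold in measurement z with variance r; return the new estimate."""
        self.p += self.q                 # predict: center drifts slowly
        k = self.p / (self.p + r)        # Kalman gain
        self.x += k * (z - self.x)       # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

def steering_error(road_center, robot_center):
    """Signed lateral offset; the controller steers to drive this to zero,
    aligning the estimated road center with the robot center."""
    return road_center - robot_center
```

Each frame, both road-center measurements are pushed through `update`, and the filtered estimate feeds the steering controller.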
We test the vision navigation system at four sites (one indoor and three outdoor environments) using our mobile robot, Beobot 2.0. The system navigates the robot through the entire route at each site, traversing a total of 606.69 m.
Meeting abstract presented at VSS 2012