August 2012, Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Mobile Robot Vision Navigation Based on Road Segmentation and Boundary Extraction Algorithms
Author Affiliations
  • Chin-Kai Chang
    Computer Science, University of Southern California
  • Christian Siagian
    Division of Biology, California Institute of Technology
  • Laurent Itti
    Computer Science, University of Southern California
Journal of Vision August 2012, Vol. 12, 200.

      Chin-Kai Chang, Christian Siagian, Laurent Itti; Mobile Robot Vision Navigation Based on Road Segmentation and Boundary Extraction Algorithms. Journal of Vision 2012;12(9):200.

To navigate safely in an unknown environment, a mobile robot must be able to recognize the road and traverse it.

We present a monocular vision-based road recognition system that combines two complementary vision approaches: region appearance similarity and road boundary detection. Our algorithm works in general road settings and requires no training or camera calibration, maximizing its adaptability to new environments.

The appearance similarity approach segments the image into multiple regions and selects the road area by comparing each region's appearance with prior knowledge of the current road, extracted from previous frames (Chang et al., VSS 2011).
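The abstract does not specify the appearance model, so the following is only a minimal sketch of the idea: represent each segmented region by a quantized color histogram, and keep regions whose histogram-intersection similarity with a road prior (accumulated from previous frames) exceeds a threshold. The function names, the histogram representation, and the threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantized RGB histogram of an (N, 3) uint pixel array, normalized to sum to 1."""
    idx = (pixels // (256 // bins)).astype(int)              # per-channel bin index
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def select_road_regions(regions, road_prior, threshold=0.5):
    """Keep regions whose appearance matches the road prior (hypothetical sketch).

    regions: dict region_id -> (N, 3) array of RGB pixels in that region
    road_prior: road-appearance histogram built from previous frames
    Similarity is histogram intersection (1.0 = identical distributions).
    """
    selected = []
    for rid, pixels in regions.items():
        sim = np.minimum(color_histogram(pixels), road_prior).sum()
        if sim >= threshold:
            selected.append(rid)
    return selected
```

In a tracking loop, the prior would be updated from the pixels of the regions selected in each frame, so the model adapts as road appearance drifts.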

The road boundary detector, in contrast, estimates the road structure from the vanishing point in the image. To find the vanishing point, we first compute dyadic Gabor pyramids to generate edge maps, which then vote for the most likely vanishing point. From there, we extend a set of rays, spaced every five degrees, and choose the ray most consistent with the underlying edge-map orientations. In addition, we take into account the color difference between the two sides of each ray, which indicates a likely road boundary. We repeat the process to obtain the other side of the road.
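The core of the voting step can be sketched as follows: each edge pixel, with the orientation reported by the Gabor filters, supports the candidate vanishing points that lie roughly along the line through that pixel at that orientation. This is a simplified stand-in for the pyramid-based voting described above; the candidate grid, tolerance, and function name are assumptions for illustration.

```python
import numpy as np

def vote_vanishing_point(edge_pts, edge_angles, candidates, tol=np.radians(5)):
    """Pick the candidate vanishing point most supported by edge orientations.

    edge_pts: (N, 2) array of (x, y) edge-pixel locations
    edge_angles: (N,) edge orientations in radians (e.g., from Gabor responses)
    candidates: (M, 2) array of candidate vanishing-point positions
    An edge votes for a candidate when the direction from the edge pixel to
    the candidate agrees with the edge's orientation within `tol`.
    """
    votes = np.zeros(len(candidates))
    for (x, y), ang in zip(edge_pts, edge_angles):
        # direction from this edge pixel to every candidate
        to_cand = np.arctan2(candidates[:, 1] - y, candidates[:, 0] - x)
        # wrap the angular difference: edge lines are undirected (period pi)
        diff = np.abs((to_cand - ang + np.pi / 2) % np.pi - np.pi / 2)
        votes += diff < tol
    return candidates[np.argmax(votes)]
```

The subsequent ray search would then sweep rays from the winning point (every five degrees, per the abstract) and score each by edge-orientation consistency plus the color contrast across it.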

We then fuse the region appearance and road boundary information using a Kalman filter to obtain a robust road estimate, which drives robot navigation by aligning the estimated road center with the robot's center.
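The fusion step can be illustrated with a one-dimensional Kalman filter over the lateral road-center offset, updated sequentially with one measurement from each detector. The state dimension, noise values, and function name here are assumptions for the sketch; the abstract does not detail the filter design.

```python
def kalman_fuse(x, P, z_appearance, z_boundary, q=0.01, r_app=4.0, r_bnd=1.0):
    """One 1-D Kalman step fusing two road-center measurements (illustrative).

    x, P: current estimate of the road-center offset and its variance
    z_appearance, z_boundary: offsets reported by the two road detectors
    q: process-noise variance; r_app, r_bnd: measurement-noise variances
    """
    # predict: road center assumed locally constant, so only uncertainty grows
    P = P + q
    # update sequentially with each detector's measurement
    for z, r in ((z_appearance, r_app), (z_boundary, r_bnd)):
        k = P / (P + r)          # Kalman gain
        x = x + k * (z - x)      # correct the estimate toward the measurement
        P = (1.0 - k) * P        # shrink the uncertainty
    return x, P
```

Giving the boundary detector a smaller measurement variance (as in the defaults above) weights it more heavily; in practice the variances would be tuned per detector, and the resulting offset feeds a steering controller that centers the robot on the road.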

We test the vision navigation system at four sites (one indoor and three outdoor environments) using our mobile robot, Beobot 2.0. The system navigated the robot through the entire route at each site, for a total traveled distance of 606.69 m.

Meeting abstract presented at VSS 2012

