Vision Sciences Society Annual Meeting Abstract | September 2011
Mobile robot vision navigation and obstacle avoidance based on gist and saliency algorithms
Author Affiliations
  • Chin-Kai Chang
    Computer Science, University of Southern California, USA
  • Christian Siagian
    Biology, California Institute of Technology, USA
  • Laurent Itti
    Computer Science, University of Southern California, USA
Journal of Vision September 2011, Vol. 11, 927. https://doi.org/10.1167/11.11.927
Abstract

Two of the important capabilities needed for scene understanding are extracting the gist of a scene and identifying salient regions in an image. Here we present a robotic vision system that uses these two biologically inspired scene-understanding models to interpret its surroundings from an image, enabling the robot to localize and navigate in its environment. For localization, gist, which captures the holistic characteristics and layout of an image, coarsely localizes the robot to a general vicinity. Saliency, which emulates the visual attention of primates, then refines this estimate by recognizing the conspicuous regions detected in the image.
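
To make the two-stage localization concrete, here is a minimal Python sketch; it is not the authors' implementation. It assumes precomputed gist vectors for each map segment and a database of salient-landmark descriptors tagged with map locations, and all function and field names are hypothetical.

```python
import numpy as np

def coarse_localize(gist_vec, segment_gists):
    """Stage 1 (gist): match the holistic gist vector against each
    map segment's stored gist to find the general vicinity."""
    dists = {seg: np.linalg.norm(gist_vec - g)
             for seg, g in segment_gists.items()}
    return min(dists, key=dists.get)

def refine_localize(salient_descriptors, landmark_db, segment):
    """Stage 2 (saliency): recognize the detected conspicuous regions
    against landmarks stored for the coarse segment to refine the
    location estimate."""
    candidates = [lm for lm in landmark_db if lm["segment"] == segment]
    best, best_score = None, np.inf
    for region in salient_descriptors:
        for lm in candidates:
            score = np.linalg.norm(region - lm["descriptor"])
            if score < best_score:
                best, best_score = lm, score
    return best["location"] if best else None
```

The nearest-neighbor matching here stands in for whatever region-recognition method the full system uses; the point of the sketch is only the coarse-then-fine ordering of gist and saliency.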

For the vision navigation sub-system, we use gist features to identify the road region. The image is segmented into multiple regions, which are then classified using their gist features to find the most likely road region. By incorporating knowledge of the road geometry, the system is able to locate the center of the road as well as avoid obstacles. At the same time, we also use the recognized salient regions to prime the likely location of the road in the image. Furthermore, these regions provide high-level navigation parameters such as the distance to the junction and the overall heading of the road (Chang et al., 2010). The navigation system then uses the estimated road parameters in a visual feedback controller to direct the robot's heading toward a user-provided goal location; a simple sketch of this control loop follows.
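
The navigation loop can likewise be sketched in a few lines of Python. This is an illustrative approximation, not the published controller: the segmentation routine, gist extractor, and road classifier are assumed to exist, and the proportional gain is an arbitrary value.

```python
import numpy as np

K_P = 0.8  # proportional steering gain (illustrative, not from the paper)

def find_road_region(regions, gist_of, road_classifier):
    """Score each segmented image region by applying the road
    classifier to its gist features; return the most road-like one."""
    scores = [road_classifier(gist_of(r)) for r in regions]
    return regions[int(np.argmax(scores))]

def steering_command(road_region, image_width):
    """Visual feedback control: steer in proportion to the lateral
    offset between the detected road center and the image center."""
    error = (road_region["centroid_x"] - image_width / 2) / (image_width / 2)
    return -K_P * error  # negative sign steers back toward the road center
```

A proportional law on the lateral road-center offset is the simplest plausible choice for keeping the robot centered; the full system additionally uses the estimated road parameters (junction distance, road heading) to plan toward the goal.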

We test the vision localization and navigation system at four sites (one indoor and three outdoor environments) using our mobile robot, Beobot 2.0. The system is able to keep the robot in the center of the lane over a total route length of more than 138.27 m.

NSF, ARO, General Motors, and DARPA. 