Vision Sciences Society Annual Meeting Abstract  |   August 2010
Hardware and software computing architecture for robotics applications of neuroscience-inspired vision and navigation algorithms
Author Affiliations
  • Chin-Kai Chang
    iLab, Computer Science Department, University of Southern California
  • Christian Siagian
    iLab, Computer Science Department, University of Southern California
  • Laurent Itti
    iLab, Computer Science Department, University of Southern California
Journal of Vision August 2010, Vol.10, 1056. doi:10.1167/10.7.1056
Abstract

Biologically-inspired vision algorithms have thus far not been widely applied to real-time robotics because of their intensive computation requirements. We present a biologically-inspired visual navigation and localization system implemented in real time using a cloud computing framework. We create a visual computation architecture on a compact wheelchair-based mobile platform. Our work involves a new design of both cluster computer hardware and software for real-time vision. The vision hardware consists of two custom-built carrier boards that host eight computer modules (16 processor cores total) connected to a camera. For all the nodes to communicate with each other, we use the Internet Communications Engine (ICE) middleware, which allows us to share images and other intermediate information such as saliency maps (Itti & Koch 2001) and scene “gist” features (Siagian & Itti 2007). The gist features, which coarsely encode the layout of the scene, are used to quickly identify the general whereabouts of the robot in a map, while the more accurate but time-consuming salient landmark recognition is used to pinpoint its location to the coordinate level. Here we extend the system to navigate in its environment (indoors and outdoors) using these same features. That is, the robot must identify the direction of the road, use it to compute movement commands, and apply visual feedback control to ensure safe driving over time. We use four of the eight computer modules for localization (the salient landmark recognition system), while the remainder compute the navigation strategy. As a result, the overall system performs all of these computing tasks simultaneously, in real time, at 10 frames per second. In short, the new design and implementation of this highly capable vision platform allows us to apply computationally complex biologically-inspired vision algorithms on a mobile robot.
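The two-stage localization described above (cheap gist features to find the general map region, then expensive salient-landmark recognition to pinpoint coordinates) can be summarized as a short sketch. The code below is illustrative only: compute_gist, match_landmarks, and the region/landmark data structures are hypothetical placeholders, not the authors' implementation (their actual gist and saliency features follow Siagian & Itti 2007 and Itti & Koch 2001).

```python
# Hypothetical sketch of gist-coarse / landmark-fine localization.
import numpy as np

def compute_gist(frame: np.ndarray) -> np.ndarray:
    """Coarse scene-layout descriptor (placeholder: 4x4 grid of block means)."""
    h, w = frame.shape[:2]
    blocks = [frame[i * h // 4:(i + 1) * h // 4, j * w // 4:(j + 1) * w // 4].mean()
              for i in range(4) for j in range(4)]
    return np.asarray(blocks)

def match_landmarks(frame, landmarks):
    """Placeholder for the slow, accurate salient-landmark matcher."""
    return landmarks[0]["coords"] if landmarks else None

def localize(frame, region_gists, landmark_db):
    # Stage 1: gist match -> most likely map region (fast, approximate).
    gist = compute_gist(frame)
    region = min(region_gists, key=lambda r: np.linalg.norm(gist - region_gists[r]))
    # Stage 2: landmark recognition restricted to that region -> coordinates.
    coords = match_landmarks(frame, landmark_db.get(region, []))
    return region, coords

# Usage with toy data:
frame = np.zeros((240, 320), dtype=np.float32)
region_gists = {"hallway": np.zeros(16), "courtyard": np.ones(16)}
landmark_db = {"hallway": [{"coords": (3.0, 1.5)}], "courtyard": []}
print(localize(frame, region_gists, landmark_db))  # ('hallway', (3.0, 1.5))
```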
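The compute allocation is also concrete enough to sketch: eight modules share frames and intermediate maps over ICE, with four running localization and four running the navigation strategy, sustaining 10 frames per second overall. As a rough single-machine stand-in for that distributed setup (Python multiprocessing queues substituted for the ICE middleware; the worker bodies are placeholders, not the authors' algorithms):

```python
# Stand-in for the 4-localization / 4-navigation node split described above.
import multiprocessing as mp

def localization_worker(frames: mp.Queue, poses: mp.Queue):
    while True:
        frame_id, frame = frames.get()
        poses.put((frame_id, ("region", (0.0, 0.0))))  # placeholder pose estimate

def navigation_worker(frames: mp.Queue, commands: mp.Queue):
    while True:
        frame_id, frame = frames.get()
        commands.put((frame_id, (0.2, 0.0)))  # placeholder (speed, steering)

if __name__ == "__main__":
    loc_in, nav_in = mp.Queue(), mp.Queue()
    poses, commands = mp.Queue(), mp.Queue()
    workers = (
        [mp.Process(target=localization_worker, args=(loc_in, poses), daemon=True)
         for _ in range(4)] +
        [mp.Process(target=navigation_worker, args=(nav_in, commands), daemon=True)
         for _ in range(4)]
    )
    for w in workers:
        w.start()
    for frame_id in range(3):      # stand-in for the 10 fps camera stream
        frame = None               # a real frame would be a camera image
        loc_in.put((frame_id, frame))
        nav_in.put((frame_id, frame))
    print(poses.get(), commands.get())
```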

Chang, C.-K., Siagian, C., & Itti, L. (2010). Hardware and software computing architecture for robotics applications of neuroscience-inspired vision and navigation algorithms [Abstract]. Journal of Vision, 10(7):1056, 1056a, http://www.journalofvision.org/content/10/7/1056, doi:10.1167/10.7.1056.
Footnotes
Funding: NSF, ARO, General Motors, and DARPA.