Abstract
Biologically-inspired vision algorithms have thus far not been widely applied to real-time robotics because of their intensive computational requirements. We present a biologically-inspired visual navigation and localization system implemented in real time using a cloud computing framework. We build a visual computation architecture on a compact wheelchair-based mobile platform. Our work involves a new design of both cluster computing hardware and software for real-time vision. The vision hardware consists of two custom-built carrier boards that host eight computer modules (16 processor cores total) connected to a camera. For all the nodes to communicate with each other, we use the ICE (Internet Communications Engine) middleware, which allows us to share images and other intermediate information such as saliency maps (Itti & Koch 2001) and scene “gist” features (Siagian & Itti 2007). The gist features, which coarsely encode the layout of the scene, are used to quickly identify the general whereabouts of the robot in a map, while the more accurate but time-consuming salient landmark recognition is used to pinpoint its location at the coordinate level. Here we extend the system to also navigate its environment (indoors and outdoors) using these same features. That is, the robot has to identify the direction of the road, use it to compute movement commands, and perform visual feedback control to ensure safe driving over time. We utilize four of the eight computers for localization (the salient landmark recognition system), while the remainder compute the navigation strategy. As a result, the overall system performs all of these computing tasks simultaneously in real time at 10 frames per second. In short, with the new design and implementation of this highly capable vision platform, we are able to apply computationally complex biologically-inspired vision algorithms on a mobile robot.
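To illustrate the coarse-to-fine localization flow described above, the sketch below shows how gist features might first select a map segment before the more expensive salient-landmark matching is run within that segment. This is only a minimal sketch under our own assumptions: the function names, the nearest-prototype gist classifier, the cosine-similarity landmark matcher, and the 0.8 acceptance threshold are hypothetical and are not taken from the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of two-stage localization:
# cheap gist classification narrows the search to a map segment, then the
# slower landmark matching refines the estimate to a coordinate.
import numpy as np

def classify_segment(gist, segment_prototypes):
    """Coarse step: pick the map segment whose stored mean gist vector
    is closest to the current frame's gist features (hypothetical scheme)."""
    return min(segment_prototypes,
               key=lambda seg: np.linalg.norm(gist - segment_prototypes[seg]))

def match_landmarks(frame_landmarks, landmark_db, segment):
    """Fine step: compare salient landmarks found in the frame against the
    database entries stored for the candidate segment; return the coordinate
    of the best match, or None if no match is confident enough."""
    best_coord, best_score = None, 0.0
    for descriptor, coord in landmark_db.get(segment, []):
        for obs in frame_landmarks:
            # Cosine similarity used here as a stand-in matcher (assumption).
            score = float(obs @ descriptor /
                          (np.linalg.norm(obs) * np.linalg.norm(descriptor) + 1e-9))
            if score > best_score:
                best_coord, best_score = coord, score
    return best_coord if best_score > 0.8 else None  # 0.8 threshold is assumed

def localize(gist, frame_landmarks, segment_prototypes, landmark_db):
    """Run gist classification every frame; run landmark matching only
    within the selected segment to keep the expensive step tractable."""
    segment = classify_segment(gist, segment_prototypes)
    coord = match_landmarks(frame_landmarks, landmark_db, segment)
    return segment, coord
```

In the system described above, the fine-grained matching would run on the four nodes dedicated to localization, while the gist-based segment estimate provides a fast prior on every frame; the sketch collapses that distribution onto a single process for clarity.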
This work was supported by NSF, ARO, General Motors, and DARPA.