Published online by Cambridge University Press: 09 March 2009
A new vision system architecture has been developed to support the visual navigation of an autonomous mobile robot. The robot is primarily intended for urban park inspection, so it must be able to move in a complex, unstructured environment. The system consists of several modules, each handling a specific task involved in autonomous navigation. Task coordination centres on a module called the supervisor, which triggers each module at the time appropriate to the robot's current situation. Most of the processing time is spent in the scene exploration module, which uses the Hough transform to extract the dominant straight-line features. This module operates in two modes: an initialisation mode, which processes the first acquired image in order to initiate navigation, and a continuous following mode, which processes subsequent images taken at the end of the blind distance. To reduce reliance on visual data, a detailed map of the environment has been established, and an algorithm predicts the expected scene from the robot position provided by the localization system. The predicted scene is used by the knowledge base to validate the detected objects. The knowledge base combines the acquired and predicted data to construct a scene model, which is the central element of the vision system.