Robotic planetary surface exploration is a challenging endeavor, with critical safety requirements and severe communication constraints. Autonomous navigation is one of the most crucial yet riskiest aspects of these operations. A certain level of local autonomy is therefore an essential feature for onboard robots, so that they can make their own decisions independently of ground control, reducing operational costs and maximizing the scientific return of the mission. In addition, existing tools to support research in this domain are usually proprietary to space agencies and out of reach of most researchers. This paper presents a framework developed to support research in this field, a modular onboard software architecture design, and a series of algorithms that implement a vision-based autonomous navigation approach for robotic space exploration. The framework enables analysis of algorithm performance, functional validation of navigation approaches and autonomy strategies, data monitoring, and the creation of simulation models that replicate the vehicle, sensors, terrain, and operational conditions. The framework and algorithms are partly supported by open-source packages and tools. A set of experiments and field tests with a physical robot and hardware are also described, detailing results and algorithm processing times, which increase by one order of magnitude when executed on resource-constrained, space-certified-like hardware rather than on general-purpose hardware.