Published online by Cambridge University Press: 01 June 2017
An accurate 3D model of an outdoor scene can be used in many different scenarios of precision agriculture, for instance to analyse the silhouette of a tree crown canopy for precision spraying, to count fruit for yield prediction, or simply to navigate a vehicle between plant rows. Instead of using stereovision, which is limited by varying light intensities, or an expensive multi-channel 3D range finder (LIDAR scanner), which is limited by its number of channels, this work investigates the possibility of using two single-channel LIDAR scanners mounted on a small robot to allow real-time 3D reconstruction of the robot's environment. The approach used readings captured by two LIDAR scanners, a SICK LMS111 and a SICK TiM310, the first scanning horizontally and the second vertically. In order to correctly map the readings from the vertical sensor into 3D space, a custom SLAM algorithm based on image registration techniques was used to estimate the successive positions of the robot. The approach was tested in both indoor and outdoor environments, demonstrating its accuracy with an error of 0.02 m ± 0.02 m in the vertical plane and −0.01 m ± 0.13 m in the horizontal plane.
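To illustrate the mapping step described above, the sketch below shows how points from a vertical 2D scan could be projected into world coordinates using the robot pose estimated by the horizontal-scan SLAM. The function name, frame conventions, and sensor mounting offset are illustrative assumptions only; the abstract does not specify the actual transformation used in the paper.

```python
import numpy as np

def vertical_scan_to_world(ranges, angles, robot_pose, sensor_offset=(0.0, 0.0, 0.3)):
    """Project one vertical LIDAR scan into world coordinates (sketch).

    ranges        : (N,) measured distances from the vertical scanner [m]
    angles        : (N,) beam elevation angles in the vertical scan plane [rad]
    robot_pose    : (x, y, theta) estimated by the horizontal-scanner SLAM [m, m, rad]
    sensor_offset : (dx, dy, dz) mounting offset of the vertical scanner in the
                    robot frame -- hypothetical values, not from the paper
    """
    x, y, theta = robot_pose
    dx, dy, dz = sensor_offset

    # Decompose each beam into a forward distance and a height,
    # assuming the scan plane contains the robot's heading direction.
    forward = ranges * np.cos(angles)
    height = ranges * np.sin(angles)

    # Sensor-frame coordinates (x forward, y lateral, z up); lateral is zero
    # under the assumed mounting, plus the fixed sensor offset.
    xs = forward + dx
    ys = np.zeros_like(forward) + dy
    zs = height + dz

    # Rotate by the robot heading and translate by its SLAM position.
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    world_x = x + cos_t * xs - sin_t * ys
    world_y = y + sin_t * xs + cos_t * ys
    world_z = zs
    return np.column_stack((world_x, world_y, world_z))


# Example: one scan taken while the robot pose estimate is (1.0 m, 2.0 m, 30 degrees).
angles = np.linspace(-np.pi / 4, np.pi / 2, 5)
ranges = np.full_like(angles, 3.0)
points = vertical_scan_to_world(ranges, angles, (1.0, 2.0, np.deg2rad(30)))
print(points)
```

In this sketch, each new pose produced by the SLAM step places the next vertical scan line into the growing 3D point cloud, which is how a 2D scanner swept along the robot's path can reconstruct the surrounding scene.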