Published online by Cambridge University Press: 01 March 1997
A laser range finder mounted on a site (elevation) and azimuth turret is used as a 3D range camera. Combined with a video camera, it forms an original stereovision system. Both images share the same internal structure, but the resolution of the 3D image remains low. Leaving aside the measurement acquisition rate, spatial resolution is limited by the accuracy of the beam-deflection device and by the laser footprint. Because the beam impact is not a point, each range measurement spatially integrates depth over the footprint.
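As a minimal illustration of the 3D range camera principle, a single measurement taken at given turret angles can be mapped to a Cartesian point by an ideal spherical model, assuming the beam origin coincides with the turret rotation centre; the function name and axis conventions below are illustrative, not those of the paper.

    import numpy as np

    def turret_point(site, azimuth, rng):
        """Map one range reading taken at turret angles (site, azimuth),
        in radians, to a Cartesian 3D point (illustrative axis convention)."""
        x = rng * np.cos(site) * np.cos(azimuth)
        y = rng * np.cos(site) * np.sin(azimuth)
        z = rng * np.sin(site)
        return np.array([x, y, z])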
A neural-network-based solution is reported to correct the averaging that the beam footprint causes at depth discontinuities.
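The paper's network architecture and training data are not reproduced here. Purely as a sketch of the idea, a small regressor can be trained to restore a sharp depth step from a window of footprint-averaged range samples; the window size, network size and synthetic blur model below are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    WINDOW = 5              # consecutive range samples fed to the network (assumption)
    HALF = WINDOW // 2

    def make_windows(profile):
        """Overlapping windows of WINDOW consecutive samples, one per interior sample."""
        return np.array([profile[i - HALF:i + HALF + 1]
                         for i in range(HALF, len(profile) - HALF)])

    def synthetic_step(n=64, edge=32, near=1.0, far=2.0, blur=1.5):
        """A depth step blurred by a Gaussian kernel emulating footprint averaging."""
        true = np.where(np.arange(n) < edge, near, far).astype(float)
        k = np.exp(-0.5 * (np.arange(-4, 5) / blur) ** 2)
        padded = np.pad(true, 4, mode="edge")
        measured = np.convolve(padded, k / k.sum(), mode="same")[4:-4]
        return measured, true

    # Train on synthetic blurred edges, then sharpen an unseen blurred profile.
    X, y = [], []
    for edge in range(10, 54):
        measured, true = synthetic_step(edge=edge)
        X.append(make_windows(measured))
        y.append(true[HALF:-HALF])
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(np.vstack(X), np.concatenate(y))

    blurred, _ = synthetic_step(edge=27)
    corrected = net.predict(make_windows(blurred))   # sharper per-sample depth estimate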
The use of such a multisensor system requires its calibration. Since camera calibration is a well-known problem, the paper focuses on models and calibration methods for the range finder. Experimental results illustrate the quality of the calibration step in terms of accuracy and stability.
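The range finder models and calibration procedure proposed in the paper are not detailed in this abstract. As an illustration only, one common approach is to estimate a few intrinsic parameters (a range offset and scale, angle offsets) by non-linear least squares against targets at known positions; the simple model and parameter names below are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    def model_point(params, site, azimuth, rng):
        """Spherical model with a range offset/scale and angle offsets (assumed)."""
        r0, k, ds, da = params
        r = r0 + k * rng
        s, a = site + ds, azimuth + da
        return np.stack([r * np.cos(s) * np.cos(a),
                         r * np.cos(s) * np.sin(a),
                         r * np.sin(s)], axis=-1)

    def residuals(params, meas, ref_xyz):
        """Distance between modelled points and reference targets at known positions."""
        site, azimuth, rng = meas.T
        return (model_point(params, site, azimuth, rng) - ref_xyz).ravel()

    # meas: N x 3 array of (site, azimuth, range) readings on calibration targets,
    # ref_xyz: N x 3 array of the targets' known Cartesian coordinates (hypothetical data).
    # fit = least_squares(residuals, x0=np.array([0.0, 1.0, 0.0, 0.0]),
    #                     args=(meas, ref_xyz))
    # fit.x then holds the estimated range-finder parameters.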
The footprint correction is evaluated for both 1D and 2D range-finder scans.