
Self-supervised free space estimation in outdoor terrain

Published online by Cambridge University Press: 01 June 2018

Ali Harakeh, Daniel Asmar* and Elie Shammas

Affiliation: VRL Lab, Department of Mechanical Engineering, American University of Beirut, Riad El-Solh, 1107 2020 Beirut, Lebanon. E-mails: [email protected], [email protected]

*Corresponding author. E-mail: [email protected]

Summary

The ability to reliably estimate free space is an essential requirement for efficient and safe robot navigation. This paper presents a novel system, built upon a stochastic framework, that quickly estimates free space from stereo data using self-supervised learning. The system relies on geometric data in the close range of the robot to train a second-stage appearance-based classifier for long-range areas of the scene. Experiments conducted on board an unmanned ground vehicle demonstrate the advantages of the proposed technique over other self-supervised systems.
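To make the near-to-far self-supervised scheme described above concrete, the sketch below shows the general idea: geometric labels harvested from stereo in the near range, where depth is reliable, train an appearance-based classifier that is then applied to the far range. This is an illustrative Python sketch only, not the paper's stochastic framework; the function names, the height-threshold labeling, the thresholds, and the choice of a Gaussian naive Bayes classifier are all assumptions.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def near_range_labels(depth, height, ground_height=0.0, tol=0.15, max_range=8.0):
    """Label near-range pixels as free space (1) or not (0) from geometry.

    A real system would fit a ground model to the stereo point cloud;
    here we simply threshold reconstructed height as a stand-in (assumption).
    """
    near = depth < max_range                      # trust geometry only up close
    free = np.abs(height - ground_height) < tol   # points close to the ground plane
    return near, free.astype(int)

def appearance_features(image):
    """Per-pixel appearance features; patch color statistics would be typical."""
    return image.reshape(-1, image.shape[-1]).astype(float)

# --- Hypothetical usage on one stereo frame (synthetic stand-in data) ---
rng = np.random.default_rng(0)
image = rng.random((120, 160, 3))                  # stand-in RGB frame
depth = rng.uniform(1.0, 30.0, (120, 160))         # stand-in stereo depth map
height = np.linspace(-0.2, 0.2, 120)[:, None] * depth  # crude height proxy

near_mask, labels = near_range_labels(depth, height)
feats = appearance_features(image)

# Train on near-range pixels only, where the geometric labels are reliable.
clf = GaussianNB()
clf.fit(feats[near_mask.ravel()], labels[near_mask])

# Extend free-space estimates to the far range using appearance alone.
far_free = clf.predict(feats[(~near_mask).ravel()])
print("far-range pixels classified free:", int(far_free.sum()))

In practice, the second-stage classifier is retrained continuously as the robot moves, so that appearance models adapt to the terrain currently visible in the near range.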

Copyright © Cambridge University Press 2018

