
Multi-waypoint visual homing in piecewise linear trajectory

Published online by Cambridge University Press:  16 August 2012

Yu Fu
Affiliation:
Department of Electrical Engineering, National Taiwan University of Science and Technology, Taiwan
Tien-Ruey Hsiang*
Affiliation:
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taiwan
Sheng-Luen Chung
Affiliation:
Department of Electrical Engineering, National Taiwan University of Science and Technology, Taiwan
*Corresponding author. E-mail: [email protected]

Summary

This paper proposes an image sequence-based navigation method under the teaching-replay framework for robots traveling piecewise linear routes. The waypoints used by the robot are either positions with large heading changes or selected midway positions between junctions. The robot applies local visual homing to move between consecutive waypoints; arrival at a waypoint is detected by minimizing the average vertical displacement of the feature correspondences between the current and taught images. The performance of the proposed approach is supported by extensive experiments in hallway and office environments. Whereas the homing speed of robots using other approaches is constrained by the speed of the teaching phase, our robot is not bound by such a limit and can travel much faster without compromising homing accuracy.
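The arrival criterion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes feature correspondences are given as matched image-point pairs (taught waypoint image vs. current camera image), and that the robot monitors the average vertical displacement, declaring arrival once that value passes through its minimum and begins to grow again.

```python
def average_vertical_displacement(matches):
    """Mean absolute vertical offset over matched feature pairs.

    `matches` is a list of ((x_t, y_t), (x_c, y_c)) tuples pairing a
    feature in the taught waypoint image with its correspondence in
    the current image. (Hypothetical input format for illustration.)
    """
    if not matches:
        raise ValueError("no feature correspondences available")
    return sum(abs(y_c - y_t) for (_, y_t), (_, y_c) in matches) / len(matches)


def passed_minimum(history):
    """Arrival heuristic: the displacement shrinks as the robot nears
    the waypoint, so arrival is signalled once the most recent value
    starts to grow again after the preceding measurement."""
    return len(history) >= 2 and history[-1] > history[-2]
```

In use, the robot would append `average_vertical_displacement(matches)` to `history` on each frame while homing toward the next waypoint, and switch to the following waypoint when `passed_minimum(history)` becomes true.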

Type: Article
Copyright © Cambridge University Press 2012

