
A Closed-Form 3D Self-Positioning Algorithm for a Mobile Robot Using Vision and Guide-Marks

Published online by Cambridge University Press:  09 March 2009

Zeungnam Bien
Affiliation:
Department of Electrical Engineering, KAIST, P.O. Box 150, Cheongryangni, Seoul 130-650, Korea
Ho Yeol Kwon
Affiliation:
Department of Electrical Engineering, KAIST, P.O. Box 150, Cheongryangni, Seoul 130-650, Korea
Jeongnam Youn
Affiliation:
Department of Electrical Engineering, KAIST, P.O. Box 150, Cheongryangni, Seoul 130-650, Korea
Il Hong Suh
Affiliation:
Department of Electronic Engineering, Hanyang University, Seoul, Korea

Summary

In this paper, the 3D self-positioning problem of a mobile robot is investigated under the assumption that a set of guide points is given and camera vision serves as the detection mechanism. The minimal number of guide points needed to determine the position and orientation of a mobile robot via a single- or multiple-camera system is discussed. For practical application, a closed-form 3D self-positioning algorithm is proposed that uses a stereo camera system with triple guide points. It is further shown that a double triangular pattern is an effective guide-mark that is robust against measurement noise in feature extraction. The sensitivity of the positioning error to image errors is then analyzed by simulation. It is experimentally shown that the proposed method with triple guide points works well for a walking robot equipped with a stereo camera in a laboratory environment.
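The general idea behind closed-form self-positioning from triple guide points can be illustrated with a short sketch. This is not the paper's exact algorithm (which accounts for the stereo geometry and the double triangular guide-mark); it only shows the underlying closed-form step: given the world coordinates of three non-collinear guide points and their coordinates measured in the camera frame (e.g. by stereo triangulation), the robot's rotation and translation follow directly, with no iteration, by constructing an orthonormal frame from the three points in each coordinate system.

```python
import numpy as np

def frame_from_points(p1, p2, p3):
    """Build an orthonormal frame (3x3 matrix of column axes)
    from three non-collinear 3D points."""
    x = p2 - p1
    x = x / np.linalg.norm(x)
    z = np.cross(x, p3 - p1)          # normal of the plane of the three points
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                # completes a right-handed frame
    return np.column_stack((x, y, z))

def pose_from_triple(world_pts, cam_pts):
    """Closed-form rigid transform (R, t) with world = R @ cam + t,
    from three guide points seen in both frames."""
    Rw = frame_from_points(*world_pts)
    Rc = frame_from_points(*cam_pts)
    R = Rw @ Rc.T                     # rotation: camera frame -> world frame
    t = world_pts[0] - R @ cam_pts[0] # translation from any corresponding pair
    return R, t
```

Because both frames are built from corresponding point triples, the rotation is recovered as the product of the two frame matrices and the translation from a single point correspondence; this is why three non-collinear guide points are the practical minimum for a full 6-DOF pose.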

Type
Article
Copyright
Copyright © Cambridge University Press 1991

