
Use of character information by autonomous robots based on character string detection in daily environments

Published online by Cambridge University Press:  15 August 2014

Kimitoshi Yamazaki*
Affiliation:
Faculty of Engineering, Shinshu University, 4-17-1 Wakasato, Nagano, Nagano, 380-8553, Japan
Tomohiro Nishino
Affiliation:
Graduate School of Systems and Information Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 305-8573, Japan
Kotaro Nagahama
Affiliation:
Graduate School of Systems and Information Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 305-8573, Japan
Kei Okada
Affiliation:
Graduate School of Systems and Information Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 305-8573, Japan
Masayuki Inaba
Affiliation:
Graduate School of Systems and Information Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 305-8573, Japan
Corresponding author. E-mail: [email protected]

Summary

Characters encountered in daily environments provide valuable information to human beings and, potentially, to automated machines. This paper describes how robots working in daily environments can use character information. Our robot system finds and reads character strings by means of a recognition module that detects character candidates as closed contours in an image. This closed-contour-based method enables the detection of diverse character strings observed from different viewpoints. To cope with small and distant characters, the image processing is combined with a camera system that provides mechanical gaze adjustment. By passing the detection results to an optical character reader, the autonomous robot can provide daily assistance to humans.
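The closed-contour idea described above can be illustrated with a minimal sketch: binarize an image, extract closed foreground regions (here approximated by 4-connected components, implemented with the standard library rather than an image-processing package), and keep only regions whose bounding box is plausibly character-shaped. This is not the authors' implementation; the `min_area` and `max_aspect` thresholds are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of closed-region character-candidate detection.
# Input: a binary image as a list of rows of 0/1 ints.

def connected_components(img):
    """Return 4-connected foreground components as lists of (y, x) pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def character_candidates(img, min_area=4, max_aspect=4.0):
    """Keep components whose bounding box looks character-like.

    min_area and max_aspect are assumed thresholds for illustration:
    tiny specks and long thin strokes are rejected as non-characters.
    Returns bounding boxes as (x, y, width, height)."""
    boxes = []
    for comp in connected_components(img):
        ys = [p[0] for p in comp]
        xs = [p[1] for p in comp]
        hgt = max(ys) - min(ys) + 1
        wid = max(xs) - min(xs) + 1
        aspect = max(hgt, wid) / min(hgt, wid)
        if hgt * wid >= min_area and aspect <= max_aspect:
            boxes.append((min(xs), min(ys), wid, hgt))
    return boxes
```

In a full pipeline such boxes would be grouped into string regions and handed to an OCR engine; here the size and aspect-ratio filter simply stands in for the richer contour-based validation the paper describes.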

Type
Articles
Copyright
Copyright © Cambridge University Press 2014 

