
Calibration of Multiple Depth Sensor Network Using Reflective Pattern on Spheres: Theory and Experiments

Published online by Cambridge University Press:  21 September 2020

Nasreen Mohsin*
Affiliation:
Networked Robotics and Sensing Laboratory, School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada E-mail: [email protected]
Shahram Payandeh
Affiliation:
Networked Robotics and Sensing Laboratory, School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada E-mail: [email protected]
*Corresponding author. E-mail: [email protected]

Summary

Depth data from time-of-flight or equivalent devices overcome the visual challenges posed by low illumination. To integrate such data from multiple sources, this paper proposes a novel tool for calibrating a network of depth sensors. The proposed tool consists of spheres carrying retro-reflective striped patterns. To estimate these spheres correctly, the paper investigates the performance of sphere estimation from infrared data and from depth data. The relationship between the sensors is then determined by computing the pose between the calibration tool and each sensor. The paper evaluates the proposed approach against other state-of-the-art approaches in terms of shape reconstruction and spatial consistency.
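The two core geometric steps the summary describes, estimating each sphere's center from depth points and relating two sensors through matched sphere centers, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a plain least-squares sphere fit (a robust pipeline would wrap it in RANSAC/MLESAC-style outlier rejection) and a standard Kabsch/Procrustes alignment of corresponding centers.

```python
import numpy as np


def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (N, 3) point cloud.

    Rewrites x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2)
    as a linear system in (a, b, c) and the constant term.
    """
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    f = np.sum(points ** 2, axis=1)
    w, *_ = np.linalg.lstsq(A, f, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, radius


def rigid_transform(P, Q):
    """Kabsch alignment: find R, t such that Q ~= P @ R.T + t.

    P and Q are (N, 3) arrays of corresponding sphere centers seen
    by two different sensors.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

With several spheres visible to both sensors, `fit_sphere` is run per sphere per sensor, and `rigid_transform` on the resulting center sets yields the extrinsic pose between the two sensors.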

Type
Article
Copyright
© The Author(s), 2020. Published by Cambridge University Press

