
A one-step calibration method without redundant parameters for a laser stripe sensor

Published online by Cambridge University Press:  08 January 2024

Yang Mao
Affiliation:
School of Mechanical and Automation Engineering, Shanghai Institute of Technology, Shanghai, China
Yu He
Affiliation:
School of Mechanical and Automation Engineering, Shanghai Institute of Technology, Shanghai, China
Chengyi Yu
Affiliation:
Shanghai Satellite Equipment Research Institute, Shanghai, China
Honghui Zhang
Affiliation:
Shanghai Platform for Smart Manufacturing, Shanghai, China
Ke Zhang*
Affiliation:
School of Mechanical and Automation Engineering, Shanghai Institute of Technology, Shanghai, China
Xiaojun Sun
Affiliation:
Shanghai Waigaoqiao Shipbuilding Co., Ltd., Shanghai, China
*
Corresponding author: Ke Zhang; Email: [email protected]

Abstract

A laser stripe sensor has two kinds of calibration methods. One is based on the homography model between the laser stripe plane and the image plane, which is called the one-step calibration method. The other is based on the simple triangular method, which is called the two-step calibration method. However, the geometrical meaning of each element in the one-step calibration method is not as clear as that in the two-step calibration method. A novel mathematical derivation is presented to reveal the geometrical meaning of each parameter in the one-step calibration method; a comparative study of the one-step and two-step calibration methods is then completed and their intrinsic relationship is derived. Furthermore, a one-step calibration method is proposed with 7 independent parameters rather than 11. Experiments are conducted to verify the accuracy and robustness of the proposed calibration method.

Type
Research Article
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Introduction

In recent decades, robotic vision has been increasingly used in many industrial branches, such as seam tracking systems (Zhang et al., 2009), weld quality inspection systems (Huang and Kovacevic, 2011; Jia et al., 2019; Yang et al., 2020), and inspection systems for automobile assembly (Baeg et al., 2008). The laser stripe sensor has gained the widest acceptance in industrial inspection due to its fast acquisition time, very simple structure, low cost, and robust nature. The inspection principle of a laser stripe sensor is that a laser projector projects a light stripe plane onto the object's surface, forming a light stripe. The light stripe is modulated by the depth of the object's surface, and consequently the distorted light stripe contains rich 3D characteristic information about the object's surface. This 3D characteristic information can be derived from 2D distorted images after 3D reconstruction. To complete 3D reconstruction, the intrinsic parameters of the camera must be established on the one hand, and the pose of the laser projector with respect to the camera must be determined on the other.

As is known, there have been two major ways to carry out the laser stripe sensor calibration process. One is based on the homography model (Huynh et al., 1999) between the light stripe plane and the image plane, which is called the one-step calibration method; the other is based on the simple triangular method (Gan and Tang, 2011), which is named the two-step calibration method. The one-step calibration method determines a 4 × 3 transformation matrix from the image coordinate system to the camera coordinate system, while the two-step calibration method determines a 4 × 4 transformation matrix which can be interpreted as the intersection of a line and a plane. Tremendous efforts have been devoted to calibrating the structured light stripe sensor, and a wide range of methods have been developed.

One-step calibration method

The one-step calibration method estimates the homography model between the light stripe plane and the image plane. The method is mainly based on the perspective, translation, and rotation transforms in a homogeneous coordinate system. The various calibration methods proposed are distinguished by the form of the calibration target and by the method used to extract calibration points (control points) for structured light stripe sensor calibration (Gan and Tang, 2011). For example, Dewar (1988) proposes a strained-thread method to obtain several control points, and Duan (2000) uses a zigzag-like calibration target instead of non-coplanar multiple thin threads. However, the shortcomings of these two methods are that the number of generated control points is limited and the accuracy of the control points is usually very poor. Xu et al. (1995) and Huynh et al. (1999) separately present an invariance of cross-ratio based method to generate control points under the perspective transformation. A 3D calibration target is used in both methods, though its form differs. A 3D calibration target is very difficult to manufacture accurately, and it is also very hard to capture a good enough image when viewing the different planes of the 3D target at once. Meanwhile, the number of control points is limited. To overcome the shortcomings of the invariance of cross-ratio based method, Wei et al. (2003) proposed an invariance of double cross-ratio based approach to estimate an arbitrary number of control points. There are also methods (Forest Collado, 2004; Niola et al., 2011) in which the 3D calibration target is constituted by a 2D calibration target and a movable platform; however, these are inconvenient to operate and the calibration process is time-consuming. Recently, several simple laser vision sensor calibration methods based on the one-step calibration method have been proposed (Abu-Nabah et al., 2016; Yi and Min, 2021): Abu-Nabah et al. (2016) used a rectangular notch calibration block, while Yi and Min (2021) used a three-dimensional calibration block.

Two-step calibration method

In the two-step calibration method, the camera is calibrated first and then the projector, and 3D reconstruction is completed based on the simple triangular method. The camera has been extensively studied in the past decades, and its modeling and calibration techniques are very mature (Tsai, 1987; Heikkila and Silven, 1997; Zhang, 2000). The equation of a plane is used to accurately represent the light stripe plane illuminated by the projector, and consequently at least three known non-collinear control points on the light stripe plane are needed to carry out projector calibration. Similar to the one-step calibration method, an invariance of cross-ratio (Zhou et al., 2005) or double cross-ratio (Wei et al., 2003) based approach is also employed to generate control points in the two-step calibration method. Mao et al. (2018) proposed a plane-constraint based calibration approach for effective and highly accurate extraction of many more control points for a structured light stripe vision sensor. There are also methods in which the 3D calibration target is constituted by a 2D calibration target and a movable platform (Li et al., 2008; Luo et al., 2014; Xie et al., 2014). Yu et al. (2017) proposed a novel mathematical model for a galvanometric laser scanner based on the two-step calibration model. Irandoust et al. (2022) investigated the effect of camera linear movement and laser rotational movement on the measurement accuracy improvement of a low-resolution/low-cost laser triangulation scanner, also based on the two-step calibration model. As mentioned above, such procedures are inconvenient to operate and the calibration process is time-consuming.

In summary, tremendous efforts have been devoted to the effective and highly accurate extraction of control points for calibrating a structured light stripe vision sensor. However, the intrinsic relationship between the above two calibration methods has not been revealed yet, and the geometrical meaning of each element in the one-step calibration method has not been discussed. In this paper, a novel mathematical derivation is proposed to discuss the geometrical meaning of each element in the one-step calibration method by redefining the 3D laser coordinate system. Moreover, the intrinsic relationship between the one-step calibration method and the two-step calibration method is revealed theoretically. The remaining sections are organized as follows. Section "Mathematical models of two calibration methods" introduces the mathematical models of the two calibration methods. In Section "Comparison between two calibration methods", the comparative study between the two calibration methods is presented: the 3D laser coordinate system in the one-step calibration method is redefined to reveal the geometrical meaning of each element in the homography matrix theoretically, and then the intrinsic relationship between the two calibration methods is established. To validate this comparison, experiments are conducted in Section "Experiments", and the paper ends with concluding remarks in Section "Conclusion".

Mathematical models of two calibration methods

One-step calibration method

The one-step calibration method is based on the homography model between the light stripe plane and the image plane. The invariance of cross-ratio (Huynh et al., 1999; Duan, 2000) or double cross-ratio (Wei et al., 2003) is used to extract control points. Finally, the extracted control points and the corresponding image points are employed to estimate the parameters of the homography model.

Homography is a 3 × 3 transformation matrix which represents the geometrical relationship between two planes in projective space. The homography model (Chen and Kak, 1987; Forest Collado, 2004; Niola et al., 2011) is used to model the transformation relationship between the image plane and the 3D world coordinate system (WCS) by adding some transformation relationships. The geometric scheme of the homography model is shown in Figure 1. Here, the camera coordinate system (CCS) is substituted for the WCS in order to obtain 3D coordinate data with respect to the camera coordinate system. {C} is the camera coordinate system, {L} is the 3D laser coordinate system, and {I} is the image coordinate system (ICS). {L2} is a two-dimensional coordinate system which lies in the light stripe plane, and its x- and y-axes coincide with the x- and y-axes of {L}. The homography model relating the image plane and the 3D world coordinate system can be derived as follows:

(1)$$\tilde{P}_c = \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \mu\,{}^CT_L\,{}^LT_{L2}\,{}^{L2}T_I \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mu\,{}^CT_I \begin{bmatrix} u \\ v \\ 1 \end{bmatrix},$$
(2)$${}^CT_L = \begin{bmatrix} R_{3 \times 3} & T_{3 \times 1} \\ 0_{1 \times 3} & 1 \end{bmatrix},$$
(3)$${}^LT_{L2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
(4)$${}^{L2}T_I = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix},$$

where $\mu$ is an arbitrary scale factor, $\tilde{P}_c = \begin{bmatrix} x_c & y_c & z_c & 1 \end{bmatrix}^T$ is the homogeneous coordinate of the point in the camera coordinate system, ${}^CT_L$ represents the pose of {L} with respect to {C}, and ${}^CT_I$ is a 4 × 3 transformation matrix with 11 degrees of freedom.

Figure 1. Geometric scheme of homography model.

At least four non-collinear known control points and their corresponding image points are required to estimate the parameters in ${}^CT_I$. Equation (1) can be reorganized as follows:

(5)$$\mu \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} t_1 & t_2 & t_3 \\ t_4 & t_5 & t_6 \\ t_7 & t_8 & t_9 \\ t_{10} & t_{11} & t_{12} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}.$$

The arbitrary scale factor μ can be eliminated from Eq. (5):

(6)$$\begin{cases} t_1 u + t_2 v + t_3 - x_c (u t_{10} + v t_{11} + t_{12}) = 0, \\ t_4 u + t_5 v + t_6 - y_c (u t_{10} + v t_{11} + t_{12}) = 0, \\ t_7 u + t_8 v + t_9 - z_c (u t_{10} + v t_{11} + t_{12}) = 0. \end{cases}$$

Equation (6) indicates that a single control point correspondence contributes three linearly independent equations. Hence, at least four non-collinear control points are required to estimate the 11 independent parameters.

Equation (6) can be written in the form

(7)$$\mathbf{A}\mathbf{X} = \mathbf{0},$$

where

$$A = \begin{bmatrix} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ u_i & v_i & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -u_i x_{ci} & -v_i x_{ci} & -x_{ci} \\ 0 & 0 & 0 & u_i & v_i & 1 & 0 & 0 & 0 & -u_i y_{ci} & -v_i y_{ci} & -y_{ci} \\ 0 & 0 & 0 & 0 & 0 & 0 & u_i & v_i & 1 & -u_i z_{ci} & -v_i z_{ci} & -z_{ci} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix},$$
$$X = \begin{bmatrix} t_1 & t_2 & t_3 & t_4 & t_5 & t_6 & t_7 & t_8 & t_9 & t_{10} & t_{11} & t_{12} \end{bmatrix}^T.$$

Equation (7) can be solved by computing the null space of the matrix A. Here, $t_9$ is set to 1 for convenience of comparison with the two-step calibration method. Note that the homography model of the structured light stripe sensor considers neither radial nor tangential lens distortion in the camera model.
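For concreteness, the null-space solution of Eq. (7) can be sketched in a few lines of NumPy; this is a minimal illustration (function and variable names are ours, not part of the original method):

```python
import numpy as np

def estimate_homography_4x3(uv, xyz):
    """Estimate the 4 x 3 matrix of Eq. (5) from n >= 4 non-collinear
    control points: uv is (n, 2) image points, xyz is (n, 3) points in
    the camera frame. Returns T scaled so that t9 (= T[2, 2]) is 1."""
    rows = []
    for (u, v), (xc, yc, zc) in zip(uv, xyz):
        # Three linearly independent equations per correspondence, Eq. (6).
        rows.append([u, v, 1, 0, 0, 0, 0, 0, 0, -u * xc, -v * xc, -xc])
        rows.append([0, 0, 0, u, v, 1, 0, 0, 0, -u * yc, -v * yc, -yc])
        rows.append([0, 0, 0, 0, 0, 0, u, v, 1, -u * zc, -v * zc, -zc])
    A = np.asarray(rows, dtype=float)
    # The homogeneous solution of A X = 0 is the right-singular vector
    # of A associated with its smallest singular value.
    _, _, vt = np.linalg.svd(A)
    x = vt[-1] / vt[-1][8]          # fix the scale so that t9 = 1
    return x.reshape(4, 3)
```

With exact, noise-free correspondences, the recovered matrix reproduces Eq. (5) up to the chosen scale.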

Two-step calibration method

The two-step calibration method calibrates the camera first and then the projector, since a laser stripe sensor is composed of a camera and a projector. The camera model and the projector model have been explained in detail in the literature, so only the equations essential for the comparative study of the two calibration methods are listed below.

Here, 3D reconstruction is derived using the intersection of a line and a plane instead of the triangulation-based method. The perspective projection model of a camera is shown in Figure 2. The transformation from the camera coordinate frame to the normalized image plane frame is written as follows:

(8)$$\mu \tilde{P}_I = {}^IT_C \tilde{P}_c = \begin{bmatrix} f_u & 0 & u_0 & 0 \\ 0 & f_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \tilde{P}_c,$$

where $\tilde{P}_c = \begin{bmatrix} x_c & y_c & z_c & 1 \end{bmatrix}^T$ is the homogeneous coordinate of $P_c$ in the camera coordinate system, $\tilde{P}_I = \begin{bmatrix} u & v & 1 \end{bmatrix}^T$ is the corresponding homogeneous coordinate in the image plane coordinate system, $f_u$ and $f_v$ represent the horizontal and vertical focal lengths, respectively, and $u_0$ and $v_0$ are the coordinates of the principal point with respect to the image plane coordinate frame.

Figure 2. Perspective projection model of a camera.
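For reference, Eq. (8) reduces to the familiar pinhole projection; a minimal illustrative sketch (names are ours):

```python
import numpy as np

def project(p_c, fu, fv, u0, v0):
    """Pinhole projection of Eq. (8), without lens distortion:
    map a camera-frame point (x, y, z) to pixel coordinates (u, v)."""
    x, y, z = p_c
    return np.array([fu * x / z + u0, fv * y / z + v0])
```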

The column rank of ${}^IT_C$ is deficient, so ${}^IT_C$ has no left inverse. ${}^IT_C$, which represents the pinhole model of the camera, can be considered as defining a 3D spatial line in the camera coordinate system. Moreover, the point $P_c$ lies on the light stripe plane, so an additional equation can be given below:

(9)$$ax_c + by_c + cz_c + d = 0, \;$$

where $(a, b, c, d)$ are the estimated plane parameters, $(a, b, c)$ is the normal vector of the plane, and the point $P_c = (x_c, y_c, z_c)$ is expressed in the camera coordinate system.

Equations (8) and (9) are combined as follows:

(10)$$\mu \tilde{P}_{I0} = \mu \begin{bmatrix} u \\ v \\ 1 \\ 0 \end{bmatrix} = {}^{I0}T_C \tilde{P}_c = \begin{bmatrix} f_u & 0 & u_0 & 0 \\ 0 & f_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \\ a & b & c & d \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix},$$

where ${}^{I0}T_C$ is an invertible matrix owing to the structure of the laser stripe sensor, so $P_c$ is very easy to calculate. The first two rows of the matrix ${}^{I0}T_C$ describe a 3D spatial line in the camera coordinate system, and the last row represents a 3D plane in the same coordinate system. The intersection of this spatial line and plane is derived from Eq. (10). Comparing Eq. (1) with Eq. (10), the geometrical meaning of each element in the one-step calibration method is not as clear as that in the two-step calibration method.
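A minimal NumPy sketch of this line-plane reconstruction, assuming calibrated intrinsics and plane parameters (all names are illustrative):

```python
import numpy as np

def reconstruct(u, v, fu, fv, u0, v0, plane):
    """Recover the camera-frame point on the light stripe plane from a
    pixel (u, v) by inverting the 4 x 4 matrix of Eq. (10).
    plane = (a, b, c, d) such that a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    T = np.array([[fu, 0.0, u0, 0.0],
                  [0.0, fv, v0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [a,   b,   c,   d]])
    # Solve T * P~_c = [u, v, 1, 0]^T; the solution equals P~_c up to
    # the scale mu, which the final homogeneous division removes.
    p = np.linalg.solve(T, np.array([u, v, 1.0, 0.0]))
    return p[:3] / p[3]
```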

Comparison between two calibration methods

Geometrical insight into the one-step calibration method

Though the parameters of the homography model can be easily estimated, the geometrical meaning of each element is not as clear as in Eq. (10). Here, a transformation of the geometric scheme for the homography model is proposed, as shown in Figure 3. Compared with the geometric scheme in Figure 1, the definition of {L} and {L2} is made more specific to reflect the structure of the laser stripe sensor. The origin of {L} is the projection of the origin of {C} onto the light stripe plane. The x-axis of {L} is defined such that the intersection of the z-axis of {C} with the light stripe plane lies on the x-axis of {L}. The z-axis of {L} is the normal vector of the light stripe plane. The remaining coordinate systems are unchanged from Figure 1.

Figure 3. A transformation of geometric scheme for homography model.

Assume that the equation of the light stripe plane is given by Eq. (9) with a unit normal vector. The coordinates of the projection of the origin of {C} onto the light stripe plane are given below:

(11)$$O_L = \begin{bmatrix} -ad & -bd & -cd \end{bmatrix}^T.$$

The intersection of the z-axis of {C} with the light stripe plane is given as follows:

(12)$$P_Z = \begin{bmatrix} 0 & 0 & -d/c \end{bmatrix}^T.$$

The direction vector of the x-axis of the {L} is expressed below:

(13)$$X_L = \begin{bmatrix} \dfrac{a}{M} & \dfrac{b}{M} & \dfrac{c^2 - 1}{cM} \end{bmatrix}^T,$$
(14)$$M = \sqrt{a^2 + b^2 + (c - 1/c)^2}.$$

The direction vector of the z-axis of the {L} is given below:

(15)$$Z_L = \begin{bmatrix} a & b & c \end{bmatrix}^T.$$

The pose of {L} with respect to {C} is described as follows:

(16)$${}^CT_L = \begin{bmatrix} a/M & -b/\sqrt{a^2 + b^2} & a & -ad \\ b/M & a/\sqrt{a^2 + b^2} & b & -bd \\ (c^2 - 1)/(cM) & 0 & c & -cd \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
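The construction of ${}^CT_L$ from the plane parameters can be sketched as follows; this is an illustrative NumPy fragment under the unit-normal assumption, with the y-axis completed by a cross product rather than copied from Eq. (16):

```python
import numpy as np

def pose_of_laser_frame(plane):
    """Build {C}T{L} of Eq. (16) from the light stripe plane
    (a, b, c, d), assuming a unit normal (a, b, c) and c != 0
    (the optical axis actually intersects the plane)."""
    a, b, c, d = plane
    z_l = np.array([a, b, c])                     # Eq. (15)
    M = np.sqrt(a**2 + b**2 + (c - 1.0 / c)**2)   # Eq. (14)
    x_l = np.array([a, b, (c**2 - 1.0) / c]) / M  # Eq. (13)
    y_l = np.cross(z_l, x_l)                      # completes the right-handed frame
    o_l = np.array([-a * d, -b * d, -c * d])      # Eq. (11)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_l, y_l, z_l
    T[:3, 3] = o_l
    return T
```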

The homography matrix ${}^{L2}T_I$ in Eq. (1) is invertible and is calculated by inverting the matrix ${}^IT_{L2}$, which can be decomposed as follows:

(17)$${}^IT_{L2} = {}^IT_C\cdot {}^CT_L\cdot {}^LT_{L2}, \;$$

where ${}^IT_{L2}$ represents the pose of {L2} with respect to {I}, ${}^IT_C$ represents the pose of {C} with respect to {I}, ${}^CT_L$ represents the pose of {L} with respect to {C}, and ${}^LT_{L2}$ represents the pose of {L2} with respect to {L}.

The reason why ${}^IT_{L2}$ is chosen to calculate the homography matrix ${}^{L2}T_I$ is that ${}^IT_C$ is given in Eq. (8) and ${}^LT_{L2}$ does not lose any information (${}^{L2}T_L$ would lose the information of the z-axis of {L}). Equations (1), (3), (8), (16), and (17) are combined to derive the following equation:

(18)$${}^CT_I = {}^CT_L\,{}^LT_{L2}\,{}^{L2}T_I = \begin{bmatrix} 1/f_u & 0 & -u_0/f_u \\ 0 & 1/f_v & -v_0/f_v \\ 0 & 0 & 1 \\ -\dfrac{a}{f_u d} & -\dfrac{b}{f_v d} & \dfrac{a u_0}{f_u d} + \dfrac{b v_0}{f_v d} - \dfrac{c}{d} \end{bmatrix}.$$

Equation (1) can be rewritten as follows:

(19)$$\tilde{P}_c = \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \mu\,{}^CT_I \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mu \begin{bmatrix} 1/f_u & 0 & -u_0/f_u \\ 0 & 1/f_v & -v_0/f_v \\ 0 & 0 & 1 \\ -\dfrac{a}{f_u d} & -\dfrac{b}{f_v d} & \dfrac{a u_0}{f_u d} + \dfrac{b v_0}{f_v d} - \dfrac{c}{d} \end{bmatrix} \tilde{P}_I.$$

Both sides of Eq. (19) are premultiplied by the matrix ${}^{I0}T_C$ indicated in Eq. (10):

(20)$${}^{I0}T_C \tilde{P}_c = \mu\,{}^{I0}T_C\,{}^CT_I \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mu \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \tilde{P}_I = \mu \begin{bmatrix} u \\ v \\ 1 \\ 0 \end{bmatrix} = \mu \tilde{P}_{I0}.$$

As shown in Eq. (19), the geometrical meaning of each element in the one-step calibration method is as clear as that in the two-step calibration method. There are only 7 rather than 11 degrees of freedom in the one-step calibration method. Comparing Eq. (20) with Eq. (10), the two calibration methods are essentially identical when lens distortion is not considered.

The proposed one-step calibration method

As is known from the literature on the sensitivity matrix, if the matrix A in Eq. (7) has full rank but a large condition number (a property known as ill-conditioning), the identified parameters are strongly affected by inevitable noise (measurement noise and model errors). From the aforementioned analysis, there are four redundant parameters in the traditional one-step calibration method when the CCS is substituted for the WCS. The redundant parameters increase the condition number of the matrix A and compromise the robustness of the identified parameters. Consequently, a one-step calibration method with seven independent parameters is proposed.

Combining Eqs (5), (6), (7), and (19), a one-step calibration method without redundant parameters is presented by rewriting Eq. (7):

(21)$$\begin{bmatrix} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ u_i & 1 & 0 & 0 & -u_i x_{ci} & -v_i x_{ci} & -x_{ci} \\ 0 & 0 & v_i & 1 & -u_i y_{ci} & -v_i y_{ci} & -y_{ci} \\ 0 & 0 & 0 & 0 & -u_i z_{ci} & -v_i z_{ci} & -z_{ci} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix} \begin{bmatrix} t_1 \\ t_3 \\ t_5 \\ t_6 \\ t_{10} \\ t_{11} \\ t_{12} \end{bmatrix} = \begin{bmatrix} \vdots \\ 0 \\ 0 \\ -1 \\ \vdots \end{bmatrix},$$

where $t_i$ has the same meaning as in Eq. (7).

It can be noticed that at least three non-collinear control points with respect to the CCS are required to estimate the seven independent parameters. The two-step calibration method has four independent intrinsic parameters and three laser stripe plane parameters, which equals the number of independent parameters in the proposed one-step calibration method. Moreover, at least three non-collinear control points are required to calibrate the laser projector.
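A linear least-squares sketch of Eq. (21) in NumPy (illustrative names; in practice the control points would come from the plane-constraint extraction described in the experiments):

```python
import numpy as np

def estimate_reduced_model(uv, xyz):
    """Least-squares estimate of the 7 parameters
    (t1, t3, t5, t6, t10, t11, t12) of Eq. (21) from n >= 3
    non-collinear camera-frame control points."""
    A, b = [], []
    for (u, v), (xc, yc, zc) in zip(uv, xyz):
        # The three rows contributed by each correspondence, Eq. (21).
        A.append([u, 1, 0, 0, -u * xc, -v * xc, -xc]); b.append(0.0)
        A.append([0, 0, v, 1, -u * yc, -v * yc, -yc]); b.append(0.0)
        A.append([0, 0, 0, 0, -u * zc, -v * zc, -zc]); b.append(-1.0)
    x, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return x          # t1, t3, t5, t6, t10, t11, t12
```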

Error analysis

Theoretically, ${}^{I0}H_I$ is a 4 × 3 matrix, as shown in Eq. (20). However, the one-step calibration method simplifies the camera to a pinhole model without considering lens distortion, so the focal length and the coordinates of the principal point are inevitably different from those in the two-step calibration method. The matrix ${}^{I0}H_I$ is given below with these inevitable errors included:

(22)$${}^{I0}H_I = \begin{bmatrix} f_u + \Delta f_u & 0 & u_0 + \Delta u_0 & 0 \\ 0 & f_v + \Delta f_v & v_0 + \Delta v_0 & 0 \\ 0 & 0 & 1 & 0 \\ a & b & c & d \end{bmatrix} \begin{bmatrix} 1/f_u & 0 & -u_0/f_u \\ 0 & 1/f_v & -v_0/f_v \\ 0 & 0 & 1 \\ -\dfrac{a}{f_u d} & -\dfrac{b}{f_v d} & \dfrac{a u_0}{f_u d} + \dfrac{b v_0}{f_v d} - \dfrac{c}{d} \end{bmatrix},$$

where $\Delta f_u$ and $\Delta f_v$ represent the errors of the horizontal and vertical focal lengths, respectively, and $\Delta u_0$ and $\Delta v_0$ are the coordinate errors of the principal point.

Here, ${}^{I0}H_I(i, j)$ denotes the element in the $i$th row and $j$th column of the matrix ${}^{I0}H_I$.

(23)$${}^{I0}H_I(1, 1) = (f_u + \Delta f_u)\dfrac{1}{f_u} = 1 + \dfrac{\Delta f_u}{f_u},$$
(24)$${}^{I0}H_I(1, 3) = (f_u + \Delta f_u)\dfrac{-u_0}{f_u} + u_0 + \Delta u_0 = \Delta u_0 - \Delta f_u \dfrac{u_0}{f_u},$$
(25)$${}^{I0}H_I(2, 2) = (f_v + \Delta f_v)\dfrac{1}{f_v} = 1 + \dfrac{\Delta f_v}{f_v},$$
(26)$${}^{I0}H_I(2, 3) = (f_v + \Delta f_v)\dfrac{-v_0}{f_v} + v_0 + \Delta v_0 = \Delta v_0 - \Delta f_v \dfrac{v_0}{f_v},$$
(27)$${}^{I0}H_I(1, 2) = {}^{I0}H_I(2, 1) = {}^{I0}H_I(3, 1) = {}^{I0}H_I(3, 2) = {}^{I0}H_I(4, 1) = {}^{I0}H_I(4, 2) = {}^{I0}H_I(4, 3) = 0,$$
(28)$${}^{I0}H_I(3, 3) = 1.$$

The elements ${}^{I0}H_I(1, 1)$ and ${}^{I0}H_I(2, 2)$ represent the relative errors of the horizontal and vertical focal lengths, respectively, while the elements ${}^{I0}H_I(1, 3)$ and ${}^{I0}H_I(2, 3)$ represent errors in pixels. Consequently, the errors of ${}^{I0}H_I(1, 3)$ and ${}^{I0}H_I(2, 3)$ are much larger than those of ${}^{I0}H_I(1, 1)$ and ${}^{I0}H_I(2, 2)$.
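Equations (22)-(28) can be verified numerically. The sketch below uses nominal intrinsics close to the experimental camera and small assumed calibration errors; all numbers are illustrative:

```python
import numpy as np

# Numeric check of Eqs (22)-(28): nominal intrinsics (pixels) and small
# assumed errors; the plane has a unit normal.
fu = fv = 4267.0; u0, v0 = 648.0, 483.0
dfu = dfv = 2.0; du0, dv0 = 1.5, -1.0
a, b, c, d = 0.1, 0.2, np.sqrt(0.95), -300.0

left = np.array([[fu + dfu, 0, u0 + du0, 0],
                 [0, fv + dfv, v0 + dv0, 0],
                 [0, 0, 1, 0],
                 [a, b, c, d]])
right = np.array([[1 / fu, 0, -u0 / fu],
                  [0, 1 / fv, -v0 / fv],
                  [0, 0, 1],
                  [-a / (fu * d), -b / (fv * d),
                   a * u0 / (fu * d) + b * v0 / (fv * d) - c / d]])
H = left @ right
# H[0, 0] - 1 equals dfu/fu (Eq. 23); H[0, 2] equals du0 - dfu*u0/fu (Eq. 24).
print(H[0, 0] - 1, H[0, 2])
```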

Experiments

Experiment setup

Here, a rotational laser stripe sensor is designed using a one-mirror galvanometer element as the mechanical device, as shown in Figure 4. A video of the laser sensor can be found in Yu et al. (2017). The rotational laser scanner is mainly composed of a CCD camera (Basler acA1300/30um, sensor size 4.86 mm × 3.62 mm, resolution 1296 px × 966 px) with a 16 mm lens, a laser line projector (wavelength 730 nm, line width ≤1 mm), and a one-mirror galvanometer element. To validate the comparative accuracy of the calibration methods, the galvanometer element is rotated to 23 different angles to construct 23 laser stripe sensors which differ in the pose of the light stripe plane with respect to the CCS.

Figure 4. Schematic diagram of the rotational laser stripe sensor.

Calibration procedure

Theoretically, at least four non-collinear control points with respect to the CCS are required to complete the aforementioned three calibration methods. The planar calibration target (7 × 7 dot array) is placed at 20 different poses, and each of the last two poses is captured twice: once without a laser stripe projected onto the target, for camera calibration, and once with a laser stripe, for extracting control points. This yields 20 images for camera calibration and 2 images for extracting control points.

Figure 5 shows the last two images of the planar calibration target in the camera calibration procedure. The coordinate (i, j) in Figure 5 indicates that the center of a circle is located at the ith row and the jth column. The distance between two adjacent circles is 3.75 mm in both the horizontal and vertical directions. The procedure of determining the coordinates is modeled as a single-source shortest path (SSSP) problem and solved via the Bellman–Ford algorithm. Figure 6 shows the two captured images with 23 stripe lines lying on the planar target: Figure 6(a) shows the 23 stripe lines lying on the 19th pose of the planar target, and Figure 6(b) shows the 23 stripe lines lying on the 20th pose. The poses of the last two planar targets are known after camera calibration, and the control points lying on the 23 laser stripe planes can be calculated via the plane-constraint based method. The extracted control points are used to carry out the aforementioned three calibration methods.

Figure 5. The last two images of the planar target. (a) The 19th planar target. (b) The 20th planar target.

Figure 6. Stripe lines lying on the last two planar targets. (a) Laser stripes lying on the 19th planar target. (b) Laser stripes lying on the 20th planar target.

Experiment results

(1) Robustness analysis of the two models

The traditional one-step calibration method is compared with the proposed one-step calibration method with respect to the condition number of the matrix A and the stability of the identified parameters related to the structure of the camera. The ratio of the condition numbers is used to verify the redundancy of the model via numerical analysis (Joubair and Bonev, 2014). Figure 7 shows the comparison of the condition numbers of the matrix A; the condition number of the traditional one-step calibration method is at least several hundred times larger than that of the proposed one-step calibration method.
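The redundancy effect can also be reproduced in simulation by building both design matrices from the same synthetic control points and comparing their condition numbers (an illustrative sketch; the simulated geometry is ours, not the experimental data):

```python
import numpy as np

# Condition numbers of the design matrices of Eq. (7) (12 columns) and
# Eq. (21) (7 columns), built from the same synthetic control points.
rng = np.random.default_rng(0)
uv = rng.uniform([100, 100], [1200, 900], size=(20, 2))      # pixels
xyz = rng.uniform([-50, -50, 300], [50, 50, 500], size=(20, 3))  # mm

A12, A7 = [], []
for (u, v), (x, y, z) in zip(uv, xyz):
    A12 += [[u, v, 1, 0, 0, 0, 0, 0, 0, -u*x, -v*x, -x],
            [0, 0, 0, u, v, 1, 0, 0, 0, -u*y, -v*y, -y],
            [0, 0, 0, 0, 0, 0, u, v, 1, -u*z, -v*z, -z]]
    A7 += [[u, 1, 0, 0, -u*x, -v*x, -x],
           [0, 0, v, 1, -u*y, -v*y, -y],
           [0, 0, 0, 0, -u*z, -v*z, -z]]
ratio = np.linalg.cond(np.array(A12)) / np.linalg.cond(np.array(A7))
print(f"condition-number ratio (12-param / 7-param): {ratio:.1f}")
```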

(2) Error analysis of the identified parameters

Figure 7. Comparison of condition number of two one-step calibration methods.

The one-step calibration method simplifies the camera to a pinhole model without considering lens distortion, so the focal length and the coordinates of the principal point are inevitably different in the two calibration methods. The comparison of the errors of each element in the matrix ${}^{I0}H_I$ between the two one-step calibration methods is shown in Figure 8. Elements in the matrix ${}^{I0}H_I$ obtained using the two-step calibration method are used as nominal values. Errors of the elements ${}^{I0}H_I(1, 3)$ and ${}^{I0}H_I(2, 3)$ are much larger than those of the other elements. For the aforementioned CCD camera, the nominal coordinates of the principal point are 648 px ($u_0$) and 483 px ($v_0$), and both the horizontal ($f_u$) and vertical ($f_v$) focal lengths are 4267 px. As indicated in Eq. (22) and Figure 8, the error of the element ${}^{I0}H_I(1, 1)$ is less than $5 \times 10^{-4}$, so the error of the element ${}^{I0}H_I(1, 3)$ mainly depends on $\Delta u_0$; the same holds for ${}^{I0}H_I(2, 3)$. Errors of the elements ${}^{I0}H_I(1, 1)$ and ${}^{I0}H_I(2, 2)$ are very small because the focal length is very large (about 4267 px). The error of the element ${}^{I0}H_I(3, 3)$ is always zero because $t_9$ is set to 1 when estimating the 11 independent parameters in the one-step calibration method described in Section "One-step calibration method". Errors of the remaining elements are non-zero but much smaller than the others.

(3) Accuracy evaluation

Figure 8. Errors of each element in the matrix ${}_\;^{I0} H_I$.

Here, the accuracy of the proposed one-step calibration method is evaluated by measuring a standard sphere from six different poses, as shown in Figure 9. The standard sphere is a bearing steel ball coated with matte material and measured on a CMM (Thome, 2 + (L/350) μm). The true value of the sphere radius is 12.7080 mm and its standard deviation is 12.1 μm. The proposed calibration method is compared with two other calibration models: the traditional one-step calibration model and the two-step calibration model. There are 23 stripe lines in the view of the sensor, and each pose of the stripe line is fixed in the CCS owing to the high repeatability of the stepper motor. Inspired by the look-up-table (LUT) approach, the three calibration methods are each repeated 23 times, once at each pose of the laser stripe plane, to complete the calibration of the rotational laser scanner. The accuracy comparison among the three methods is shown in Figure 10. The accuracy of the proposed calibration method is nearly the same as that of the two-step calibration method, which is consistent with the error analysis of the identified parameters shown in Figure 8.

Figure 9. (a) The experimental setup. (b) The processed gray image.

Figure 10. Accuracy comparison among three methods.

Conclusion

In this paper, two kinds of calibration methods for a laser stripe sensor are compared. The geometrical meaning of each element in the one-step calibration method is not as clear as that in the two-step calibration method, so a novel mathematical derivation is presented to reveal the geometrical meaning of each parameter in the one-step calibration method; a comparative study of the one-step and two-step calibration methods is then completed and their intrinsic relationship is derived. Meanwhile, we found that the one-step calibration method has 7 independent parameters rather than 11, and a one-step calibration method without redundant parameters is therefore proposed. Finally, experiments are conducted to verify the accuracy of the comparison and the robustness of the proposed one-step calibration method. Moreover, the proposed one-step calibration method is suitable for a seam tracking system, because the laser stripe vision sensor of a seam tracking system needs a large depth of field and its optical path is therefore designed according to the Scheimpflug theorem.

Future work will advance the proposed calibration method one step further toward practical use by designing a three-dimensional calibration target and a calibration procedure. Meanwhile, we also plan to investigate the effect of lens distortion on the measurement accuracy of the proposed calibration method and to provide guidance for its use.

Funding statement

This research was supported by the Shanghai Rising-Star Program (grant number 21QA1408600) and by the project "Key technologies of 5G application in segmentation workshop of ship assembly" (CJ04N20).

Competing interests

The authors declare none.

Yang Mao received the Ph.D. degree from Wuhan University of Science and Technology, Hubei, China in 2012. She is currently a lecturer with the School of Mechanical Engineering, Shanghai Institute of Technology, Shanghai, China. Her research interests include machine vision and on-line inspection.

Yu He is currently pursuing Master's degree at the School of Mechanical and Automation Engineering, Shanghai Institute of Technology, Shanghai, China. His research interests include machine vision and 3D reconstruction.

Chengyi Yu received his Ph.D. from Shanghai Jiaotong University, Shanghai, China in 2017. He is currently a senior engineer at the Shanghai Satellite Equipment Research Institute, Shanghai, China. His research interests include machine vision, online inspection, and robot calibration, among others. He was recognized as a recipient of the Shanghai Rising-Star Talent Plan.

Honghui Zhang received the B.S. degree from Shanghai Jiaotong University, Shanghai, China in 2016. He is currently a senior engineer at the Shanghai Platform for Smart Manufacturing, Shanghai, China. His research interests include machine vision.

Ke Zhang received his Ph.D. from Donghua University, Shanghai, China in 2005. He is currently a professor at the School of Mechanical and Automation Engineering, Shanghai Institute of Technology, Shanghai, China. His research interests include machine vision and online inspection.

Xiaojun Sun received his B.S. degree from the Dalian University of Technology, Dalian, China in 1999. He is currently a researcher-level senior engineer at Shanghai Waigaoqiao Shipbuilding Co. Ltd, Shanghai, China. His research interests include Information Technology and Intelligent Manufacturing.

References

Abu-Nabah, BA, Elsoussi, AO and Alami, AA (2016) Simple laser vision sensor calibration for surface profiling applications. Optics and Lasers in Engineering 84, 51–61.
Baeg, MH, Baeg, SH, Moon, C, Jeong, GM, Ahn, HS and Kim, DH (2008) A new robotic 3D inspection system of automotive screw hole. International Journal of Control, Automation and Systems 6, 740–745.
Chen, C and Kak, A (1987) Modeling and calibration of a structured light scanner for 3-D robot vision. In Proceedings 1987 IEEE International Conference on Robotics and Automation, Vol. 4. IEEE, pp. 807–815.
Dewar, R (1988) Self-generated Targets for Spatial Calibration of Structured-Light Optical Sectioning Sensors with Respect to an External Coordinate System. Pittsburgh: Society of Manufacturing Engineers.
Duan, F (2000) A new accurate method for the calibration of line structured light sensor. Chinese Journal of Scientific Instrument 1, 108–110.
Forest Collado, J (2004) New Methods for Triangulation-Based Shape Acquisition Using Laser Scanners. Catalonia Autonomous Region, Spain: Universitat de Girona.
Gan, Z and Tang, Q (2011) Visual Sensing and Its Applications: Integration of Laser Sensors to Industrial Robots. Zhejiang, China: Zhejiang University Press, Springer.
Heikkila, J and Silven, O (1997) A four-step camera calibration procedure with implicit image correction. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1106–1112.
Huang, W and Kovacevic, R (2011) A laser-based vision system for weld quality inspection. Sensors 11, 506–521.
Huynh, DQ, Owens, RA and Hartmann, P (1999) Calibrating a structured light stripe system: a novel approach. International Journal of Computer Vision 33, 73–86.
Irandoust, M, Emam, SM and Ansari, MA (2022) Measurement accuracy assessment of the 3D laser triangulation scanner based on the iso-disparity surfaces. Journal of the Brazilian Society of Mechanical Sciences and Engineering 44, 164–176.
Jia, NN, Li, ZY, Ren, JL, Wang, YJ and Yang, LQ (2019) A 3D reconstruction method based on grid laser and gray scale photo for visual inspection of welds. Optics and Laser Technology 119, 105648.
Joubair, A and Bonev, IA (2014) Kinematic calibration of a six-axis serial robot using distance and sphere constraints. International Journal of Advanced Manufacturing Technology 77, 515–523.
Li, J, Zhu, J, Duan, K, Tang, Q, Wang, Y, Guo, Y and Lin, X (2008) Calibration of a portable laser 3-D scanner used by a robot and its use in measurement. Optical Engineering 47, 017202.
Luo, HF, Xu, J, Binh, NH, Liu, ST, Zhang, C and Chen, K (2014) A simple calibration procedure for structured light system. Optics and Lasers in Engineering 57, 6–12.
Mao, Y, Zeng, L, Jiang, J and Yu, C (2018) Plane-constraint-based calibration method for a galvanometric laser scanner. Advances in Mechanical Engineering 10, 1687814018773670.
Niola, V, Rossi, C, Savino, S and Strano, S (2011) A method for the calibration of a 3-D laser scanner. Robotics and Computer-Integrated Manufacturing 27, 479–484.
Tsai, RY (1987) A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation 3, 323–344.
Wei, ZZ, Zhang, GJ and Xu, Y (2003) Calibration approach for structured-light-stripe vision sensor based on the invariance of double cross-ratio. Optical Engineering 42, 2956–2966.
Xie, Z, Wang, X and Chi, S (2014) Simultaneous calibration of the intrinsic and extrinsic parameters of structured-light sensors. Optics and Lasers in Engineering 58, 9–18.
Xu, G, Liu, L, Zeng, J and Shi, D (1995) A new method of calibration in 3D vision system based on structure-light. Chinese Journal of Computers 6, 450–456.
Yang, L, Liu, YH and Peng, JZ (2020) Advanced techniques of the structured light sensing in intelligent welding robots: a review. International Journal of Advanced Manufacturing Technology 110, 1027–1046.
Yi, S and Min, S (2021) A practical calibration method for stripe laser imaging system. IEEE Transactions on Instrumentation and Measurement 110, 1027–1046.
Yu, CY, Chen, XB and Xi, JT (2017) Modeling and calibration of a novel one-mirror galvanometric laser scanner. Sensors 17, 164–177.
Zhang, Z (2000) A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 1330–1334.
Zhang, L, Wu, CY and Zou, YY (2009) An on-line visual seam tracking sensor system during laser beam welding. In 2009 International Conference on Information Technology and Computer Science (ITCS), Vol. 2, pp. 361–364.
Zhou, FQ, Zhang, GJ and Jiang, J (2005) Constructing feature points for calibrating a structured light vision sensor by viewing a plane from unknown orientations. Optics and Lasers in Engineering 43, 1056–1070.