
Decoupling of the position and angular errors in laser pointing with a neural network method

Published online by Cambridge University Press: 08 September 2020

Lei Xia
Affiliation:
Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China; Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
Yuanzhang Hu
Affiliation:
Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China
Wenyu Chen
Affiliation:
Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China
Xiaoguang Li*
Affiliation:
Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China
*Correspondence to: X. Li, Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China. Email: [email protected]

Abstract

In laser-pointing-related applications, when only the centroid of a laser spot is considered, the position and angular errors of the laser beam are often coupled together. In this study, the decoupling of the position and angular errors is achieved from a single spot image by utilizing a neural network technique. In particular, the successful application of the neural network technique relies on novel experimental procedures, including using an appropriately small-focal-length lens and tilting the detector, to physically enhance the contrast between different spots. This technique, with the corresponding new system design, may prove instructive in the future design of laser-pointing-related systems.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s) 2020. Published by Cambridge University Press in association with Chinese Laser Press

1 Introduction

Accurate laser pointing is crucial for many applications such as free-space communication[1], fusion ignition[2], high-power lasers[3] and robot manipulators[4]. The position and angular errors of a laser beam should therefore be accurately measured and synchronously adjusted. In measurements based on the centroidal position of a laser spot[5–7], the two errors are often coupled together, which means that they cannot be determined with a single measurement. In many applications the pure angular error can be obtained with the detector located on the in-focus plane; in this case, however, the position error of the laser is sacrificed entirely. For applications requiring both the position and angular errors, such as fine optical systems[8], laser resonator alignment[9], laser beam drift control[10] and lithography[11,12], the common decoupling method involves making two measurements, one on the in-focus plane and the other on an out-of-focus plane. This can be implemented by repositioning detectors at different locations[9] or by splitting the beam into two paths[10–14]. Since these methods utilize only information about spot centroids, long-focal-length lenses are required to improve the sensitivity of the spot centroid displacement. Optical measurement systems using these methods therefore inevitably involve complex structures and reduced reliability.

The artificial neural network technique can establish the connection between the input and the output of a system by learning from datasets, and has been used in many fields for function approximation and pattern recognition[15,16]. In particular, this technique has already been applied to many different optical systems. In adaptive optics, neural networks have been used to derive the distorted wavefront from a simultaneous pair of in-focus and out-of-focus images of a reference star[17–19]. Breitling et al. have used neural networks to predict the angular deviation of a pulsed laser from the final four sample positions[20]. Guo et al. have utilized neural networks to reconstruct the wavefront of human eyes from the spot displacements of a Hartmann–Shack sensor[21]. Abbasi et al. have adopted neural networks to obtain the position vector of a Gaussian beam for vibration analysis from four quad-cell power distributions[22]. Yu et al. have employed neural networks to obtain the tilt, decenter and defocus of a laser diode fast-axis collimator from four parameters of the measured field distribution[23].

In this study, a neural network is applied to extract full information from the intensity distribution of a laser spot, so that the position and angular errors of a laser beam can be determined from a single spot image. The datasets for the neural network are obtained by simulating a prototype laser-pointing system with a special setup, including a tilted charge-coupled device (CCD) detector with a known defocus distance and a small-focal-length lens. This setup is designed to produce spot images with more distinct features for neural network analysis, such as higher intensity contrast and the required spot size. Compared with traditional setups, the current system offers a more compact structure and an alternative, data-driven route to high measurement accuracy, and may therefore have advantages in accuracy, reliability and synchronization for laser-pointing measurement.

2 Neural network method for laser-pointing error measurement

Our prototype laser-pointing system contains a laser source, a thin lens and a CCD detector, as shown in Figure 1. In the prototype system, for a beam tilt $T$ the corresponding spot image $M$ on the CCD is simulated through a virtual optical system method for a tilted beam[24]. The distance $u$ between the source plane and the lens is set equal to the focal length $f$, so that the image beam waist is located approximately on the focal plane with its largest waist radius $w_{02}\approx \lambda f/\left(\pi w_{01}\right)$ (half width at $1/e^2$ center intensity), and the spot radius $w_2$ on the CCD can then be expressed as $w_2\approx w_{02}\sqrt{1+{\left({\delta}_z/{Z}_{R2}\right)}^2}$, where $Z_{R2}$ is the Rayleigh length in the image space, $w_{01}$ is the waist radius at the source plane and $\lambda$ is the wavelength of the laser source.
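These relations are easy to evaluate numerically. The following minimal sketch (our own, not the original simulation code) computes the image-space beam parameters from the He-Ne values given in the next paragraph and the $f$ = 100 mm lens used later in this section:

```python
import numpy as np

# Image-space Gaussian beam parameters for the prototype system.
# Parameter values from the text; a minimal sketch of the relations above.
lam = 632.8e-9   # wavelength (m), He-Ne laser
w01 = 2.0e-3     # beam waist radius at the source plane (m)
f = 100e-3       # focal length (m); u = f in the prototype

w02 = lam * f / (np.pi * w01)   # image waist radius on the focal plane
Z_R2 = np.pi * w02**2 / lam     # Rayleigh length in the image space

def spot_radius(delta_z):
    """Spot radius w2 on a CCD defocused by delta_z from the focal plane."""
    return w02 * np.sqrt(1.0 + (delta_z / Z_R2) ** 2)

print(f"w02 = {w02 * 1e6:.2f} um, Z_R2 = {Z_R2 * 1e3:.3f} mm")
print(f"w2(delta_z = Z_R2) = {spot_radius(Z_R2) * 1e6:.2f} um")
```

For these values the image waist is about 10 μm and the image-space Rayleigh length about 0.5 mm, which sets the scale of the defocus distances considered below.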

We chose a laser of wavelength $\lambda$ = 632.8 nm and a beam waist radius $w_{01}$ = 2.0 mm with a Gaussian distribution at the beam waist, corresponding to a typical transverse electromagnetic mode (TEM$_{00}$) He-Ne laser source. The CCD had a pixel size of 0.0057 mm and an output gray level of 12 bits, providing an intensity range of 0–4095. The pixel intensities of an image are all integers to simulate analog-to-digital (A/D) conversion, which is equivalent to introducing a detection noise of less than 0.5 gray level. We limit the position offsets $a_0$ and $b_0$ to the range [−0.5, 0.5] mm, and the inclination angles $\theta_x$ and $\theta_y$ to [−25, 25] μrad. To compare the prediction performance for image sets with different system parameters, the intensity of the collimated beam at the center of the CCD is fixed to a particular value by adjusting the intensity of the laser source. A dataset composed of 12,000 spot images of 36 × 36 pixel regions, with the corresponding randomly generated beam tilts, can then be obtained.
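As an illustration of how such a dataset can be assembled, the sketch below draws random beam tilts from the stated ranges and quantizes the images to 12 bits. Note that `simulate_spot` here is a crude Gaussian stand-in with arbitrary response coefficients; it is not the virtual-optical-system simulation of Ref. [24].

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 12_000
N_PIX, PIXEL = 36, 0.0057          # image size (pixels) and pixel pitch (mm)

def random_tilt():
    """Beam tilt T = (a0, b0, theta_x, theta_y): offsets in mm drawn from
    [-0.5, 0.5], inclination angles in urad drawn from [-25, 25]."""
    a0, b0 = rng.uniform(-0.5, 0.5, size=2)
    tx, ty = rng.uniform(-25.0, 25.0, size=2)
    return np.array([a0, b0, tx, ty])

def quantize_12bit(intensity):
    """Simulate A/D conversion: clip to the 12-bit range 0-4095 and round
    to integer gray levels (< 0.5 gray level of quantization noise)."""
    return np.rint(np.clip(intensity, 0.0, 4095.0)).astype(np.int16)

def simulate_spot(tilt):
    """Crude Gaussian stand-in for the virtual-optical-system simulation of
    Ref. [24]; the centroid response (kp, ka) and spot radius are arbitrary."""
    kp, ka = 0.02, 0.01            # centroid shift (mm) per mm / per urad of error
    cx = kp * tilt[0] + ka * tilt[2]
    cy = kp * tilt[1] + ka * tilt[3]
    r = (np.arange(N_PIX) - (N_PIX - 1) / 2) * PIXEL
    x, y = np.meshgrid(r, r)
    w2 = 0.04                      # spot radius on the CCD (mm), assumed
    return 3000.0 * np.exp(-2.0 * ((x - cx) ** 2 + (y - cy) ** 2) / w2 ** 2)

dataset = []
for _ in range(N_SAMPLES):
    tilt = random_tilt()
    dataset.append((quantize_12bit(simulate_spot(tilt)), tilt))
```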

Figure 1 Prototype laser-pointing system. S is the laser source; L is the thin lens; M is the spot image on the CCD; T is the beam tilt of the waist center on the source plane; $a_0$ and $\theta_x$ are the position offset and inclination angle of the beam relative to the optical axis in the x direction, respectively, and $b_0$ and $\theta_y$ are those in the y direction; $u$ is the distance between the source plane and the lens; $f$ is the focal length; $\delta_z$ is the defocus distance of the CCD. The optical axis of the system is along the z direction.

The neural network used in this study is implemented in Python[25] without any specific package. It is a feed-forward network[19] with three layers: an input layer of 36 × 36 = 1296 nodes for the normalized pixel intensities of an image, a hidden layer of 100 nodes and an output layer for the prediction of the normalized beam tilt. Samples from a dataset (10,000) are used to train the neural network for 2000 epochs with the back-propagation technique[26], and the remaining 2000 samples are used to test the performance of the neural network after each epoch of training. For the $j$th training epoch, the prediction error $E_j$ of the neural network is evaluated as

(1) \begin{align}E_j=\frac{1}{N}\sum_{n=0}^{N-1}\frac{1}{\sqrt{m}}\left\Vert \mathbf{y}_n-\mathbf{a}_n\right\Vert,\end{align}

where $\mathbf{a}_n$ is the $n$th output of the network, $\mathbf{y}_n$ is the $n$th actual (normalized) beam tilt, $m$ is the dimension of the vector $\mathbf{y}_n$ and $N$ is the number of test samples. Finally, the mean value of $E_j$ over the last 500 of the total 2000 epochs is employed to represent the prediction performance $E_{\mathrm{mean}}$ of the neural network.
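For concreteness, a minimal NumPy sketch of such a three-layer network and of the metric in Equation (1) is given below. The layer sizes and the use of back-propagation follow the text; the sigmoid activations, weight initialization and learning rate are our assumptions, since the original implementation is not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TiltNet:
    """1296-100-m feed-forward network trained by back-propagation.
    Layer sizes follow the text (n_out = 4 for (a0, b0, theta_x, theta_y),
    with tilt components rescaled to [0, 1]); activations, initialization
    and learning rate are assumptions."""

    def __init__(self, n_in=36 * 36, n_hidden=100, n_out=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hidden, n_in))
        self.b1 = np.zeros((n_hidden, 1))
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_out, n_hidden))
        self.b2 = np.zeros((n_out, 1))

    def forward(self, x):
        """x: flattened, normalized spot image of shape (1296, 1)."""
        self.a1 = sigmoid(self.W1 @ x + self.b1)         # hidden activations
        self.a2 = sigmoid(self.W2 @ self.a1 + self.b2)   # normalized tilt
        return self.a2

    def train_step(self, x, y, lr=0.1):
        """One back-propagation update with a quadratic cost on a single
        (image, normalized tilt) pair."""
        a2 = self.forward(x)
        d2 = (a2 - y) * a2 * (1.0 - a2)                    # output-layer delta
        d1 = (self.W2.T @ d2) * self.a1 * (1.0 - self.a1)  # hidden-layer delta
        self.W2 -= lr * d2 @ self.a1.T
        self.b2 -= lr * d2
        self.W1 -= lr * d1 @ x.T
        self.b1 -= lr * d1

def prediction_error(outputs, targets):
    """Equation (1); `outputs` and `targets` have shape (N, m)."""
    m = targets.shape[1]
    return np.mean(np.linalg.norm(outputs - targets, axis=1)) / np.sqrt(m)
```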

We start with the simpler case of beams tilted only along one direction (here, the x direction) to illustrate the effect of our method. We consider a vertical CCD and choose a focal length $f$ = 100 mm and a defocus distance $\delta_z = Z_{R2}$ (instead of $\delta_z = 0$ as in the traditional method), so that the position and angular errors contribute almost equally to the bound of the beam displacement on the CCD. The performance of the neural network for this case is shown in Figure 2(a). Clearly, the prediction error ultimately remains at a high level, about 0.33, implying that the network cannot separate the position and angular errors well from images on the vertical CCD. To determine the cause of this failure, we analyzed the difference between two spot images with the same centroid on the CCD. To achieve the maximum image difference, we choose two beams with the tilts $\mathbf{T}_1 = (-0.49646, -25)^{\mathrm{T}}$ and $\mathbf{T}_2 = (0.49646, 25)^{\mathrm{T}}$, both with the spot centroid at the center of the CCD. The corresponding image difference ($M_2 - M_1$) is shown in Figure 2(b), where the maximum pixel intensity is only 0.00096, which explains the failure of the error decoupling.
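The same contrast check can be reproduced with any spot simulator, e.g., with the (crude) `simulate_spot` stand-in sketched earlier:

```python
import numpy as np

# Two beams chosen in the text so that their spot centroids coincide at the
# CCD center: T = (a0 / mm, b0 / mm, theta_x / urad, theta_y / urad).
T1 = np.array([-0.49646, 0.0, -25.0, 0.0])
T2 = np.array([+0.49646, 0.0, +25.0, 0.0])

# With the virtual-optical-system simulator of Ref. [24], the maximum
# |M2 - M1| on the vertical CCD is only ~0.001 gray level; the crude
# stand-in above does not reproduce this value, but the check is the same.
diff = simulate_spot(T2) - simulate_spot(T1)
print("max |M2 - M1| =", np.abs(diff).max())
```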

Figure 2 Prediction errors $E_j$ for all epochs, spot image $M_2$ and image difference for beam tilts in the x direction. (a) and (b) show the prediction errors, the spot image and the image difference on the vertical CCD, respectively. (c) and (d) show those on the tilted CCD with a rotation of 60° around the y-axis. (e) and (f) show those on the tilted CCD with a rotation of 60° around the x-axis.

In order to enlarge the image difference, we designed the system with a tilted CCD rotated by 60° around either the y- or the x-axis. Figures 2(c) and 2(e) show the elliptical spots of image $M_2$ with tilt $\mathbf{T}_2$ on the CCD rotated around the y- and x-axes, respectively. For the same beam tilts $\mathbf{T}_1$ and $\mathbf{T}_2$, the extrema of the corresponding image differences are greatly enlarged, to −65.3 and 22.8, as shown in Figures 2(d) and 2(f), respectively. This significant enhancement can be attributed mainly to the magnified difference in incident angles and the no-longer centro-symmetric intensity distribution on the tilted CCD. As shown in the lower-left corner of Figure 2(d), since the incident angle of beam 2 is clearly larger than that of beam 1, the pattern of beam 2 exhibits a broader distribution with a lower peak value on the tilted CCD. For the pixels in Figure 2(f), the patterns of both beams deviate from the centro-symmetric distribution, giving the observed quadrupole image difference. The prediction results of the neural network are consistent with the changes in the image differences. As shown in Figures 2(c) and 2(e), the prediction performances $E_{\mathrm{mean}}$ with the CCD rotated around the y- and x-axes are 0.010 and 0.014, respectively. It can therefore be inferred that a tilted CCD helps to decouple the position and angular errors, and that rotation perpendicular to the beam tilt direction is better.
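The geometric origin of the stretched, asymmetric spot can be seen from a small coordinate sketch (our own, for illustration only): rotating the detector about the y-axis makes its pixel grid sample a range of depths while compressing its projection along x, so a fixed beam footprint spreads over more pixels along the tilt direction.

```python
import numpy as np

def tilted_pixel_coords(theta_deg, pixel=0.0057, n=36):
    """Lab-frame coordinates (mm) of an n x n pixel grid on a CCD rotated
    by theta_deg around the y-axis, with the grid centered on the optical
    axis and z measured from the CCD center."""
    t = np.radians(theta_deg)
    idx = (np.arange(n) - (n - 1) / 2) * pixel   # in-plane pixel positions
    xi, eta = np.meshgrid(idx, idx)
    x = xi * np.cos(t)   # transverse footprint shrinks by cos(theta)
    y = eta              # rotation about the y-axis leaves y unchanged
    z = xi * np.sin(t)   # pixels now sample a range of defocus depths
    return x, y, z

# At theta = 60 deg the x extent of the grid halves while the pixels span
# roughly +/- 90 um in depth, which stretches the recorded spot and breaks
# the centro-symmetry of its intensity distribution.
x, y, z = tilted_pixel_coords(60)
print(f"x span: {np.ptp(x):.3f} mm, z span: {np.ptp(z):.3f} mm")
```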

For non-Gaussian beams, the maximum difference is expected to appear at different positions but with a similar magnitude. For the flat-topped beams common in high-energy systems, for example, it would occur around the edges of the pattern; such differences should again be recognizable by a neural network, as discussed below.

3 Results and discussion of prediction in two directions

For practical prediction of beam tilts in two directions, we consider the prototype system under different combinations of the parameters $\theta$, $f$ and $\delta_z$. The tilting angle $\theta$ is chosen from 0°, 15°, 30°, 45° and 60°. The $y = -x$ axis is chosen as the rotation axis of the CCD, allowing the spot to be stretched diagonally and contained in a smaller square pixel region. The focal length $f$ is taken as 40, 60, 80, 100 or 120 mm. Since the spot images for positive and negative defocus distances are symmetrical about the focal plane, only the positive defocus distances $\delta_z/Z_{R2}$ = 0, 0.5, 1, 1.5 or 2 are considered. The ratio $\delta_z/Z_{R2}$ is held constant within each dataset, so that the spot radius changes with the focal length $f$. Finally, the 125 generated datasets are fed into the neural network to obtain the prediction performances for the beam tilts.
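The parameter sweep itself is a plain grid search, as the sketch below indicates; `generate_dataset` and `train_and_evaluate` are hypothetical wrappers for the spot simulation and the 2000-epoch training/testing procedure described above.

```python
from itertools import product

thetas = [0, 15, 30, 45, 60]         # CCD tilting angle theta (deg)
focals = [40, 60, 80, 100, 120]      # focal length f (mm)
defoci = [0.0, 0.5, 1.0, 1.5, 2.0]   # normalized defocus delta_z / Z_R2

# `generate_dataset` and `train_and_evaluate` are hypothetical wrappers;
# `train_and_evaluate` is assumed to return E_mean for one dataset.
results = {}
for theta, f, dz in product(thetas, focals, defoci):   # 125 combinations
    results[(theta, f, dz)] = train_and_evaluate(generate_dataset(theta, f, dz))

best = min(results, key=results.get)   # parameters with the lowest E_mean
print("best (theta, f, dz):", best)
```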

We first analyze the prediction performances at different tilting angles $\theta$. As can be seen from each curve in Figures 3(a) and 3(b), the prediction generally improves as the tilting angle increases. This can be understood as the larger spot size and intensity contrast produced by the tilted CCD giving the neural network better performance. In addition, for the vertical CCD with $\theta$ = 0°, the prediction performances $E_{\mathrm{mean}}$ always remain around the very large value of 0.38, regardless of the focal lengths and defocus distances, indicating that the introduction of a tilted CCD is essential for decoupling the position and angular errors.

Figure 3 Prediction performances $E_{\mathrm{mean}}$ with different focal lengths $f$, tilting angles $\theta$ and defocus distances $\delta_z$. (a) and (b) show spot samples and the prediction performance for the typical focal lengths $f$ = 40 and 100 mm, respectively. The partially enlarged plot in the dotted rectangle represents the prediction results for $\theta$ = 60°. (c) and (d) show the prediction performance for tilting angles of 45° and 60°, respectively.

To clearly elucidate the effect of the defocus distance, we focus on the prediction performances at the tilting angles $\theta$ = 45° and 60°, as shown in Figures 3(c) and 3(d). In these cases, the factors that improve the neural network performance, such as larger spot size and image contrast, compete with each other. A larger defocus distance increases the spot size but weakens the other positive effects. For $f \geq 60$ mm, the spot size may already be sufficient for the network, and the other factors play the leading role; hence, the prediction performance deteriorates with the defocus distance. When $f$ = 40 mm, the spot size is so small (5 × 5 pixels for $\delta_z$ = 0, $\theta$ = 60°, as shown in Figure 3(a)) that it plays the dominant role in prediction. The two corresponding curves therefore trend downward, with some zigzagging.

Similarly, we can derive the influence of the focal length from Figures 3(c) and 3(d). With a large defocus distance ($\delta_z/Z_{R2}$ = 1, 1.5 or 2), the prediction results clearly worsen as the focal length increases. When the defocus distance is reduced further ($\delta_z/Z_{R2}$ = 0 or 0.5), the intermediate focal lengths (60 or 80 mm) achieve better prediction results, while the focal lengths of 40 and 120 mm at the two ends give worse ones. As the focal length decreases, the negative effect of the shrinking spot size is magnified, as are the positive effects of the other factors. When the spot size is insufficient because of the small defocus distance, the two effects become comparable and balance at a larger focal length.

From the analysis above, some useful rules can be drawn. The position and angular errors cannot be decoupled from spot images on a vertical CCD. A tilted CCD solves this problem, and better prediction performance is obtained at a larger tilting angle of the CCD. A smaller focal length and defocus distance have greater potential for prediction, but they may also make the spot too small, which degrades performance. Factors that increase the spot size may therefore improve the performance. For the pixel size and error ranges in the prototype system, the optimal parameter combination is a focal length $f$ of 60 mm, a defocus distance $\delta_z$ of 0 and a tilting angle $\theta$ of 60°.

We briefly discuss the potential errors in this method. Owing to its data-based character[15], the neural network technique can exhibit an anti-interference ability in some systems[19,21]. With the distinct differences in the images provided by the tilted CCD, the current method is expected to find a one-to-one correspondence between images and pointing errors even in complex systems with some disturbances. For actual detection noise (far less than the 0.5 gray-level quantization noise introduced during A/D conversion), the influence on the prediction is considered to be rather small. For the more complex wavefront errors induced by turbulence and thermal effects during propagation, however, the impact on our technique requires further study.

4 Conclusions

In this paper, we provide a neural network method for decoupling the position and angular errors of a laser beam in laser-pointing systems. With a novel setup, including an appropriately small-focal-length lens and a tilted detector at the focal plane, the position and angular errors can be predicted from the intensity distribution of a single spot image. Compared with the common centroid method, this method has a more concise structure and great potential for high-precision measurement through both optical design and data analysis. It may be useful when both the position and angular errors are needed, or when real-time operation and low system complexity are rigorously required, as in precise optical systems or multi-beam monitoring.

Acknowledgements

The authors would like to thank Qiang Gao for helpful discussions.

References

1. Yin, J., Ren, J., Lu, H., Cao, Y., Yong, H., Wu, Y., Liu, C., Liao, S., Zhou, F., Jiang, Y., Cai, X., Xu, P., Pan, G., Jia, J., Huang, Y., Yin, H., Wang, J., Chen, Y., Peng, C., and Pan, J., Nature 488, 185 (2012).
2. Wilhelmsen, K., Awwal, A., Brunton, G., Burkhart, S., McGuigan, D., Kamm, V. M., Leach, R. Jr., Lowe-Webb, R., and Wilson, R., Fusion Eng. Des. 87, 1989 (2012).
3. Genoud, G., Wojda, F., Burza, M., Persson, A., and Wahlström, C.-G., Rev. Sci. Instrum. 82, 033102 (2011).
4. Shirinzadeh, B., Teoh, P. L., Tian, Y., Dalvand, M. M., Zhong, Y., and Liaw, H. C., Robot. Comput.-Integr. Manuf. 26, 74 (2010).
5. Beerer, M. J., Yoon, H., and Agrawal, B. N., Control Eng. Pract. 21, 122 (2013).
6. Koujelev, A. S. and Dudelzak, A. E., Opt. Eng. 47, 085003 (2008).
7. Anderson, E. H., Blankinship, R. L., Fowler, L. P., Glaese, R. M., and Janzen, P. C., Proc. SPIE 6569, 65690Q (2007).
8. Moon, I., Lee, S., and Cho, M. K., Proc. SPIE 5877, 58770I (2005).
9. Dawkins, S. T. and Luiten, A. N., Appl. Opt. 47, 1239 (2008).
10. Zhao, W., Tan, J., Qiu, L., Zou, L., Cui, J., and Shi, Z., Rev. Sci. Instrum. 76, 036101 (2005).
11. Pan, J., Viatella, J., Das, P. P., and Yamasaki, Y., Proc. SPIE 5377, 1894 (2004).
12. Lublin, L., Warkentin, D., Das, P. P., Ershov, A. I., Vipperman, J., Spangler, R. L., and Klene, B., Proc. SPIE 5040, 1682 (2003).
13. Zhou, Q., Ben-Tzvi, P., Fan, D., and Goldenberg, A. A., in 2008 International Workshop on Robotic and Sensors Environments (IEEE, 2008), p. 149.
14. Merritt, P. H. and Albertine, J. R., Opt. Eng. 52, 021005 (2012).
15. Hornik, K., Stinchcombe, M., and White, H., Neural Networks 2, 359 (1989).
16. LeCun, Y., Bengio, Y., and Hinton, G., Nature 521, 436 (2015).
17. Sandler, D. G., Barrett, T. K., Palmer, D. A., Fugate, R. Q., and Wild, W. J., Nature 351, 300 (1991).
18. Angel, J. R. P., Wizinowich, P., Lloyd-Hart, M., and Sandler, D., Nature 348, 221 (1990).
19. Wizinowich, P. L., Lloyd-Hart, M., McLeod, B., Colucci, D., Dekany, R., Wittman, D., Angel, R., McCarthy, D., Hulburd, B., and Sandler, D., Proc. SPIE 1542, 148 (1991).
20. Breitling, F., Weigel, R. S., Downer, M. C., and Tajima, T., Rev. Sci. Instrum. 72, 1339 (2001).
21. Guo, H., Korablinova, N., Ren, Q., and Bille, J., Opt. Express 14, 6456 (2006).
22. Abbasi, N. A., Landolsi, T., and Dhaouadi, R., Mechatronics 25, 44 (2015).
23. Yu, H., Rossi, G., Braglia, A., and Perrone, G., Appl. Opt. 55, 6530 (2016).
24. Xia, L., Gao, Y., and Han, X., Opt. Commun. 387, 281 (2017).
25. Nielsen, M. A., Neural Networks and Deep Learning (Determination Press, 2015).
26. Rumelhart, D. E., Hinton, G. E., and Williams, R. J., Nature 323, 533 (1986).