
An optimal visual servo trajectory planning method for manipulators based on system nondeterministic model

Published online by Cambridge University Press:  04 February 2022

Ruolong Qi
Affiliation:
School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang, Liaoning, China
Yuangui Tang*
Affiliation:
The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, Liaoning, China
Ke Zhang
Affiliation:
School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang, Liaoning, China
*
*Corresponding author. E-mail: [email protected]

Abstract

When a manipulator captures a target using a visual servo system, uncertainties arise because both the mechanical system and the visual sensors contain errors. This paper proposes an intelligent method to predict the success rate of a manipulator capturing its target in the presence of motion and sensor errors. Because the mapping between the manipulator's joint space and the Cartesian space at the end of the manipulator is nonlinear, a bounded error in the manipulator's joints produces an end-motion error range that changes continuously with joint position. At the same time, the visual servo camera measures the target from different positions and orientations, producing measurement results with different error ranges. This unknown, time-varying error property not only degrades the stability of the closed-loop control but can also cause the capture to fail. The purpose of this paper is to estimate the success probability of different capture trajectories by establishing a nondeterministic model of the manipulator control system. First, a system model comprising a motion subsystem and a feedback subsystem is established, with system errors described by Gaussian probability distributions. Bayesian estimation is then introduced into the system model to estimate the execution state of a predefined trajectory. A linear quadratic regulator (LQR) is used to simulate the input correction in the closed loop between the motion subsystem and the feedback subsystem. Finally, the probability of successfully capturing the target is obtained from the Gaussian distribution at the end point of the trajectory, using a geometric calculation relating the tolerance range to the error distribution. The effectiveness and practicability of the proposed method are demonstrated by simulation and experiment.
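The pipeline sketched in the abstract can be illustrated with a minimal numerical example. The sketch below assumes a linearized 2-D end-effector error model with Gaussian process noise, uses a finite-horizon LQR gain to stand in for the closed-loop input correction, propagates the Gaussian error covariance along a predefined trajectory, and estimates capture success by Monte Carlo sampling of the end-point error distribution against a tolerance radius. All matrices, noise levels, and the tolerance value are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Finite-horizon discrete LQR gains via backward Riccati recursion."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[k] is the gain applied at step k

def propagate_error_cov(A, B, W, gains):
    """Propagate the Gaussian tracking-error covariance under closed-loop control."""
    Sigma = np.zeros_like(A)
    for K in gains:
        Acl = A - B @ K                # closed-loop error dynamics
        Sigma = Acl @ Sigma @ Acl.T + W
    return Sigma

def capture_success_prob(Sigma, tol, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(end-point error lies inside the tolerance radius)."""
    rng = np.random.default_rng(seed)
    err = rng.multivariate_normal(np.zeros(2), Sigma, size=n_samples)
    return float(np.mean(np.linalg.norm(err, axis=1) <= tol))

# Illustrative single-integrator error model: e_{k+1} = e_k + u_k + w_k
A = np.eye(2)
B = np.eye(2)
Q, R = np.eye(2), 0.1 * np.eye(2)   # LQR weights (assumed)
W = (0.01 ** 2) * np.eye(2)         # per-step motion noise covariance (assumed)

gains = lqr_gains(A, B, Q, R, N=20)
Sigma_end = propagate_error_cov(A, B, W, gains)
p = capture_success_prob(Sigma_end, tol=0.05)
```

With feedback, the end-point covariance settles far below the open-loop random walk (which would accumulate to 20·W here), so the 0.05 tolerance yields a success probability near 1; the same machinery would report a low probability for a noisier system or a tighter tolerance.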

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press


Footnotes

Ruolong Qi, born in 1983, is currently an associate professor at the School of Mechanical Engineering, Shenyang Jianzhu University, China. He received his doctoral degree from the Shenyang Institute of Automation, Chinese Academy of Sciences, in 2017, and his bachelor's and master's degrees from Dalian University of Technology, China, in 2005 and 2008, respectively. His research interests include robot systems and intelligent robotics. Tel: +86-15904026145; E-mail: [email protected]
