
Robot skill acquisition for precision assembly of flexible flat cable with force control

Published online by Cambridge University Press: 31 October 2024

Xiaogang Song
Affiliation:
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China; Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics, Harbin Institute of Technology, Shenzhen, China; School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
Peng Xu*
Affiliation:
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China; Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics, Harbin Institute of Technology, Shenzhen, China; School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
Wenfu Xu
Affiliation:
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China; Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics, Harbin Institute of Technology, Shenzhen, China; School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
Bing Li
Affiliation:
State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China; Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics, Harbin Institute of Technology, Shenzhen, China; School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
Lei Qin
Affiliation:
Guangdong HUIBO Robot Technology Co., Ltd., Foshan, China
*Corresponding author: Peng Xu; Email: [email protected]

Abstract

Flexible flat cable (FFC) assembly is a prime challenge in electronics manufacturing. An FFC deforms easily under external force, has tiny assembly tolerances, and is fragile, all of which impede the application of robotic assembly in this field. To achieve reliable and stable robotic FFC assembly, an efficient skill acquisition strategy is presented that combines a parallel robot skill learning algorithm with adaptive impedance control. The parallel skill learning algorithm improves the efficiency of FFC assembly skill acquisition, reduces the risk of damaging the FFC, and copes with the uncertainty caused by cable deformation during assembly. Moreover, because FFC assembly is a complex, contact-rich manipulation task, an adaptive impedance controller is designed to track the contact force during assembly without precise environment information, and its stability is analyzed using a Lyapunov function. FFC assembly experiments demonstrate that the proposed method is robust and efficient.
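The force-tracking idea summarized above can be illustrated with a minimal, hypothetical one-dimensional sketch: an impedance filter shaped by a desired mass, damping, and stiffness, plus an integral-type adaptation of the reference position, drives the measured contact force to the target value without knowing the environment stiffness. This is not the authors' controller; the structure follows common adaptive impedance and force-tracking formulations, and every parameter below (m_d, b_d, k_d, sigma, k_env, f_d) is an illustrative assumption.

```python
# Minimal 1-D sketch of adaptive impedance force tracking (illustrative only).
m_d, b_d, k_d = 1.0, 120.0, 300.0   # desired impedance: mass, damping, stiffness
f_d           = 5.0                 # desired contact force [N]
sigma         = 0.01                # adaptation gain for the reference position
dt, T         = 1e-3, 3.0           # time step and horizon [s]

# Environment: a stiff wall at x = 0 with stiffness unknown to the controller;
# it is used here only to simulate the measured force.
k_env, x_wall = 5000.0, 0.0

x_r = 0.0                           # adaptive reference position (starts at the wall surface)
x_c, v_c = 0.0, 0.0                 # compliant correction generated by the impedance filter

forces = []
for _ in range(int(T / dt)):
    x_cmd = x_r + x_c                        # commanded tool position (ideal inner loop assumed)
    f_e = k_env * max(x_cmd - x_wall, 0.0)   # measured contact force from the unknown wall
    df = f_d - f_e                           # force-tracking error

    # Impedance filter: m_d*x_c'' + b_d*x_c' + k_d*x_c = df
    a_c = (df - b_d * v_c - k_d * x_c) / m_d
    v_c += a_c * dt
    x_c += v_c * dt

    # Adaptive law: drift the reference into the surface until df -> 0,
    # which removes the steady-state force error without knowing k_env.
    x_r += sigma * df * dt
    forces.append(f_e)

print(f"final contact force: {forces[-1]:.2f} N (target {f_d} N)")
```

The adaptation term stands in for the environment model: the reference position drifts until the measured force matches the target, which is why no precise environment information is required.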

Type: Research Article
Copyright: © The Author(s), 2024. Published by Cambridge University Press


Supplementary material

Song et al. supplementary material (File, 9.8 MB)