
Position control of a planar cable-driven parallel robot using reinforcement learning

Published online by Cambridge University Press: 17 March 2022

Caner Sancak* (Department of Mechanical Engineering, Karadeniz Technical University, Trabzon, Turkey)
Fatma Yamac (Department of Mechanical Engineering, Tarsus University, Mersin, Turkey)
Mehmet Itik (Department of Mechanical Engineering, Izmir Democracy University, Izmir, Turkey)

*Corresponding author. E-mail: [email protected]

Abstract

This study proposes a method based on reinforcement learning (RL) for point-to-point and dynamic reference position tracking control of a planar cable-driven parallel robot (CDPR), which is a multi-input multi-output (MIMO) system. The method eliminates the need for a tension distribution algorithm in controlling the system's dynamics and inherently optimizes the cable tensions through the reward function during the learning process. The deep deterministic policy gradient (DDPG) algorithm is used to train the RL agents on the point-to-point and dynamic reference tracking tasks. Each agent's performance is first evaluated on the task it was trained for; we then apply the agent trained for point-to-point tasks to dynamic reference tracking and vice versa. The performance of the RL agents is also compared with that of a classical PD controller. The results show that, provided the system's dynamics are learned well, RL can perform well without requiring a different controller to be designed for each task.
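The abstract states that the reward function itself drives the cable-tension optimization, removing the need for a separate tension distribution algorithm. The following minimal Python sketch illustrates that general idea, not the paper's actual implementation: a reward that penalizes both tracking error and tension effort for a planar CDPR whose RL action is the cable-tension vector. The four-cable assumption, the weights, and all function and variable names are hypothetical.

```python
import numpy as np

def reward(position, reference, tensions, w_err=1.0, w_ten=1e-3):
    """Hypothetical reward for a 4-cable planar CDPR whose RL action is the
    cable-tension vector. Penalizing tension effort alongside tracking error
    lets the agent learn a feasible tension distribution implicitly, instead
    of running a separate tension distribution algorithm at every step."""
    tracking_error = np.linalg.norm(np.asarray(position) - np.asarray(reference))
    tension_effort = float(np.sum(np.square(tensions)))
    return -(w_err * tracking_error + w_ten * tension_effort)

# Example: end-effector at (0.10, 0.05) m, target at (0.12, 0.05) m,
# cable tensions in newtons (illustrative values only).
r = reward([0.10, 0.05], [0.12, 0.05], [20.0, 18.0, 22.0, 19.0])
print(r)  # a small negative number; larger errors or tensions give lower reward
```

Because DDPG maximizes the expected return, a reward shaped this way pushes the learned policy toward tension combinations that track the reference with low overall cable effort, which is consistent with the abstract's claim that tension optimization happens inherently during learning.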

Type: Research Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press

