Crossref Citations
This article has been cited by the following publications. This list is generated based on data provided by Crossref.
Sun, Wenli, Gao, Xu and Yu, Yanli 2020. Dual Deep Neural Networks for Improving Trajectory Tracking Control of Unmanned Surface Vehicle. p. 3441.

Qu, Xingru, Liang, Xiao, Hou, Yuanhang, Li, Ye and Zhang, Rubo 2020. Finite-time sideslip observer-based synchronized path-following control of multiple unmanned underwater vehicles. Ocean Engineering, Vol. 217, p. 107941.

Cabrera-Ponce, A. A. and Martinez-Carranza, J. 2020. Pattern Recognition. Vol. 12088, p. 195.

Hu, Kai, Chen, Xu, Xia, Qingfeng, Jin, Junlan and Weng, Liguo 2021. A Control Algorithm for Sea–Air Cooperative Observation Tasks Based on a Data-Driven Algorithm. Journal of Marine Science and Engineering, Vol. 9, Issue 11, p. 1189.

Dong, Danling, Wu, Libo and Su, Jian 2021. Implementation of English "Online and Offline" Hybrid Teaching Recommendation Platform Based on Reinforcement Learning. Security and Communication Networks, Vol. 2021, p. 1.

Zhao, Wenlong, Meng, Zhijun, Wang, Kaipeng, Zhang, Jiahui and Lu, Shaoze 2021. Hierarchical Active Tracking Control for UAVs via Deep Reinforcement Learning. Applied Sciences, Vol. 11, Issue 22, p. 10595.

Kooi, Jacob E. and Babuska, Robert 2021. Inclined Quadrotor Landing using Deep Reinforcement Learning. p. 2361.

León, Benjamin L., Rimoli, Julian J. and Di Leo, Claudio V. 2021. Rotorcraft Dynamic Platform Landings Using Robotic Landing Gear. Journal of Dynamic Systems, Measurement, and Control, Vol. 143, Issue 11.

Abo Mosali, Najmaddin, Shamsudin, Syariful Syafiq, Alfandi, Omar, Omar, Rosli and Al-Fadhali, Najib 2022. Twin Delayed Deep Deterministic Policy Gradient-Based Target Tracking for Unmanned Aerial Vehicle With Achievement Rewarding and Multistage Training. IEEE Access, Vol. 10, p. 23545.

Gao, Weiwei, Li, Xiaofeng, Wang, Yanwei and Cai, Yingjie 2022. Medical Image Segmentation Algorithm for Three-Dimensional Multimodal Using Deep Reinforcement Learning and Big Data Analytics. Frontiers in Public Health, Vol. 10.

Abo Mosali, Najmaddin, Shamsudin, Syariful Syafiq, Mostafa, Salama A., Alfandi, Omar, Omar, Rosli, Al-Fadhali, Najib, Mohammed, Mazin Abed, Malik, R. Q., Jaber, Mustafa Musa and Saif, Abdu 2022. An Adaptive Multi-Level Quantization-Based Reinforcement Learning Model for Enhancing UAV Landing on Moving Targets. Sustainability, Vol. 14, Issue 14, p. 8825.

Bartolomei, Luca, Kompis, Yves, Teixeira, Lucas and Chli, Margarita 2022. Autonomous Emergency Landing for Multicopters using Deep Reinforcement Learning. p. 3392.

Ghasemi, Ali, Parivash, Farhad and Ebrahimian, Serajeddin 2022. Autonomous landing of a quadrotor on a moving platform using vision-based FOFPID control. Robotica, Vol. 40, Issue 5, p. 1431.

Sun, Xiao, Naito, Hiroshi, Namiki, Akio, Liu, Yang, Matsuzawa, Takashi and Takanishi, Atsuo 2022. Assist system for remote manipulation of electric drills by the robot "WAREC-1R" using deep reinforcement learning. Robotica, Vol. 40, Issue 2, p. 365.

Li, Zhan, Li, Chunxu, Li, Shuai, Zhu, Shuo and Samani, Hooman 2022. A sparsity-based method for fault-tolerant manipulation of a redundant robot. Robotica, Vol. 40, Issue 10, p. 3396.

Zhao, Xianli and Wang, Guixin 2022. Deep Q networks-based optimization of emergency resource scheduling for urban public health events. Neural Computing and Applications.

Li, Wenzhan, Ge, Yuan, Guan, Zhihong and Ye, Gang 2022. Synchronized Motion-Based UAV–USV Cooperative Autonomous Landing. Journal of Marine Science and Engineering, Vol. 10, Issue 9, p. 1214.

Li, Wenzhan, Ge, Yuan, Guan, Zhihong, Gao, Hongbo and Feng, Haoyu 2023. NMPC-based UAV-USV cooperative tracking and landing. Journal of the Franklin Institute, Vol. 360, Issue 11, p. 7481.

Meng, Xiangdong, Xi, Haoyang, Wei, Jinghe, He, Yuqing, Han, Jianda and Song, Aiguo 2023. Rotorcraft aerial vehicle's contact-based landing and vision-based localization research. Robotica, Vol. 41, Issue 4, p. 1127.

Deniz, Sabrullah, Wu, Yufei, Shi, Yang and Wang, Zhenbo 2023. Autonomous Landing of eVTOL Vehicles via Deep Q-Networks.