Crossref Citations
This book has been cited by the following publications. This list is generated based on data provided by Crossref.
Chen, Shuhang, Devraj, Adithya, Bernstein, Andrey and Meyn, Sean (2021). Revisiting the ODE Method for Recursive Algorithms: Fast Convergence Using Quasi Stochastic Approximation. Journal of Systems Science and Complexity, Vol. 34, Issue 5, p. 1681.
Lu, Fan, Mehta, Prashant G., Meyn, Sean P. and Neu, Gergely (2022). Convex Analytic Theory for Convex Q-Learning. p. 4065.
Zeng, Kevin, Linot, Alec J. and Graham, Michael D. (2022). Data-driven control of spatiotemporal chaos with reduced-order neural ODE-based models and reinforcement learning. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 478, Issue 2267.
Lyu, Shanshan (2023). Research on Digital Twin System Architecture for Intelligent Manufacturing Internal Control. p. 539.
Chen, Wuxia, Banerjee, Taposh, George, Jemin and Busart, Carl (2023). Reinforcement Learning with an Abrupt Model Change. p. 3014.
Cooper, Austin, Bretas, Arturo and Meyn, Sean (2023). Anomaly Detection in Power System State Estimation: Review and New Directions. Energies, Vol. 16, Issue 18, p. 6678.
Sang, Jianghui, Ahmad Khan, Zaki, Yin, Hengfu and Wang, Yupeng (2023). Reward shaping using directed graph convolution neural networks for reinforcement learning and games. Frontiers in Physics, Vol. 11.
Cammardella, Neil, Bušić, Ana and Meyn, Sean P. (2023). Kullback–Leibler-Quadratic Optimal Control. SIAM Journal on Control and Optimization, Vol. 61, Issue 5, p. 3234.
Lauand, Caio Kalil and Meyn, Sean (2023). Quasi-Stochastic Approximation: Design Principles With Applications to Extremum Seeking Control. IEEE Control Systems, Vol. 43, Issue 5, p. 111.
Dörfler, Florian, Coulson, Jeremy and Markovsky, Ivan (2023). Bridging Direct and Indirect Data-Driven Control Formulations via Regularizations and Relaxations. IEEE Transactions on Automatic Control, Vol. 68, Issue 2, p. 883.
Lu, Fan, Mathias, Joel, Meyn, Sean and Kalsi, Karanjit (2023). Convex Q-Learning in Continuous Time with Application to Dispatch of Distributed Energy Resources. p. 1529.
Hindupur, Sai Sumedh R. and Borkar, Vivek S. (2023). Online Parameter Estimation in Partially Observed Markov Decision Processes. p. 1.
Lauand, Caio, Bušić, Ana and Meyn, Sean (2023). Inverse Free Zap Stochastic Approximation (Extended Abstract). p. 1.
Zhao, Liqun, Gatsis, Konstantinos and Papachristodoulou, Antonis (2023). Stable and Safe Reinforcement Learning via a Barrier-Lyapunov Actor-Critic Approach. p. 1320.
Lauand, Caio Kalil and Meyn, Sean (2023). The Curse of Memory in Stochastic Approximation. p. 7803.
Dieuleveut, Aymeric, Fort, Gersende, Moulines, Eric and Wai, Hoi-To (2023). Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning. IEEE Transactions on Signal Processing, Vol. 71, p. 3117.
Lu, Fan and Meyn, Sean P. (2023). Convex Q Learning in a Stochastic Environment. p. 776.
Rathinasabapathy, Rajan, Malik, Atique and Fickelscherer, Richard J. (2024). Artificial Intelligence in Process Fault Diagnosis. p. 389.
Grushkovskaya, Victoria and Ebenbauer, Christian (2024). Step-Size Rules for Lie Bracket-Based Extremum Seeking With Asymptotic Convergence Guarantees. IEEE Control Systems Letters, Vol. 8, p. 1967.
Tiumentsev, Yu. V. and Zarubin, R. A. (2024). Lateral Motion Control of a Maneuverable Aircraft Using Reinforcement Learning. Optical Memory and Neural Networks, Vol. 33, Issue 1, p. 1.