
Closed-loop supersonic flow control with a high-speed experimental deep reinforcement learning framework

Published online by Cambridge University Press:  11 April 2025

Haohua Zong*
Affiliation:
School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, PR China
Yun Wu
Affiliation:
School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, PR China
Jinping Li
Affiliation:
National Key Lab of Aerospace Power System and Plasma Technology, Air Force Engineering University, Xi’an 710038, PR China
Zhi Su
Affiliation:
National Key Lab of Aerospace Power System and Plasma Technology, Air Force Engineering University, Xi’an 710038, PR China
Hua Liang
Affiliation:
National Key Lab of Aerospace Power System and Plasma Technology, Air Force Engineering University, Xi’an 710038, PR China
Corresponding author: Haohua Zong, [email protected]

Abstract

Although active flow control based on deep reinforcement learning (DRL) has been demonstrated extensively in numerical environments, real-time DRL control in experiments remains challenging to implement, largely because of the stringent time constraints imposed on data acquisition and neural-network computation. In this study, a high-speed field-programmable gate array (FPGA)-based experimental DRL (FeDRL) control framework is developed, capable of achieving a control frequency of 1–10 kHz, two orders of magnitude higher than that of the existing CPU-based framework (10 Hz). The feasibility of the FeDRL framework is tested in the rather challenging case of a supersonic backward-facing step flow at Mach 2, with an array of plasma synthetic jets and a hot-wire acting as the actuator and sensor, respectively. The closed-loop control law is represented by a radial basis function network and optimised by a classical value-based algorithm (i.e. a deep Q-network). Results show that, with only ten seconds of training, the agent is able to find a satisfactory control law that increases mixing in the shear layer by 21.2 %. Such a high training efficiency has never been reported in previous experiments (typical time cost: hours).
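The control architecture described in the abstract (a radial basis function network representing the control law, trained by a value-based temporal-difference update) can be sketched in miniature as follows. This is an illustrative toy, not the paper's implementation: the scalar state (a stand-in for a normalised hot-wire reading), the discrete action set, the synthetic reward and every hyper-parameter are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CENTRES, N_ACTIONS = 8, 4
centres = np.linspace(0.0, 1.0, N_CENTRES)   # RBF centres spanning the state range
width = 0.15                                  # shared Gaussian width (assumed)
W = np.zeros((N_CENTRES, N_ACTIONS))          # linear output weights -> Q-values

def features(s):
    """Gaussian RBF feature vector for a scalar state s."""
    return np.exp(-((s - centres) ** 2) / (2.0 * width ** 2))

def q_values(s):
    """Q-value of each discrete actuation level at state s."""
    return features(s) @ W

def td_update(s, a, r, s_next, gamma=0.9, lr=0.1):
    """One Q-learning (temporal-difference) step on the RBF weights."""
    global W
    target = r + gamma * np.max(q_values(s_next))
    err = target - q_values(s)[a]
    W[:, a] += lr * err * features(s)
    return err

# Toy training loop with epsilon-greedy exploration: the reward is 1 when
# the chosen action index matches the quartile of the state (a synthetic
# plant standing in for the wind-tunnel response).
for _ in range(5000):
    s = rng.random()
    if rng.random() < 0.2:
        a = int(rng.integers(N_ACTIONS))
    else:
        a = int(np.argmax(q_values(s)))
    r = 1.0 if a == min(int(s * N_ACTIONS), N_ACTIONS - 1) else 0.0
    td_update(s, a, r, rng.random())
```

After training, the greedy policy `np.argmax(q_values(s))` recovers the action matched to each state quartile. The RBF parameterisation keeps both the forward pass and the weight update to a handful of vector operations, which is what makes kilohertz-rate evaluation on an FPGA plausible in the first place.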

Type
JFM Papers
Copyright
© The Author(s), 2025. Published by Cambridge University Press

