
Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning

Published online by Cambridge University Press: 04 September 2024

Tim De Ryck
Affiliation:
Seminar for Applied Mathematics, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland
E-mail: [email protected]
Siddhartha Mishra
Affiliation:
Seminar for Applied Mathematics & ETH AI Center, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland
E-mail: [email protected]

Abstract


Physics-informed neural networks (PINNs) and their variants have been very popular in recent years as algorithms for the numerical simulation of both forward and inverse problems for partial differential equations. This article aims to provide a comprehensive review of currently available results on the numerical analysis of PINNs and related models that constitute the backbone of physics-informed machine learning. We provide a unified framework in which analysis of the various components of the error incurred by PINNs in approximating PDEs can be effectively carried out. We present a detailed review of available results on approximation, generalization and training errors and their behaviour with respect to the type of the PDE and the dimension of the underlying domain. In particular, we elucidate the role of the regularity of the solutions and their stability to perturbations in the error analysis. Numerical results are also presented to illustrate the theory. We identify training errors as a key bottleneck which can adversely affect the overall performance of various models in physics-informed machine learning.
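
To fix ideas, the objects of this analysis are networks $u_\theta$ trained to minimize a physics-informed loss: for an abstract PDE $\mathcal{D}[u] = f$ with data $u = g$ on the boundary, one minimizes a Monte Carlo estimate of the PDE residual plus a boundary misfit, and the total error is then, schematically, controlled by the sum of approximation, generalization and training errors. The following minimal sketch, assuming PyTorch and the one-dimensional Poisson problem $-u''(x) = \pi^2 \sin(\pi x)$ on $(0,1)$ with homogeneous Dirichlet data (exact solution $\sin(\pi x)$), illustrates this set-up; the architecture, sample sizes and optimizer settings are arbitrary illustrative choices, not configurations studied in the article.

```python
# Minimal, illustrative PINN sketch. Assumptions: PyTorch; 1D Poisson problem
# -u''(x) = pi^2 sin(pi x) on (0,1), u(0) = u(1) = 0, exact solution sin(pi x).
import torch

torch.manual_seed(0)

# Small tanh network u_theta: R -> R (width and depth chosen arbitrarily).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    """PDE residual r_theta(x) = -u_theta''(x) - f(x), via automatic differentiation."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)
    return -d2u - f

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_b = torch.tensor([[0.0], [1.0]])  # boundary collocation points

for step in range(5000):
    opt.zero_grad()
    x_int = torch.rand(128, 1)  # interior collocation points, resampled each step
    # The generalization error enters through this Monte Carlo quadrature of the loss.
    loss = pde_residual(x_int).pow(2).mean() + model(x_b).pow(2).mean()
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.3e}")
```

The gap between the loss value that the optimizer actually reaches and its global minimum is precisely the training error that the article identifies as a key bottleneck for models of this kind.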

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press
