A Natural Primal–Dual Gradient Method for Adversarial Neural Network Training on Solving Partial Differential Equations
About the Event
Computing high-dimensional partial differential equations (PDEs) remains an important yet challenging problem in computational mathematics. In this work, we propose a scalable deep learning–based algorithm for solving PDEs that integrates preconditioned gradient methods with adversarial training. By multiplying the PDE with a test function, we reformulate the problem as an inf–sup saddle-point problem and apply the Primal–Dual Hybrid Gradient (PDHG) algorithm. To enhance efficiency and stability, we introduce suitable preconditioning operators into the proximal steps of the PDHG algorithm, which leads to a natural gradient ascent–descent scheme for updating the neural network parameters. The resulting natural gradients are evaluated efficiently with a Krylov subspace method (MINRES), which allows the inversion of the preconditioning matrices to be handled implicitly through matrix–vector products. Convergence guarantees are established for the time-continuous formulation of the algorithm when applied to linear equations. We further validate the proposed method on a wide range of linear and nonlinear PDEs in dimensions ranging from 1 to 50. Numerical results demonstrate that our approach is both efficient and robust, exhibiting more stable convergence and higher accuracy than several widely used deep learning–based solvers, including physics-informed neural networks (PINNs), the Deep Ritz method, and weak adversarial networks (WANs). Further developments of the method and its applications to a broader range of scientific computing problems will also be discussed.
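To make the reformulation concrete, the display below gives a schematic version of the saddle-point problem; the operator \(\mathcal{L}\), the right-hand side \(f\), and the function classes for \(u\) and the test function \(\varphi\) are placeholders rather than the specific choices made in the talk, and boundary conditions are omitted.

\[
\mathcal{L}u = f \ \text{in } \Omega
\quad\Longrightarrow\quad
\inf_{u}\,\sup_{\varphi}\; \mathcal{E}(u,\varphi) := \int_{\Omega} \bigl(\mathcal{L}u - f\bigr)\,\varphi\,\mathrm{d}x,
\]

where \(u\) is represented by the primal network and the test function \(\varphi\) by the adversarial (dual) network.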
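At the parameter level, the preconditioned proximal steps reduce to a natural gradient ascent–descent update. The following is a generic template, with step sizes \(\tau\) and metric (preconditioning) matrices \(M\) as assumed placeholders and PDHG's extrapolation step omitted for brevity:

\[
\theta_\varphi^{k+1} = \theta_\varphi^{k} + \tau_\varphi\, M_\varphi^{-1}\, \nabla_{\theta_\varphi}\mathcal{E}\bigl(\theta_u^{k}, \theta_\varphi^{k}\bigr),
\qquad
\theta_u^{k+1} = \theta_u^{k} - \tau_u\, M_u^{-1}\, \nabla_{\theta_u}\mathcal{E}\bigl(\theta_u^{k}, \theta_\varphi^{k+1}\bigr),
\]

where each preconditioned gradient is computed iteratively rather than by forming and inverting \(M\).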
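The matrix-free evaluation of such a preconditioned gradient can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the Gram-type preconditioner J^T J + damping*I, the function name natural_direction, and the toy data are all assumptions made for the example.

import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def natural_direction(J, g, damping=1e-6):
    """Solve (J^T J + damping*I) d = g with MINRES, matrix-free."""
    n = g.shape[0]

    def matvec(v):
        # Gram matrix-vector product without ever forming J^T J explicitly
        return J.T @ (J @ v) + damping * v

    M = LinearOperator((n, n), matvec=matvec)  # symmetric positive definite
    d, info = minres(M, g)
    assert info == 0, "MINRES did not converge"
    return d

# Toy usage: the preconditioned direction d replaces the raw gradient g in
# the descent (primal) and ascent (dual) parameter updates.
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 40))  # stand-in Jacobian of network outputs
g = rng.standard_normal(40)         # stand-in loss gradient
d = natural_direction(J, g)
print(np.linalg.norm(J.T @ (J @ d) + 1e-6 * d - g))  # residual is small

The design point is that MINRES only queries the preconditioner through matrix–vector products, so the matrix is never stored or inverted explicitly; in a practical solver these products would be assembled from Jacobian-vector and vector-Jacobian products supplied by automatic differentiation.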