June  2020, 15(2): 247-259. doi: 10.3934/nhm.2020011

Deep neural network approach to forward-inverse problems

1. Department of Mathematics, Pohang University of Science and Technology, South Korea
2. Department of Mathematics and Statistics, California State University Long Beach, US

* Corresponding author: Hyung Ju Hwang

Received: January 2020; Revised: April 2020; Published: April 2020

In this paper, we construct approximate solutions of differential equations (DEs) using deep neural networks (DNNs). Furthermore, we present an architecture that includes the inverse problem: the process of finding model parameters from experimental data. That is, we provide a unified DNN framework that approximates an analytic solution and its model parameters simultaneously. The architecture consists of a feed-forward DNN with nonlinear activation functions chosen according to the DE, automatic differentiation [2], reduction of order, and a gradient-based optimization method. We also prove theoretically that the proposed DNN solution converges to an analytic solution in a suitable function space for fundamental DEs. Finally, we perform numerical experiments to validate the robustness of our simple DNN architecture on the 1D transport equation, the 2D heat equation, the 2D wave equation, and the Lotka-Volterra system.

Citation: Hyeontae Jo, Hwijae Son, Hyung Ju Hwang, Eun Heui Kim. Deep neural network approach to forward-inverse problems. Networks & Heterogeneous Media, 2020, 15 (2) : 247-259. doi: 10.3934/nhm.2020011
References:
[1] W. Arloff, K. R. B. Schmitt and L. J. Venstrom, A parameter estimation method for stiff ordinary differential equations using particle swarm optimisation, Int. J. Comput. Sci. Math., 9 (2018), 419-432. doi: 10.1504/IJCSM.2018.095506.
[2] A. G. Baydin, B. A. Pearlmutter, A. A. Radul and J. M. Siskind, Automatic differentiation in machine learning: A survey, J. Mach. Learn. Res., 18 (2017), 43pp.
[3] J. Berg and K. Nyström, Neural network augmented inverse problems for PDEs, preprint, arXiv: 1712.09685.
[4] J. Berg and K. Nyström, A unified deep artificial neural network approach to partial differential equations in complex geometries, Neurocomputing, 317 (2018), 28-41. doi: 10.1016/j.neucom.2018.06.056.
[5] G. Chavent, Nonlinear Least Squares for Inverse Problems: Theoretical Foundations and Step-by-Step Guide for Applications, Scientific Computation, Springer, New York, 2009. doi: 10.1007/978-90-481-2785-6.
[6] N. E. Cotter, The Stone-Weierstrass theorem and its application to neural networks, IEEE Trans. Neural Networks, 1 (1990), 290-295. doi: 10.1109/72.80265.
[7] R. Courant, K. Friedrichs and H. Lewy, On the partial difference equations of mathematical physics, IBM J. Res. Develop., 11 (1967), 215-234. doi: 10.1147/rd.112.0215.
[8] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2 (1989), 303-314. doi: 10.1007/BF02551274.
[9] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, 19, American Mathematical Society, Providence, RI, 2010. doi: 10.1090/gsm/019.
[10] G. E. Fasshauer, Solving partial differential equations by collocation with radial basis functions, Proceedings of Chamonix, 1997 (1996), 1-8.
[11] K. Hornik, M. Stinchcombe and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2 (1989), 359-366. doi: 10.1016/0893-6080(89)90020-8.
[12] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
[13] I. E. Lagaris, A. Likas and D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Trans. Neural Networks, 9 (1998), 987-1000. doi: 10.1109/72.712178.
[14] I. E. Lagaris, A. C. Likas and D. G. Papageorgiou, Neural-network methods for boundary value problems with irregular boundaries, IEEE Trans. Neural Networks, 11 (2000), 1041-1049. doi: 10.1109/72.870037.
[15] K. Levenberg, A method for the solution of certain non-linear problems in least squares, Quart. Appl. Math., 2 (1944), 164-168. doi: 10.1090/qam/10666.
[16] L. Jianyu, L. Siwei, Q. Yingjian and H. Yaping, Numerical solution of elliptic partial differential equation using radial basis function neural networks, Neural Networks, 16 (2003), 729-734. doi: 10.1016/S0893-6080(03)00083-2.
[17] J. Li and X. Li, Particle swarm optimization iterative identification algorithm and gradient iterative identification algorithm for Wiener systems with colored noise, Complexity, 2018 (2018), 8pp. doi: 10.1155/2018/7353171.
[18] X. Li, Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer, Neurocomputing, 12 (1996), 327-343. doi: 10.1016/0925-2312(95)00070-4.
[19] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11 (1963), 431-441. doi: 10.1137/0111030.
[20] W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys., 5 (1943), 115-133. doi: 10.1007/BF02478259.
[21] A. Paszke, et al., Automatic differentiation in PyTorch, Computer Science, (2017).
[22] M. Raissi, P. Perdikaris and G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., 378 (2019), 686-707. doi: 10.1016/j.jcp.2018.10.045.
[23] S. J. Reddi, S. Kale and S. Kumar, On the convergence of ADAM and beyond, preprint, arXiv: 1904.09237.
[24] S. A. Sarra, Adaptive radial basis function methods for time dependent partial differential equations, Appl. Numer. Math., 54 (2005), 79-94. doi: 10.1016/j.apnum.2004.07.004.
[25] P. Tsilifis, I. Bilionis, I. Katsounaros and N. Zabaras, Computationally efficient variational approximations for Bayesian inverse problems, J. Verif. Valid. Uncert., 1 (2016), 13pp. doi: 10.1115/1.4034102.
[26] F. Yaman, V. G. Yakhno and R. Potthast, A survey on inverse problems for applied sciences, Math. Probl. Eng., 2013 (2013), 19pp. doi: 10.1155/2013/976837.


Figure 1.  Network architecture
Figure 2.  Experimental result for 1D transport equation
Figure 3.  Experimental result for 2D heat equation with $ u(0,x,y) = x(1-x)y(1-y) $
Figure 4.  Experimental result for 2D heat equation with $ u(0,x,y) = 1 \text{, if } (x,y) \in \Omega, 0 \text{, otherwise} $
Figure 5.  Experimental result for 2D wave equation
Figure 6.  Experimental result for Lotka-Volterra equation
Figure 7.  Experimental result for CFL condition
Algorithm 1: Training
1: procedure train(number of epochs)
2:   Initialize the neural network.
3:   for number of epochs do
4:     Sample $ z^1, z^2, \dots, z^m $ from the uniform distribution over $ \Omega $
5:     Sample $ z_I^1, z_I^2, \dots, z_I^m $ from the uniform distribution over $ \{0\} \times \Omega $
6:     Sample $ z_B^1, z_B^2, \dots, z_B^m $ from the uniform distribution over $ \partial\Omega $
7:     Sample $ k $ observation points $ z_O^1, z_O^2, \dots, z_O^k $
8:     Find the true values $ u_j = u_p(z_O^j) $ for $ j = 1, 2, \dots, k $
9:     Update the neural network by descending its stochastic gradient:
$ \nabla_{w, b} \Big[ \frac{1}{m} \sum\limits_{i = 1}^m \big[ L_p(u_N)(z^i)^2 + (u_N(z_I^i)-f(z_I^i))^2 + (u_N(z_B^i)-g(z_B^i))^2 \big] + \frac{1}{k}\sum\limits_{j = 1}^k (u_N(z_O^j)-u_j)^2 \Big] $
10:   end for
11: end procedure
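As a concrete illustration, the forward part of Algorithm 1 can be sketched in plain Python for the 1D transport equation $ u_t + u_x = 0 $ with initial condition $ u(0,x) = \sin(\pi x) $. This is a minimal sketch, not the authors' implementation: it uses a single hidden tanh layer instead of the deeper networks of Table 2, central finite differences in place of automatic differentiation [2], and numerically estimated parameter gradients in place of a framework optimizer such as Adam [12].

```python
import math, random

random.seed(0)

# Tiny one-hidden-layer network u_N(t, x); parameters packed in one flat list:
# W1 (H x 2), b1 (H), W2 (H), b2 (1).
H = 8
params = [random.uniform(-0.5, 0.5) for _ in range(4 * H + 1)]

def u_N(p, t, x):
    out = p[4 * H]                                   # output bias b2
    for i in range(H):
        h = math.tanh(p[2 * i] * t + p[2 * i + 1] * x + p[2 * H + i])
        out += p[3 * H + i] * h
    return out

# Collocation points: interior of Omega = [0,1]^2 and the initial slice {0} x [0,1]
interior = [(random.random(), random.random()) for _ in range(10)]
init_pts = [random.random() for _ in range(8)]

def loss(p, eps=1e-4):
    total = 0.0
    for t, x in interior:                            # PDE residual (u_t + u_x)^2,
        u_t = (u_N(p, t + eps, x) - u_N(p, t - eps, x)) / (2 * eps)
        u_x = (u_N(p, t, x + eps) - u_N(p, t, x - eps)) / (2 * eps)
        total += (u_t + u_x) ** 2 / len(interior)
    for x in init_pts:                               # initial-condition term
        total += (u_N(p, 0.0, x) - math.sin(math.pi * x)) ** 2 / len(init_pts)
    return total

def grad(p, h=1e-6):
    base = loss(p)                                   # forward-difference gradient
    return [(loss(p[:i] + [p[i] + h] + p[i + 1:]) - base) / h for i in range(len(p))]

history = [loss(params)]
for _ in range(25):                                  # plain gradient descent
    g = grad(params)
    params = [w - 0.02 * gi for w, gi in zip(params, g)]
    history.append(loss(params))

print(history[0], history[-1])                       # total loss decreases
```

The boundary and observation terms of Algorithm 1 would enter the loss in exactly the same additive way; they are omitted here only to keep the sketch short.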
Table 1.  Information of grid and observation points
Data Generation
Grid Range Number of Grid Points Number of Observations
1D Transport $ (t,x) \in [0,1]\times[0,1] $ $ 17 \times 100 $ 17
2D Heat $ (t,x,y) \in [0,1]\times[0,1]\times[0,1] $ $ 100 \times 100 \times 100 $ 13
2D Wave $ (t,x,y) \in [0,1]\times[0,1]\times[0,1] $ $ 100 \times 100 \times 100 $ 61
Lotka-Volterra $ t \in [0,100] $ 20,000 40
Table 2.  Neural network architecture
Neural Network Architecture
Fully Connected Layers Activation Functions Learning Rate
1D Transport 2(input)-128-256-128-1(output) ReLU $ 10^{-5} $
2D Heat 3(input)-128-128-1(output) Sin, Sigmoid $ 10^{-5} $
2D Wave 3(input)-128-256-128-1(output) Sin, Tanh $ 10^{-5} $
Lotka-Volterra 1(input)-64-64-2(output) Sin $ 10^{-4} $
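The networks above approximate the solution and the model parameters jointly; the inverse-problem ingredient can be isolated in a few lines. The following is a hypothetical toy, not the paper's experiment: for the transport equation $ u_t + a u_x = 0 $ with unknown speed $ a $, candidate solutions $ u(t,x;a) = \sin(\pi(x - at)) $ are scored against observations generated with a true speed of $ 0.5 $, and $ a $ is recovered by gradient descent on the observation misfit, i.e. the $ \frac{1}{k}\sum_j (u_N(z_O^j)-u_j)^2 $ term of Algorithm 1.

```python
import math, random

random.seed(1)

TRUE_A = 0.5          # hidden model parameter to be recovered (toy assumption)

# k observation points z_O^j and values u_j = u_p(z_O^j), as in Algorithm 1
obs = [(random.random(), random.random()) for _ in range(40)]
u_obs = [math.sin(math.pi * (x - TRUE_A * t)) for t, x in obs]

def u(a, t, x):
    # candidate solution family of u_t + a u_x = 0 with u(0, x) = sin(pi x)
    return math.sin(math.pi * (x - a * t))

def obs_loss(a):
    # observation term (1/k) * sum_j (u(z_O^j; a) - u_j)^2
    return sum((u(a, t, x) - uo) ** 2 for (t, x), uo in zip(obs, u_obs)) / len(obs)

def d_loss(a, h=1e-6):
    return (obs_loss(a + h) - obs_loss(a - h)) / (2 * h)

a = 0.0               # initial guess for the unknown speed
for _ in range(200):  # gradient descent on the observation term alone
    a -= 0.1 * d_loss(a)

print(round(a, 3))    # approaches TRUE_A
```

In the paper's setting the candidate solution is itself a neural network, so the same observation loss simultaneously shapes the network weights and the model parameter; here the solution family is known in closed form only to keep the inverse step visible.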