June 2020, 15(2): 247-259. doi: 10.3934/nhm.2020011

Deep neural network approach to forward-inverse problems

1. Department of Mathematics, Pohang University of Science and Technology, South Korea
2. Department of Mathematics and Statistics, California State University Long Beach, US

* Corresponding author: Hyung Ju Hwang

Received: January 2020. Revised: April 2020. Published: April 2020.

In this paper, we construct approximate solutions of differential equations (DEs) using deep neural networks (DNNs). Furthermore, we present an architecture that includes the process of finding model parameters from experimental data, i.e., the inverse problem. That is, we provide a unified DNN framework that approximates an analytic solution and its model parameters simultaneously. The architecture consists of a feed-forward DNN with nonlinear activation functions depending on the DEs, automatic differentiation [2], reduction of order, and a gradient-based optimization method. We also prove theoretically that the proposed DNN solution converges to an analytic solution in a suitable function space for fundamental DEs. Finally, we perform numerical experiments to validate the robustness of our simple DNN architecture on the 1D transport equation, the 2D heat equation, the 2D wave equation, and the Lotka-Volterra system.
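For concreteness, the following is a minimal sketch (not the authors' released code) of how automatic differentiation [2] yields the residual of the 1D transport equation $ u_t + a u_x = 0 $ for a candidate DNN solution. The network `u_N`, the collocation points, and the PyTorch realization are assumptions for illustration; in the inverse problem, the speed `a` can itself be a trainable parameter.

```python
# Hypothetical sketch: residual of u_t + a*u_x = 0 via automatic
# differentiation (PyTorch assumed; not the authors' released code).
import torch

def transport_residual(u_N, t, x, a):
    # Track gradients with respect to the input coordinates.
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = u_N(torch.stack([t, x], dim=-1))
    # create_graph=True keeps the graph so the squared residual can be
    # backpropagated through these derivatives into the network weights
    # (and into `a` when it is a trainable parameter).
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return u_t + a * u_x  # vanishes identically for an exact solution
```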

Citation: Hyeontae Jo, Hwijae Son, Hyung Ju Hwang, Eun Heui Kim. Deep neural network approach to forward-inverse problems. Networks & Heterogeneous Media, 2020, 15 (2) : 247-259. doi: 10.3934/nhm.2020011
References:

[1] W. Arloff, K. R. B. Schmitt and L. J. Venstrom, A parameter estimation method for stiff ordinary differential equations using particle swarm optimisation, Int. J. Comput. Sci. Math., 9 (2018), 419-432. doi: 10.1504/IJCSM.2018.095506.
[2] A. G. Baydin, B. A. Pearlmutter, A. A. Radul and J. M. Siskind, Automatic differentiation in machine learning: A survey, J. Mach. Learn. Res., 18 (2017), 43pp.
[3] J. Berg and K. Nyström, Neural network augmented inverse problems for PDEs, preprint, arXiv:1712.09685.
[4] J. Berg and K. Nyström, A unified deep artificial neural network approach to partial differential equations in complex geometries, Neurocomputing, 317 (2018), 28-41. doi: 10.1016/j.neucom.2018.06.056.
[5] G. Chavent, Nonlinear Least Squares for Inverse Problems: Theoretical Foundations and Step-by-Step Guide for Applications, Scientific Computation, Springer, New York, 2009. doi: 10.1007/978-90-481-2785-6.
[6] N. E. Cotter, The Stone-Weierstrass theorem and its application to neural networks, IEEE Trans. Neural Networks, 1 (1990), 290-295. doi: 10.1109/72.80265.
[7] R. Courant, K. Friedrichs and H. Lewy, On the partial difference equations of mathematical physics, IBM J. Res. Develop., 11 (1967), 215-234. doi: 10.1147/rd.112.0215.
[8] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2 (1989), 303-314. doi: 10.1007/BF02551274.
[9] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, 19, American Mathematical Society, Providence, RI, 2010. doi: 10.1090/gsm/019.
[10] G. E. Fasshauer, Solving partial differential equations by collocation with radial basis functions, Proceedings of Chamonix, 1997 (1996), 1-8.
[11] K. Hornik, M. Stinchcombe and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2 (1989), 359-366. doi: 10.1016/0893-6080(89)90020-8.
[12] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv:1412.6980.
[13] I. E. Lagaris, A. Likas and D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Trans. Neural Networks, 9 (1998), 987-1000. doi: 10.1109/72.712178.
[14] I. E. Lagaris, A. C. Likas and D. G. Papageorgiou, Neural-network methods for boundary value problems with irregular boundaries, IEEE Trans. Neural Networks, 11 (2000), 1041-1049. doi: 10.1109/72.870037.
[15] K. Levenberg, A method for the solution of certain non-linear problems in least squares, Quart. Appl. Math., 2 (1944), 164-168. doi: 10.1090/qam/10666.
[16] L. Jianyu, L. Siwei, Q. Yingjian and H. Yaping, Numerical solution of elliptic partial differential equation using radial basis function neural networks, Neural Networks, 16 (2003), 729-734. doi: 10.1016/S0893-6080(03)00083-2.
[17] J. Li and X. Li, Particle swarm optimization iterative identification algorithm and gradient iterative identification algorithm for Wiener systems with colored noise, Complexity, 2018 (2018), 8pp. doi: 10.1155/2018/7353171.
[18] X. Li, Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer, Neurocomputing, 12 (1996), 327-343. doi: 10.1016/0925-2312(95)00070-4.
[19] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11 (1963), 431-441. doi: 10.1137/0111030.
[20] W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys., 5 (1943), 115-133. doi: 10.1007/BF02478259.
[21] A. Paszke, et al., Automatic differentiation in PyTorch, 2017.
[22] M. Raissi, P. Perdikaris and G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., 378 (2019), 686-707. doi: 10.1016/j.jcp.2018.10.045.
[23] S. J. Reddi, S. Kale and S. Kumar, On the convergence of Adam and beyond, preprint, arXiv:1904.09237.
[24] S. A. Sarra, Adaptive radial basis function methods for time dependent partial differential equations, Appl. Numer. Math., 54 (2005), 79-94. doi: 10.1016/j.apnum.2004.07.004.
[25] P. Tsilifis, I. Bilionis, I. Katsounaros and N. Zabaras, Computationally efficient variational approximations for Bayesian inverse problems, J. Verif. Valid. Uncert., 1 (2016), 13pp. doi: 10.1115/1.4034102.
[26] F. Yaman, V. G. Yakhno and R. Potthast, A survey on inverse problems for applied sciences, Math. Probl. Eng., 2013 (2013), 19pp. doi: 10.1155/2013/976837.

Figure 1.  Network architecture
Figure 2.  Experimental result for 1D transport equation
Figure 3.  Experimental result for 2D heat equation with $ u(0,x,y) = x(1-x)y(1-y) $
Figure 4.  Experimental result for 2D heat equation with $ u(0,x,y) = \begin{cases} 1, & (x,y) \in \Omega \\ 0, & \text{otherwise} \end{cases} $
Figure 5.  Experimental result for 2D wave equation
Figure 6.  Experimental result for Lotka-Volterra equation
Figure 7.  Experimental result for CFL condition
Algorithm 1: Training
1: procedure train(number of epochs)
2:   Initialize the neural network.
3:   for number of epochs do
4:     Sample $ z^1, z^2, \dots, z^m $ from the uniform distribution over $ \Omega $
5:     Sample $ z_I^1, z_I^2, \dots, z_I^m $ from the uniform distribution over $ \{0\} \times \Omega $
6:     Sample $ z_B^1, z_B^2, \dots, z_B^m $ from the uniform distribution over $ \partial\Omega $
7:     Sample $ k $ observation points $ z_O^1, z_O^2, \dots, z_O^k $
8:     Find the true values $ u_j = u_p(z_O^j) $ for $ j = 1, 2, \dots, k $
9:     Update the neural network by descending its stochastic gradient:
$ \nabla_{w, b} \left[ \frac{1}{m} \sum\limits_{i = 1}^m \left[ L_p(u_N)(z^i)^2 + (u_N(z_I^i)-f(z_I^i))^2 + (u_N(z_B^i)-g(z_B^i))^2 \right] + \frac{1}{k}\sum\limits_{j = 1}^k (u_N(z_O^j)-u_j)^2 \right] $
10:   end for
11: end procedure
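Under the same hedged assumptions, Algorithm 1 might be realized in PyTorch as sketched below. The residual $ L_p(u_N) $, initial data $ f $, boundary data $ g $, and observations $ u_j $ follow the notation above; `sample_boundary` is a hypothetical helper, and the box-shaped domain with time as the first coordinate and the Adam optimizer [12] are illustrative choices, not details taken from the paper.

```python
# A sketch of Algorithm 1 (assumptions: box domain with corners lo/hi whose
# first coordinate is time, Adam optimizer, user-supplied residual L_p).
import torch

def train(u_N, residual, f, g, z_O, u_obs, lo, hi, epochs, m, lr=1e-5):
    opt = torch.optim.Adam(u_N.parameters(), lr=lr)
    for _ in range(epochs):
        # Lines 4-7: sample interior, initial, and boundary points.
        z = lo + (hi - lo) * torch.rand(m, lo.numel())
        z_I = z.clone()
        z_I[:, 0] = 0.0                    # the {0} x Omega slice
        z_B = sample_boundary(m, lo, hi)   # hypothetical helper, not shown
        # Line 9: descend the stochastic gradient of the combined loss.
        loss = (residual(u_N, z) ** 2).mean() \
             + ((u_N(z_I) - f(z_I)) ** 2).mean() \
             + ((u_N(z_B) - g(z_B)) ** 2).mean() \
             + ((u_N(z_O) - u_obs) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```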
Table 1.  Information on grid and observation points (data generation)

Problem | Grid Range | Number of Grid Points | Number of Observations
1D Transport | $ (t,x) \in [0,1]\times[0,1] $ | $ 17 \times 100 $ | 17
2D Heat | $ (t,x,y) \in [0,1]\times[0,1]\times[0,1] $ | $ 100 \times 100 \times 100 $ | 13
2D Wave | $ (t,x,y) \in [0,1]\times[0,1]\times[0,1] $ | $ 100 \times 100 \times 100 $ | 61
Lotka-Volterra | $ t \in [0,100] $ | 20,000 | 40
Table 2.  Neural network architecture

Problem | Fully Connected Layers | Activation Functions | Learning Rate
1D Transport | 2(input)-128-256-128-1(output) | ReLU | $ 10^{-5} $
2D Heat | 3(input)-128-128-1(output) | Sin, Sigmoid | $ 10^{-5} $
2D Wave | 3(input)-128-256-128-1(output) | Sin, Tanh | $ 10^{-5} $
Lotka-Volterra | 1(input)-64-64-2(output) | Sin | $ 10^{-4} $
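As an illustration of Table 2, one plausible PyTorch construction of the "2D Wave" row (layers 3-128-256-128-1 with Sin and Tanh activations) is sketched below. The table does not specify where each activation is placed, so the alternation shown is an assumption.

```python
# Assumed realization of the "2D Wave" row of Table 2; the ordering of the
# Sin and Tanh activations between layers is a guess, not from the paper.
import torch
import torch.nn as nn

class Sin(nn.Module):
    """Sine activation, listed in Table 2 but not built into torch.nn."""
    def forward(self, x):
        return torch.sin(x)

wave_net = nn.Sequential(
    nn.Linear(3, 128), Sin(),        # input: (t, x, y)
    nn.Linear(128, 256), nn.Tanh(),
    nn.Linear(256, 128), nn.Tanh(),
    nn.Linear(128, 1),               # output: u_N(t, x, y)
)
optimizer = torch.optim.Adam(wave_net.parameters(), lr=1e-5)  # Table 2 rate
```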