doi: 10.3934/dcdss.2021088
Online First


Analytic continuation of noisy data using Adams Bashforth residual neural network

Xuping Xie, Feng Bao, Thomas Maier and Clayton Webster

1. New York University, New York, NY 10012
2. Florida State University, Tallahassee, FL 32304
3. Oak Ridge National Laboratory, Oak Ridge, TN 37830
4. University of Tennessee-Knoxville, Knoxville, TN 37916

* Corresponding author

Received: February 2021. Revised: April 2021. Early access: August 2021.

We propose a data-driven learning framework for the analytic continuation problem in numerical quantum many-body physics. Designing an accurate and efficient framework for the analytic continuation of imaginary-time computational data is a grand challenge that has hindered meaningful links with experimental data. The standard Maximum Entropy (MaxEnt)-based method is limited by the quality of the computational data and the availability of prior information. Moreover, MaxEnt is not able to solve the inversion problem when the level of noise in the data is high. Here we introduce a novel learning model for the analytic continuation problem that uses an Adams-Bashforth residual neural network (AB-ResNet). The advantage of this deep learning network is that it is model independent and therefore does not require prior information about the quantity of interest, the spectral function. More importantly, the ResNet-based model achieves higher accuracy than MaxEnt for data with higher levels of noise. Finally, numerical examples show that the developed AB-ResNet is able to recover the spectral function with accuracy comparable to MaxEnt when the noise level is relatively small.
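For context, the underlying inverse problem is the classical fermionic analytic continuation relation, a Fredholm integral equation of the first kind that links the imaginary-time Green's function $ G(\tau) $ computed by quantum Monte Carlo to the spectral function $ A(\omega) $ (a standard form; the authors' precise conventions may differ):

$ G(\tau) = \int_{-\infty}^{\infty} \frac{e^{-\tau\omega}}{1 + e^{-\beta\omega}}\, A(\omega)\, d\omega, \qquad 0 \le \tau < \beta, $

where $ \beta $ denotes the inverse temperature. The near-exponential decay of the kernel in $ \omega $ is what renders the inversion severely ill-posed and so sensitive to noise in the $ G(\tau) $ data.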

Citation: Xuping Xie, Feng Bao, Thomas Maier, Clayton Webster. Analytic continuation of noisy data using Adams Bashforth residual neural network. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021088


Figure 1.  Illustration of the data-driven learning framework for analytic continuation
Figure 2.  Single-hidden-layer neural network structure
Figure 3.  Residual neural network block
Figure 4.  Multistep neural network architecture (a schematic code sketch follows the figure list)
Figure 5.  One data sample from the training set: $ G(\tau) $ (top left), its Legendre representation $ G_l $ (top right), and the target spectral density $ A(\omega) $ (bottom)
Figure 6.  Training performance of the AB1-ResNet, AB2-ResNet, and AB3-ResNet structures with data noise level $ 10^{-2} $
Figure 7.  Three different spectral density functions $ A(\omega) $ generated by AB3-ResNet and MaxEnt (dark line). The left column shows results for the dataset with noise level $ 10^{-2} $; the right column shows results for the dataset with noise level $ 10^{-3} $
Figure 8.  Comparison of the predicted spectral functions among the different AB-ResNet architectures
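To make the multistep architecture of Figure 4 concrete, the following is a minimal sketch of a two-step Adams-Bashforth residual network (AB2-ResNet) in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the layer widths, Tanh activation, pseudo-time step h, and the names ResidualUnit and AB2ResNet are all hypothetical choices. In the paper's setting the network input would be the Legendre representation $ G_l $ of the imaginary-time data, which enters through the common expansion $ G(\tau) \approx \sum_{l \ge 0} \frac{\sqrt{2l+1}}{\beta} G_l P_l(x(\tau)) $ with $ x(\tau) = 2\tau/\beta - 1 $ (the authors' normalization may differ), and the output a discretization of the spectral function $ A(\omega) $.

```python
# Minimal sketch of a two-step Adams-Bashforth residual network (AB2-ResNet).
# Illustrative only: widths, activation, and step size are assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn


class ResidualUnit(nn.Module):
    """Learned right-hand side f(x) used inside each Adams-Bashforth step."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class AB2ResNet(nn.Module):
    """Stack of AB2 updates: x_{n+1} = x_n + h * (3/2 f(x_n) - 1/2 f(x_{n-1}))."""

    def __init__(self, dim: int, hidden: int, depth: int, h: float = 0.1):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ResidualUnit(dim, hidden) for _ in range(depth)]
        )
        self.h = h

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bootstrap with one forward-Euler (AB1) step, since the AB2 update
        # needs the right-hand-side value from the previous step.
        f_prev = self.blocks[0](x)
        x = x + self.h * f_prev
        for block in self.blocks[1:]:
            f_curr = block(x)
            # Two-step Adams-Bashforth update, reusing f from the last step.
            x = x + self.h * (1.5 * f_curr - 0.5 * f_prev)
            f_prev = f_curr
        return x


# Example: map a 64-coefficient input to a 64-point output (same width here;
# a real model would add input/output projections if the sizes differ).
model = AB2ResNet(dim=64, hidden=128, depth=8)
spectral = model(torch.randn(32, 64))  # batch of 32 samples
```

An AB1 block reduces to the classical ResNet update $ x_{n+1} = x_n + h\, f(x_n) $; an AB3 variant would instead combine the last three right-hand-side evaluations with the standard Adams-Bashforth weights $ (23/12,\, -16/12,\, 5/12) $.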
