March 2020, 2(1): 1-17. doi: 10.3934/fods.2020001

Stochastic gradient descent algorithm for stochastic optimization in solving analytic continuation problems

Feng Bao 1,* and Thomas Maier 2

1. Department of Mathematics, Florida State University, Tallahassee, Florida, USA
2. Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA

* Corresponding author: Feng Bao

Published February 2020

Fund Project: The first author is supported by NSF grant DMS-1720222.

We propose a stochastic gradient descent (SGD) based optimization algorithm for solving the analytic continuation problem, in which real-frequency spectra are extracted from imaginary-time quantum Monte Carlo data. Analytic continuation is an ill-posed inverse problem that is usually solved by regularized optimization methods, such as the maximum entropy (MaxEnt) method, or by stochastic optimization methods. The main contribution of this work is to improve the performance of stochastic optimization approaches by introducing a supervised SGD algorithm that solves a flipped inverse system, which processes the random solutions produced by the Fast and Efficient Stochastic Optimization Method (FESOM).
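For context, the forward model behind this inverse problem is the Fredholm integral equation $G(\tau) = \int K(\tau, \omega) A(\omega)\, d\omega$, where $G$ is the imaginary-time Green's function measured by quantum Monte Carlo, $A$ is the sought non-negative, normalized spectrum, and $K(\tau, \omega) = e^{-\tau\omega}/(1 + e^{-\beta\omega})$ is the standard fermionic kernel. The sketch below is a minimal illustration of mini-batch SGD applied to the discretized least-squares misfit of this equation; it is not the paper's supervised flipped-system algorithm, and the grids, kernel choice, step size, and batch size are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): mini-batch SGD on the
# discretized misfit || K A dw - G ||^2 for the analytic continuation
# equation G(tau) = \int K(tau, w) A(w) dw.  A noise-weighted chi^2
# objective could be used instead; plain least squares keeps it short.

beta = 10.0                          # inverse temperature (assumed)
tau = np.linspace(0.0, beta, 51)     # imaginary-time grid
w = np.linspace(-10.0, 10.0, 201)    # real-frequency grid
dw = w[1] - w[0]

# Fermionic kernel K(tau, w) = exp(-tau*w) / (1 + exp(-beta*w)); these
# grids keep the exponents within float64 range.
K = np.exp(-np.outer(tau, w)) / (1.0 + np.exp(-beta * w))

def sgd_spectrum(G, n_iter=50_000, batch=8, lr=0.05, seed=0):
    """Fit A(w) >= 0 with \\int A dw = 1 to data G(tau) by mini-batch SGD."""
    rng = np.random.default_rng(seed)
    A = np.full(w.size, 1.0 / (w.size * dw))       # flat, normalized start
    for _ in range(n_iter):
        idx = rng.choice(tau.size, size=batch, replace=False)
        resid = K[idx] @ A * dw - G[idx]           # mini-batch residual
        A -= lr * 2.0 * dw * (K[idx].T @ resid)    # gradient step on ||.||^2
        np.clip(A, 0.0, None, out=A)               # positivity constraint
        A /= A.sum() * dw                          # sum rule \int A dw = 1
    return A

# Usage: synthesize noisy data from a known two-peak spectrum, then refit it.
rng = np.random.default_rng(1)
A_true = np.exp(-(w - 2.0) ** 2) + np.exp(-(w + 2.0) ** 2)
A_true /= A_true.sum() * dw
G_data = K @ A_true * dw + 1e-4 * rng.normal(size=tau.size)
A_fit = sgd_spectrum(G_data)
```

The clip-and-renormalize step after each update is a simple heuristic for enforcing positivity and the spectral sum rule, chosen here for brevity; it is not how the paper or FESOM treats these constraints.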

Citation: Feng Bao, Thomas Maier. Stochastic gradient descent algorithm for stochastic optimization in solving analytic continuation problems. Foundations of Data Science, 2020, 2 (1) : 1-17. doi: 10.3934/fods.2020001
References:

[1] F. Bao, Y. Tang, M. Summers, G. Zhang, C. Webster, V. Scarola and T. A. Maier, Fast and efficient stochastic optimization for analytic continuation, Physical Review B, 94 (2016), 125149. doi: 10.1103/PhysRevB.94.125149.

[2] S. Fuchs, T. Pruschke and M. Jarrell, Analytic continuation of quantum Monte Carlo data by stochastic analytical inference, Physical Review E, 81 (2010), 056701. doi: 10.1103/PhysRevE.81.056701.

[3] A. Georges, G. Kotliar, W. Krauth and M. J. Rozenberg, Self-consistent large-N expansion for normal-state properties of dilute magnetic alloys, Physical Review B, (1988), 2036.

[4] A. Georges, G. Kotliar, W. Krauth and M. J. Rozenberg, Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions, Reviews of Modern Physics, 68 (1996), 13-125. doi: 10.1103/RevModPhys.68.13.

[5] S. F. Gull and J. Skilling, Maximum entropy method in image processing, IEE Proceedings F, 131 (1984), 646-659. doi: 10.1049/ip-f-1.1984.0099.

[6] M. Jarrell and J. Gubernatis, Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data, Physics Reports, 269 (1996), 133-195. doi: 10.1016/0370-1573(95)00074-7.

[7] Q. Li, C. Tai and W. E, Stochastic modified equations and dynamics of stochastic gradient algorithms I: Mathematical foundations, Journal of Machine Learning Research, 20 (2019), Paper No. 40, 47 pp.

[8] A. S. Mishchenko, N. V. Prokof'ev and A. Sakamoto, Diagrammatic quantum Monte Carlo study of the Fröhlich polaron, Physical Review B, 62 (2000), 6317-6336. doi: 10.1103/PhysRevB.62.6317.

[9] D. Needell, N. Srebro and R. Ward, Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm, Mathematical Programming, 155 (2016), 549-573. doi: 10.1007/s10107-015-0864-7.

[10] N. V. Prokof'ev and B. V. Svistunov, Spectral analysis by the method of consistent constraints, JETP Letters, 97 (2013), 649-653. doi: 10.1134/S002136401311009X.

[11] A. Sandvik, Stochastic method for analytic continuation of quantum Monte Carlo data, Physical Review B, 57 (1998), 10287-10290.

[12] I. Sato and H. Nakagawa, Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process, Proceedings of the 31st International Conference on Machine Learning, 2014, 982-990.

[13] O. Shamir and T. Zhang, Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes, Proceedings of the 30th International Conference on Machine Learning (PMLR 28), 2013, 71-79.

[14] A. Shapiro and Y. Wardi, Convergence analysis of gradient descent stochastic algorithms, Journal of Optimization Theory and Applications, 91 (1996), 439-454. doi: 10.1007/BF02190104.

[15] R. N. Silver, J. E. Gubernatis, D. S. Sivia and M. Jarrell, Spectral densities of the symmetric Anderson model, Physical Review Letters, 65 (1990), 496-499.

[16] R. Strack and D. Vollhardt, Dynamics of a hole in the t-J model with local disorder: Exact results for high dimensions, Physical Review B, 46 (1992), 13852.

[17] L. Wu, C. Ma and W. E, How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective, Advances in Neural Information Processing Systems 31 (NeurIPS 2018), 2018, 8289-8298.

[18] Y. Zhang, P. Liang and M. Charikar, A hitting time analysis of stochastic gradient Langevin dynamics, Proceedings of the 2017 Conference on Learning Theory, 2017, 1980-2022.


Figure 1.  Example 1. True spectrum
Figure 2.  Example 1. (a) FESOM samples; (b) FESOM estimation
Figure 3.  Example 1. Estimated spectrum learned from FESOM samples
Figure 4.  Example 2. True spectrum
Figure 5.  Example 2. (a) FESOM estimation; (b) Estimated spectrum learned from FESOM samples
Figure 6.  Example 2. Comparison between SGD and MaxEnt
Figure 7.  Example 3. True spectrum
Figure 8.  Example 3. Estimations for the spectrum
Figure 9.  Example 3. Spectrum with fine feature in positive frequency region
Figure 10.  Example 3. (a) MaxEnt estimation for $ A_2 $; (b) Comparison of MaxEnt in estimating $ A_1 $ (red) and $ A_2 $ (blue)
Figure 11.  Example 3. (a) SGD estimation for $ A_1 $; (b) SGD estimation for $ A_2 $
