
Stochastic gradient descent algorithm for stochastic optimization in solving analytic continuation problems

  • * Corresponding author: Feng Bao

The first author is supported by NSF grant DMS-1720222.
Abstract
  • We propose a stochastic gradient descent (SGD) based optimization algorithm to solve the analytic continuation problem, in which real frequency spectra are extracted from imaginary time quantum Monte Carlo data. Analytic continuation is an ill-posed inverse problem, usually solved either by regularized optimization methods, such as the Maximum Entropy (MaxEnt) method, or by stochastic optimization methods. The main contribution of this work is to improve the performance of stochastic optimization approaches by introducing a supervised stochastic gradient descent algorithm that solves a flipped inverse system, which processes the random solutions obtained by the Fast and Efficient Stochastic Optimization Method (FESOM); a minimal sketch of the basic SGD step for the discretized problem is given below.

    Mathematics Subject Classification: Primary: 49N45; Secondary: 49M37.

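To make the setting concrete, the following is a minimal, illustrative sketch of stochastic gradient descent applied to a discretized analytic continuation problem: the data satisfy $ G(\tau) = \int K(\tau, \omega) A(\omega)\, d\omega $, and SGD minimizes the least-squares misfit over mini-batches of imaginary time points. This is not the paper's supervised SGD on the flipped inverse system; the fermionic kernel $ K(\tau, \omega) = e^{-\tau\omega}/(1 + e^{-\beta\omega}) $, the grids, the step size, and the synthetic noisy data standing in for quantum Monte Carlo input are all assumptions made for illustration.

```python
import numpy as np

# Minimal illustrative sketch, NOT the paper's supervised SGD on the
# flipped inverse system. Assumed: a fermionic kernel
#   K(tau, omega) = exp(-tau * omega) / (1 + exp(-beta * omega)),
# uniform grids, and synthetic noisy data in place of real QMC input.

beta = 10.0                           # inverse temperature (assumed)
tau = np.linspace(0.0, beta, 64)      # imaginary-time grid
omega = np.linspace(-8.0, 8.0, 200)   # real-frequency grid
domega = omega[1] - omega[0]

# Discretized kernel with the quadrature weight folded in, so G = K @ A.
K = np.exp(-tau[:, None] * omega[None, :]) / (1.0 + np.exp(-beta * omega[None, :]))
K *= domega

# Synthetic "true" spectrum and noisy data, for illustration only.
rng = np.random.default_rng(0)
A_true = np.exp(-0.5 * (omega - 1.0) ** 2) / np.sqrt(2.0 * np.pi)
G = K @ A_true + 1e-4 * rng.standard_normal(tau.size)

def sgd_spectrum(K, G, n_steps=20000, batch=8, lr=0.5, rng=rng):
    """Plain SGD on 0.5 * ||K A - G||^2 over mini-batches of tau points."""
    A = np.full(K.shape[1], 1.0 / (K.shape[1] * domega))  # flat initial guess
    for _ in range(n_steps):
        idx = rng.integers(0, K.shape[0], size=batch)     # random tau points
        r = K[idx] @ A - G[idx]                           # batch residual
        A -= lr * (K[idx].T @ r) / batch                  # stochastic gradient step
        A = np.maximum(A, 0.0)                            # enforce A(omega) >= 0
    return A

A_est = sgd_spectrum(K, G)
```

Because the inverse problem is ill-posed, a single run of this kind is sensitive to the noise realization, the step size, and the stopping point; this is precisely why the paper post-processes an ensemble of stochastic solutions (the FESOM samples) rather than trusting a single minimizer.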
  • Figure 1.  Example 1. True spectrum

    Figure 2.  Example 1. (a) FESOM samples; (b) FESOM estimation

    Figure 3.  Example 1. Estimated spectrum learned from FESOM samples

    Figure 4.  Example 2. True spectrum

    Figure 5.  Example 2. (a) FESOM estimation; (b) Estimated spectrum learned from FESOM samples

    Figure 6.  Example 2. Comparison between SGD and MaxEnt

    Figure 7.  True spectrum

    Figure 8.  Example 3. Estimations for the spectrum

    Figure 9.  Example 3. Spectrum with fine feature in positive frequency region

    Figure 10.  Example 3. (a) MaxEnt estimation for $ A_2 $; (b) Comparison of MaxEnt in estimating $ A_1 $ (red) and $ A_2 $ (blue)

    Figure 11.  Example 3. (a) SGD estimation for $ A_1 $; (b) SGD estimation for $ A_2 $

  • [1] F. Bao, Y. Tang, M. Summers, G. Zhang, C. Webster, V. Scarola and T. A. Maier, Fast and efficient stochastic optimization for analytic continuation, Physical Review B, 94 (2016), 125149. doi: 10.1103/PhysRevB.94.125149.
    [2] S. Fuchs, T. Pruschke and M. Jarrell, Analytic continuation of quantum Monte Carlo data by stochastic analytical inference, Physical Review E, 81 (2010), 056701. doi: 10.1103/PhysRevE.81.056701.
    [3] A. Georges, G. Kotliar, W. Krauth and M. J. Rozenberg, Self-consistent large-N expansion for normal-state properties of dilute magnetic alloys, Physical Review B, (1988), 2036.
    [4] A. Georges, G. Kotliar, W. Krauth and M. J. Rozenberg, Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions, Reviews of Modern Physics, 68 (1996), 13-125. doi: 10.1103/RevModPhys.68.13.
    [5] S. F. Gull and J. Skilling, Maximum entropy method in image processing, IEE Proceedings F, 131 (1984), 646-659.  doi: 10.1049/ip-f-1.1984.0099.
    [6] M. Jarrell and J. Gubernatis, Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data, Physics Reports, 269 (1996), 133-195. doi: 10.1016/0370-1573(95)00074-7.
    [7] Q. Li, C. Tai and W. E, Stochastic modified equations and dynamics of stochastic gradient algorithms I: Mathematical foundations, Journal of Machine Learning Research, 20 (2019), Paper No. 40, 47 pp.
    [8] A. S. Mishchenko, N. V. Prokof'ev and A. Sakamoto, Diagrammatic quantum Monte Carlo study of the Fröhlich polaron, Physical Review B, 62 (2000), 6317-6336. doi: 10.1103/PhysRevB.62.6317.
    [9] D. Needell, N. Srebro and R. Ward, Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm, Mathematical Programming, 155 (2016), 549-573. doi: 10.1007/s10107-015-0864-7.
    [10] N. V. Prokof'ev and B. V. Svistunov, Spectral analysis by the method of consistent constraints, JETP Lett., 97 (2013), 649-653. doi: 10.1134/S002136401311009X.
    [11] A. Sandvik, Stochastic method for analytic continuation of quantum Monte Carlo data, Physical Review B, (1998), 10287-10290.
    [12] I. Sato and H. Nakagawa, Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process, Proceedings of the 31st International Conference on Machine Learning, 2014, 982-990.
    [13] O. Shamir and T. Zhang, Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes, Proceedings of the 30th International Conference on Machine Learning, PMLR 28, 2013.
    [14] A. Shapiro and Y. Wardi, Convergence analysis of gradient descent stochastic algorithms, Journal of Optimization Theory and Applications, 91 (1996), 439-454. doi: 10.1007/BF02190104.
    [15] R. N. Silver, J. E. Gubernatis, D. S. Sivia and M. Jarrell, Spectral densities of the symmetric Anderson model, Physical Review Letters, (1990), 496-499.
    [16] R. Strack and D. Vollhardt, Dynamics of a hole in the t-J model with local disorder: Exact results for high dimensions, Physical Review B, (1992), 13852.
    [17] L. Wu, C. Ma and W. E, How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective, Advances in Neural Information Processing Systems (NeurIPS 2018), 2018, 8289-8298.
    [18] Y. Zhang, P. Liang and M. Charikar, A hitting time analysis of stochastic gradient Langevin dynamics, Conference on Learning Theory, 2017, 1980-2022.
