
A survey of numerical solutions for stochastic control problems: Some recent progress

Abstract. This paper surveys recent progress on numerical solutions for controlled switching diffusions. We begin by recalling the basics of switching diffusions and controlled switching diffusions, and then present regular and singular control formulations. The main objective is to survey recent advances in Markov chain approximation methods for solving stochastic control problems numerically. Applications in insurance, mathematical biology, epidemiology, and economics are discussed, and several numerical examples are provided for demonstration.

Mathematics Subject Classification: Primary: 93E20, 93E11, 93E99, 93E03; Secondary: 60J10, 60J60.
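The Markov chain approximation method (MCAM) referred to in the abstract replaces the controlled diffusion by a locally consistent controlled Markov chain on a grid and solves the resulting discrete dynamic programming equation. Below is a minimal sketch in the Kushner–Dupuis style for an illustrative one-dimensional problem $dX = u\,dt + \sigma\,dW$ with discounted running cost $X^2 + u^2$; the parameter values, grids, and zero-value absorbing boundary are assumptions made for demonstration, not taken from the paper.

```python
import numpy as np

# Markov chain approximation method (Kushner-Dupuis type) for an
# illustrative 1-D controlled diffusion dX = u dt + sigma dW,
# minimizing E int_0^inf e^{-beta t} (X_t^2 + u_t^2) dt.
# All parameters, grids, and the boundary treatment are assumptions.
h = 0.05                          # state discretization step
beta, sigma = 0.5, 0.4            # discount rate, diffusion coefficient
xs = np.linspace(-2.0, 2.0, 81)   # state grid
us = np.linspace(-1.0, 1.0, 21)   # finite control grid

V = np.zeros(len(xs))             # value function on the grid
for _ in range(5000):             # value iteration (fixed-point sweeps)
    candidates = []
    for u in us:
        Q = sigma**2 + h * abs(u)                      # normalizing factor
        dt = h**2 / Q                                  # interpolation interval
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q    # P(x -> x + h)
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q   # P(x -> x - h)
        running = (xs[1:-1]**2 + u**2) * dt            # discretized running cost
        candidates.append(running
                          + np.exp(-beta * dt) * (p_up * V[2:] + p_dn * V[:-2]))
    V_new = V.copy()              # boundary values stay 0 (absorbing, assumed)
    V_new[1:-1] = np.min(candidates, axis=0)           # minimize over controls
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

The transition probabilities and interpolation interval satisfy local consistency: the chain's one-step mean increment is $u\,dt$ and its variance is $\sigma^2 dt$ to first order in $h$, which is the key requirement behind the convergence theory for such schemes.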

Figure 1.  Value function and control strategies using MCAM (20000 iterations)

    Figure 2.  Value function and control strategies using MCAM (1000 iterations)

    Figure 3.  Value function and control strategies using hybrid deep learning MCAM (20000 iterations)

    Figure 4.  Value function and control strategies using hybrid deep learning MCAM (1000 iterations)

    Figure 5.  Value function and control strategies using MCAM (20000 iterations)

    Figure 6.  Value function and control strategies using hybrid deep learning MCAM (20000 iterations)

    Figure 7.  Value function and control strategies of regime 1 using MCAM (20000 iterations)

    Figure 8.  Value function and control strategies of regime 2 using MCAM (20000 iterations)

    Figure 9.  Comparison of final results between regimes 1 and 2

    Figure 10.  Value function and control strategies of regime 1 using hybrid deep learning MCAM (20000 iterations)

    Figure 11.  Value function and control strategies of regime 2 using hybrid deep learning MCAM (20000 iterations)

    Figure 12.  Comparison of final results between regimes 1 and 2

    Figure 13.  The value function and the optimal control type (1: harvesting of species 1, 2: harvesting of species 2, 0: non-harvesting)

    Figure 14.  The value function and the optimal control type (1: harvesting of species 1, 2: harvesting of species 2, 0: seeding)

    Figure 15.  The optimal seeding rates

    Figure 16.  Value function (left) and optimal control (right) when $ f(i, \alpha, \xi) = 1.75+ \alpha i+ \alpha i\xi^2 $

    Figure 17.  Value function (left) and optimal control (right) when $ f(i, \alpha, \xi) = 2.5+2\lambda_ \alpha (1-i)i + i\xi^2 $

    Figure 18.  Value function (left) and optimal control (right) when $ f(i, \alpha, \xi) = 2+ \alpha i+ \alpha i\xi $

