References
[1] Antonelli, F.: Backward-forward stochastic differential equations. Ann. Appl. Probab. 3, 777–793 (1993)
[2] Arora, R., Basu, A., Mianjy, P., Mukherjee, A.: Understanding deep neural networks with rectified linear units. In: Proceedings of the International Conference on Learning Representations (ICLR) (2018). https://openreview.net/forum?id=B1JrgWRW
[3] Barron, A.R.: Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Theory 39(3), 930–945 (1993)
[4] Beck, C., E, W., Jentzen, A.: Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations (2017). arXiv preprint arXiv:1709.05963
[5] Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)
[6] Bender, C., Steiner, J.: Least-squares Monte Carlo for backward SDEs. In: Carmona, R., Del Moral, P., Hu, P., Oudjane, N. (eds.) Numerical Methods in Finance. Springer Proceedings in Mathematics, vol. 12, pp. 257–289. Springer, Berlin (2012)
[7] Bender, C., Zhang, J.: Time discretization and Markovian iteration for coupled FBSDEs. Ann. Appl. Probab. 18(1), 143–177 (2008)
[8] Berner, J., Grohs, P., Jentzen, A.: Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations (2018). arXiv preprint arXiv:1809.03062
[9] Bölcskei, H., Grohs, P., Kutyniok, G., Petersen, P.: Optimal approximation with sparsely connected deep neural networks (2017). arXiv preprint arXiv:1705.01714
[10] Bouchard, B., Ekeland, I., Touzi, N.: On the Malliavin approach to Monte Carlo approximation of conditional expectations. Finance Stoch. 8(1), 45–71 (2004)
[11] Bouchard, B., Touzi, N.: Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stoch. Process. Appl. 111(2), 175–206 (2004)
[12] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2(4), 303–314 (1989)
[13] E, W., Han, J., Jentzen, A.: Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun. Math. Stat. 5(4), 349–380 (2017)
[14] E, W., Hutzenthaler, M., Jentzen, A., Kruse, T.: On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations. J. Sci. Comput. 79(3), 1534–1571 (2019)
[15] Funahashi, K.I.: On the approximate realization of continuous mappings by neural networks. Neural Netw. 2(3), 183–192 (1989)
[16] Grohs, P., Hornung, F., Jentzen, A., von Wurstemberger, P.: A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations (2018). arXiv preprint arXiv:1809.02362
[17] Han, J., Hu, R.: Deep fictitious play for finding Markovian Nash equilibrium in multi-agent games (2019). arXiv preprint arXiv:1912.01809
[18] Han, J., Jentzen, A., E, W.: Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci. 115(34), 8505–8510 (2018)
[19] Han, J., Lu, J., Zhou, M.: Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach (2020). arXiv preprint arXiv:2002.02600
[20] Henry-Labordere, P.: Counterparty risk valuation: A marked branching diffusion approach (2012). Available at SSRN 1995503. https://arxiv.org/abs/1203.2369
[21] Henry-Labordere, P., Oudjane, N., Tan, X., Touzi, N., Warin, X.: Branching diffusion representation of semilinear PDEs and Monte Carlo approximation. Ann. Inst. Henri Poincaré Probab. Stat. 55, 184–210 (2019). https://projecteuclid.org/euclid.aihp/1547802399
[22] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Netw. 2(5), 359–366 (1989)
[23] Huijskens, T., Ruijter, M., Oosterlee, C.: Efficient numerical Fourier methods for coupled forward–backward SDEs. J. Comput. Appl. Math. 296, 593–612 (2016)
[24] Hutzenthaler, M., Jentzen, A., Kruse, T., et al.: Multilevel Picard iterations for solving smooth semilinear parabolic heat equations (2016). arXiv preprint arXiv:1607.03295
[25] Hutzenthaler, M., Jentzen, A., Kruse, T., Nguyen, T.A.: A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations (2020). arXiv preprint arXiv:1901.10854
[26] Hutzenthaler, M., Jentzen, A., Kruse, T., Nguyen, T.A., von Wurstemberger, P.: Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations (2018). arXiv preprint arXiv:1807.01212
[27] Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning, vol. 37, pp. 448–456. JMLR.org, Lille (2015)
[28] Jentzen, A., Salimova, D., Welti, T.: A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients (2018). arXiv preprint arXiv:1809.07321
[29] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. In: Proceedings of the International Conference on Learning Representations (ICLR) (2015)
[30] Liang, S., Srikant, R.: Why deep neural networks for function approximation? In: Proceedings of the International Conference on Learning Representations (ICLR) (2017)
[31] Ma, J., Protter, P., Yong, J.: Solving forward-backward stochastic differential equations explicitly – a four step scheme. Probab. Theory Relat. Fields 98(3), 339–359 (1994)
[32] Ma, J., Yong, J.: Forward-Backward Stochastic Differential Equations and Their Applications. Springer, Berlin, Heidelberg (2007)
[33] Mhaskar, H.N., Poggio, T.: Deep vs. shallow networks: An approximation theory perspective. Anal. Appl. 14(6), 829–848 (2016)
[34] Milstein, G., Tretyakov, M.: Numerical algorithms for forward-backward stochastic differential equations. SIAM J. Sci. Comput. 28(2), 561–582 (2006)
[35] Pardoux, E., Peng, S.: Backward stochastic differential equations and quasilinear parabolic partial differential equations, pp. 200–217. Springer, Berlin (1992)
[36] Pardoux, E., Tang, S.: Forward-backward stochastic differential equations and quasilinear parabolic PDEs. Probab. Theory Relat. Fields 114(2), 123–150 (1999)
[37] Zhang, J.: A numerical scheme for BSDEs. Ann. Appl. Probab. 14(1), 459–488 (2004)