September 2017, 22(7): 2731-2761. doi: 10.3934/dcdsb.2017133

Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory

Christopher Rackauckas and Qing Nie

Department of Mathematics, Center for Complex Biological Systems, University of California, Irvine, CA 92697, USA

* Corresponding author

Received: August 2016. Revised: December 2016. Published: April 2017.

Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for deterministic systems, little progress has been made on their stochastic counterparts. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods of strong order 1.5 which contain a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next, we derive a general method, termed Rejection Sampling with Memory (RSwM), for rejecting time steps without losing information about the future Brownian path. This method performs the rejection sampling using a stack data structure at a cost of only a few floating-point operations. We show numerically that the methods generate statistically correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed-timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
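To make the bridging problem concrete (the notation follows the RSwM algorithm listings further down this page): when a proposed step of size $h$ carrying the Brownian increment $\Delta W$ is rejected and shortened to $qh$ with $0<q<1$, the increment over the shortened step is drawn from the Brownian bridge conditional distribution

$\Delta\tilde{W} \sim N\left(q\Delta W, (1-q)qh\right),$

and the leftover piece $\left((1-q)h, \Delta W - \Delta\tilde{W}\right)$ is pushed onto a stack so that the already-sampled future of the path is reused rather than discarded.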

Citation: Christopher Rackauckas, Qing Nie. Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2731-2761. doi: 10.3934/dcdsb.2017133
References:
[1]

R. L. Burden and J. D. Faires, Numerical Analysis, 9th ed., Brooks/Cole, Cengage Learning, Boston, MA, 2011.

[2]

K. Burrage and P. Burrage, General order conditions for stochastic Runge-Kutta methods for both commuting and non-commuting stochastic ordinary differential equation systems, Appl. Numer. Math., 28 (1998), 161-177. doi: 10.1016/S0168-9274(98)00042-7.

[3]

K. Burrage and P. M. Burrage, High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics, 22 (1996), 81-101. doi: 10.1016/S0168-9274(96)00027-X.

[4]

P. M. Burrage, Runge-Kutta Methods for Stochastic Differential Equations, Ph.D. thesis, The University of Queensland, Brisbane, 1999.

[5]

P. M. Burrage and K. Burrage, A variable stepsize implementation for stochastic differential equations, SIAM Journal on Scientific Computing, 24 (2002), 848-864. doi: 10.1137/S1064827500376922.

[6]

J. R. Cash and A. H. Karp, A variable order Runge-Kutta method for initial value problems with rapidly varying right-hand sides, ACM Transactions on Mathematical Software, 16 (1990), 201-222. doi: 10.1145/79505.79507.

[7]

F. Ceschino, Modification de la longueur du pas dans l'intégration numérique par les méthodes à pas liés, Chiffres, 4 (1961), 101-106.

[8]

J. R. Dormand and P. J. Prince, A family of embedded Runge-Kutta formulae, Journal of Computational and Applied Mathematics, 6 (1980), 19-26. doi: 10.1016/0771-050X(80)90013-3.

[9]

J. G. Gaines and T. J. Lyons, Variable step size control in the numerical solution of stochastic differential equations, SIAM Journal on Applied Mathematics, 57 (1997), 1455-1484. doi: 10.1137/S0036139995286515.

[10]

A. Ghasemi and S. Zahediasl, Normality tests for statistical analysis: A guide for non-statisticians, Int J Endocrinol Metab, 10 (2012), 486-489. doi: 10.5812/ijem.3505.

[11]

E. Hairer, S. P. Nørsett and G. Wanner, Solving Ordinary Differential Equations I, 2nd rev. ed., Springer Series in Computational Mathematics, Springer-Verlag, Berlin, New York, 1993.

[12]

T. Hong, K. Watanabe, C. H. Ta, A. Villarreal-Ponce, Q. Nie and X. Dai, An Ovol2-Zeb1 mutual inhibitory circuit governs bidirectional and multi-step transition between epithelial and mesenchymal states, PLoS Comput Biol, 11 (2015), e1004569. doi: 10.1371/journal.pcbi.1004569.

[13]

S. M. Iacus, Simulation and Inference for Stochastic Differential Equations: With R Examples (Springer Series in Statistics), Springer Publishing Company, Incorporated, 2008.

[14]

J. Kaneko, Explicit order 1.5 Runge-Kutta scheme for solutions of Itô stochastic differential equations, Sūrikaisekikenkyūsho Kōkyūroku, 932 (1995), 46-60.

[15]

P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer, Berlin, Heidelberg, 1992. doi: 10.1007/978-3-662-12616-5.

[16]

H. Lamba, An adaptive timestepping algorithm for stochastic differential equations, Journal of Computational and Applied Mathematics, 161 (2003), 417-430. doi: 10.1016/j.cam.2003.05.001.

[17]

T. Liggett, Continuous Time Markov Processes: An Introduction, American Mathematical Society, 2010. doi: 10.1090/gsm/113.

[18]

X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, 2006. doi: 10.1142/p473.

[19]

N. Hofmann, T. Müller-Gronbach and K. Ritter, Optimal approximation of stochastic differential equations by adaptive step-size control, Mathematics of Computation, 69 (2000), 1017-1034. doi: 10.1090/S0025-5718-99-01177-1.

[20]

B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, 2003. doi: 10.1007/978-3-642-14394-6.

[21]

W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes 3rd Edition: The Art of Scientific Computing, Cambridge University Press, 2007.

[22]

A. Rößler, Runge-Kutta methods for the strong approximation of solutions of stochastic differential equations, SIAM Journal on Numerical Analysis, 48 (2010), 922-952. doi: 10.1137/09076636X.

[23]

T. Rydén and M. Wiktorsson, On the simulation of iterated Itô integrals, Stochastic Processes and their Applications, 91 (2001), 151-168. doi: 10.1016/S0304-4149(00)00053-3.

[24]

L. F. Shampine, Some practical Runge-Kutta formulas, Mathematics of Computation, 46 (1986), 135-150. doi: 10.1090/S0025-5718-1986-0815836-3.

[25]

M. Wiktorsson, Joint characteristic function and simultaneous simulation of iterated Itô integrals for multiple independent Brownian motions, The Annals of Applied Probability, 11 (2001), 470-487. doi: 10.1214/aoap/1015345301.


Figure 1.  Outline of the adaptive SODE algorithm based on Rejection Sampling with Memory. This depicts the general schema for the Rejection Sampling with Memory (RSwM) algorithm. The green arrow depicts the path taken when the step is rejected, whereas the red arrow depicts the path taken when the step is accepted. The blue text denotes the steps specific to stochastic systems which ensure correct sampling properties; these are developed in Section 4.
Figure 2.  Adaptive algorithm Kolmogorov-Smirnov tests. Equation 31 was solved from $t=0$ to $t=2$. Shown are scatter plots of the p-values from Kolmogorov-Smirnov tests against the normal distribution: at the end of each run, a Kolmogorov-Smirnov test was performed on the values at the end of the Brownian path over 200 simulations. The $x$-axis is the absolute tolerance (with the relative tolerance set to zero) and the $y$-axis is the p-value of the Kolmogorov-Smirnov test.
Figure 3.  Adaptive algorithm correctness checks. QQ-plots of the distribution of the Brownian path at the end time $T=2$ over 10,000 paths. The $x$-axis gives the quantiles of the standard normal distribution while the $y$-axis gives the estimated quantiles for the distribution of $W_2 / \sqrt{2}$. Each row corresponds to an example equation (Examples 1-3, respectively) and each column to an algorithm (RSwM1-3, respectively). $\epsilon=10^{-4}$. The red dashed line represents $x=y$, meaning the quantiles of 10,000 standard normal random variables coincide with the quantiles of the sample. The blue circles are the quantile estimates for $W(2)/\sqrt{2}$, which should be distributed as a standard normal.
Figure 4.  Adaptive Algorithm Error Comparison on Equation 31, Equation 33, and Equation 35. Comparison of the rejection sampling with memory algorithms on Examples 1-3. The $x$-axis is $\epsilon$, the user-chosen local error tolerance. The $y$-axis is the average $l^{2}$ error along the path. These values are the means over 100 runs. Both axes are log-scaled.
Figure 5.  Adaptive Algorithm Timing Comparison. Comparison of the rejection sampling with memory algorithms. The $x$-axis is $\epsilon$, the user-chosen local error tolerance. The $y$-axis is the elapsed time (in seconds) for 1000 simulations. Both axes are log-scaled.
Figure 6.  Solution to the Lorenz system Equation 39 on $t\in[0,10]$ with additive noise and parameters $\alpha=10$, $\rho=28$, $\sigma =3$, and $\beta=8/3$ at varying tolerances. The system was solved using ESRK+RSwM3 with the relative tolerance fixed at zero and varying absolute tolerances.
Figure 7.  Stochastic cell differentiation model solutions. (A) Timeseries of the concentration of $[Ecad]$. The solution is plotted once every 100 accepted steps due to memory limitations. (B) Timeseries of the concentration of $[Vim]$. The solution is plotted once every 100 accepted steps due to memory limitations. (C) Accepted $h$ over time. The $h$ values were recorded every 100 accepted steps due to memory limitations. (D) Elapsed time of the Euler-Maruyama and ESRK+RSwM3 algorithms on the stochastic cell model. Each algorithm was used to solve the model on $t\in[0,1]$ 10,000 times. The elapsed times for the fixed-timestep methods at the given values of $h$ are shown as the solid lines, while the dashed line is the elapsed time for the adaptive method. The red circles denote the minimal $h$ and corresponding times for which the method did not show numerical instability in the ensemble.
Table 1.  ESRK1. Table (a) shows the legend for how the numbers in Table (b) correspond to the coefficient arrays/matrices $c^{(i)}, A^{(i)}, B^{(i)}, \alpha, \beta^{(i)}, \tilde{\alpha}$, and $\tilde{\beta}^{(i)}$. For example, these tables show that $\alpha^T = (\frac{1}{3},\frac{2}{3},0,0)$. Note that the matrices $A^{(i)}$ and $B^{(i)}$ are lower triangular since the method is explicit.
Algorithm 1 RSwM1
1: Set the values $\epsilon$, $h_{max}$, $T$
2: Set $t=0$, $W=0$, $Z=0$, $X=X_0$
3: Take an initial $h$, $\Delta Z,\Delta W \sim N(0,h)$
4: while $t<T$ do
5:     Attempt a step with $h$, $\Delta W$, $\Delta Z$ to calculate $X_{temp}$ according to (2)
6:     Calculate E according to (9)
7:     Update $q$ using (21)
8:     if $(q<1)$ then            ▷% Reject the Step
9:         Take $\Delta\tilde{W}\sim N\left(q\Delta W,(1-q)qh\right)$ and $\Delta\tilde{Z}\sim N\left(q\Delta Z,(1-q)qh\right)$
10:         Calculate $\overline{\Delta W}=\Delta W-\Delta\tilde{W}$ and $\overline{\Delta Z}=\Delta Z-\Delta\tilde{Z}$
11:         Push $\left(\left(1-q\right)h,\overline{\Delta W},\overline{\Delta Z}\right)$ into stack $S$
12:         Update $h := qh$
13:         Update $\Delta W := \Delta \tilde{W}$, $\Delta Z := \Delta \tilde{Z}$
14:     else            ▷% Accept the Step
15:         Update $t:=t+h$, $W:=W+\Delta W$, $Z:=Z+\Delta Z$, $X = X + X_{temp}$
16:         if ($S$ is empty) then
17:             Update $c:=\min(h_{max},qh)$, $h:=\min(c,T-t)$
18:             Take $\Delta W, \Delta Z \sim N(0,h)$
19:         else
20:             Pop the top of $S$ as $L$
21:             Update $h:=L_1$, $\Delta W := L_2$, $\Delta Z := L_3$
22:         end if
23:     end if
24: end while
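For readers who prefer runnable code, the following is a minimal Python sketch of RSwM1 for a scalar Itô SDE $dX = f(X,t)\,dt + g(X,t)\,dW$. It is not the paper's implementation: the embedded ESRK pair and its error estimate (2), (9) are replaced by an Euler-Maruyama step with a crude step-splitting error proxy, the $\Delta Z$ iterated-integral bookkeeping is omitted, and the step-size controller constants are illustrative.

import numpy as np

def step_with_error(f, g, X, t, h, dW):
    # One trial step plus a heuristic local-error proxy (a stand-in for the
    # paper's embedded ESRK estimate): compare a full Euler-Maruyama step
    # against two half steps driven by the same Brownian increment.
    X_full = X + f(X, t) * h + g(X, t) * dW
    X_half = X + f(X, t) * (h / 2) + g(X, t) * (dW / 2)
    X_two = X_half + f(X_half, t + h / 2) * (h / 2) + g(X_half, t + h / 2) * (dW / 2)
    return X_full, abs(X_full - X_two)

def rswm1(f, g, X0, T, eps=1e-4, h0=1e-3, h_max=0.1, q_max=1.125, rng=None):
    rng = rng or np.random.default_rng()
    t, X, W, h = 0.0, X0, 0.0, h0
    dW = rng.normal(0.0, np.sqrt(h))
    stack = []                      # pieces of the already-sampled future: (h_piece, dW_piece)
    path = [(t, X)]
    while t < T:
        X_new, E = step_with_error(f, g, X, t, h, dW)
        q = min(q_max, max(0.2, 0.9 * (eps / max(E, 1e-14)) ** 0.5))  # illustrative controller
        if q < 1:                   # reject: bridge the increment, store the remainder
            dW_tilde = rng.normal(q * dW, np.sqrt((1 - q) * q * h))
            stack.append(((1 - q) * h, dW - dW_tilde))
            h, dW = q * h, dW_tilde
        else:                       # accept
            t, X, W = t + h, X_new, W + dW
            path.append((t, X))
            if not stack:           # no stored future: draw fresh noise for the next step
                h = max(min(h_max, q * h, T - t), 0.0)
                dW = rng.normal(0.0, np.sqrt(h)) if h > 0 else 0.0
            else:                   # reuse the stored future piece
                h, dW = stack.pop()
    return np.array(path)

# Example usage on geometric Brownian motion:
# sol = rswm1(lambda X, t: 1.01 * X, lambda X, t: 0.87 * X, X0=0.5, T=1.0)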
Algorithm 2 RSwM3
1: Set the values $\epsilon$, $h_{max}$, $T$
2: Set $t=0$, $W=0$, $Z=0$, $X=X_0$
3: Take an initial $h$, $\Delta Z,\Delta W \sim N(0,h)$
4: while $t<T$ do
5:     Attempt a step with $h$, $\Delta W$, $\Delta Z$ to calculate $X_{temp}$ according to (2)
6:     Calculate E according to (9)
7:     Update $q$ using (21)
8:     if $(q<1)$ then            ▷% Reject the Step
9:         Set $h_{s}=0$, $\Delta W_{tmp}=0$, $\Delta Z_{tmp}=0$
10:         while $S_2$ is not empty do
11:             Pop the top of $S_2$ as $L$
12:             if $h_{s} + L_{1}<(1-q)h$ then
13:                 Update $h_{s} := h_{s} +L_{1}$
14:                 Update $\Delta W_{tmp} := \Delta W_{tmp} + L_{2}$, $\Delta Z_{tmp} := \Delta Z_{tmp} + L_{3}$
15:             else
16:                 Push $L$ onto $S_2$ and break
17:             end if
18:         end while
19:         Set $h_{K} = h - h_s$, $K_{2} = \Delta W - \Delta W_{tmp}$, $K_{3} = \Delta Z - \Delta Z_{tmp}$
20:         Set $q_{K} = \frac{qh}{h_K}$
21:         Take $\Delta \tilde{W} \sim N(q_K K_2,(1-q_K)q_K h_K)$
22:         Take $\Delta \tilde{Z} \sim N(q_K K_3,(1-q_K)q_K h_K)$
23:         Push $((1-q_K)h_{K},K_2 - \Delta \tilde{W}, K_3 - \Delta \tilde{Z})$ onto $S_1$
24:         Push $(q_K h_{K}, \Delta \tilde{W}, \Delta \tilde{Z})$ onto $S_2$
25:         Update $\Delta W := \Delta \tilde{W}$, $\Delta Z := \Delta \tilde{Z}$, $h := qh$
26:     else            ▷% Accept the Step
27:         Update $t:=t+h$, $W:=W+\Delta W$, $Z:=Z+\Delta Z$, $X = X + X_{temp}$
28:         Empty $S_2$
29:         Update $c:=\min(h_{max},qh)$, $h:=\min(c,T-t)$
30:         Set $h_{s}=0$, $\Delta W =0$, $\Delta Z =0$
31:         while $S_1$ is not empty do
32:             Pop the top of $S_1$ as $L$
33:             if ($h_s + L_1 < h$) then            ▷% Temporary not far enough
34:                 Update $h_{s} := h_{s} + L_{1}$, $\Delta W := \Delta W + L_{2}$, $\Delta Z := \Delta Z + L_{3}$
35:                 Push a copy of $L$ onto $S_2$
36:             else            ▷% Final part of step from stack
37:                 Set $q_{tmp}=\frac{h-h_{s}}{L_{1}}$
38:                 Let $\Delta \tilde{W} \sim N(q_{tmp}L_{2},(1-q_{tmp})q_{tmp}L_1)$
39:                 Let $\Delta \tilde{Z} \sim N(q_{tmp}L_{3},(1-q_{tmp})q_{tmp}L_1)$
40:                 Push $((1-q_{tmp})L_1,L_2 - \Delta \tilde{W}, L_3 - \Delta \tilde{Z})$ onto $S_1$
41:                 Update $\Delta W := \Delta W + \Delta \tilde{W}$, $\Delta Z := \Delta Z + \Delta \tilde{Z}$
42:             end if
43:         end while
                                ▷% Update for the last portion of the step; note this is zero if the final part came from the stack
44:         if ($h-h_s$ is not zero) then
45:             Let $\eta_W, \eta_{Z} \sim N(0,h-h_s)$
46:             Update $\Delta W = \Delta W + \eta_{W}$, $\Delta Z = \Delta Z + \eta_{Z}$
47:             Push $(h-h_s,\eta_{W},\eta_{Z})$ onto $S_2$
48:         end if
49:     end if
50: end while
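As a consistency check on lines 19-25 of RSwM3 (our own restatement of the bridging step, not an additional result): after draining $S_2$, the rejected portion of the step sits at the start of a combined interval of length $h_K = h - h_s$ whose total increment is $K_2$, so sampling the path at the interior point $qh$ is a Brownian bridge problem. With $q_K = qh/h_K$,

$\Delta\tilde{W} \sim N\left(q_K K_2, (1-q_K)q_K h_K\right),$

the piece $\left((1-q_K)h_K, K_2 - \Delta\tilde{W}\right)$ is pushed onto $S_1$ for later use, and the piece $\left(q_K h_K, \Delta\tilde{W}\right)$, whose length $q_K h_K = qh$ is exactly the shortened step, is pushed onto $S_2$.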
Table 2.  Fixed-timestep method failures and runtimes. The fixed-timestep algorithms and the ESRK+RSwM3 algorithm were used to solve the stochastic cell model on $t\in[0,1]$ 10,000 times. Failures were detected by checking whether the solution contained any NaN values; if any were detected during a run, the solver immediately ended that simulation and declared a failure (a minimal sketch of this check follows the table). The runtime for the adaptive algorithm (with no failures) was 186.81 seconds.
            Euler-Maruyama             Runge-Kutta Milstein       Rößler SRI
$\Delta t$  Fails (/10,000)  Time (s)  Fails (/10,000)  Time (s)  Fails (/10,000)  Time (s)
$2^{-16}$   137              133.35    131              211.92    78               609.27
$2^{-17}$   39               269.09    26               428.28    17               1244.06
$2^{-18}$   3                580.14    6                861.01    0                2491.37
$2^{-19}$   1                1138.41   1                1727.91   0                4932.70
$2^{-20}$   0                2286.35   0                3439.90   0                9827.16
$2^{-21}$   0                4562.20   0                6891.35   0                19564.16
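The NaN-based failure check described in the caption can be summarized by the following small Python sketch; `solve_once` is a hypothetical callable standing in for one simulation of the cell model with a given solver and step size.

import numpy as np

def count_failures(solve_once, n_runs=10_000):
    # Run the ensemble; a run "fails" if any NaN appears in its solution,
    # in which case that simulation is abandoned and counted.
    fails = 0
    for _ in range(n_runs):
        sol = np.asarray(solve_once())
        if np.any(np.isnan(sol)):
            fails += 1
    return fails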
Table 3.  $q_{max}$ determination tests. Equation 31, Equation 33, and Equation 35 were solved using the ESRK+RSwM3 algorithm with a relative tolerance of 0 and an absolute tolerance of $2^{-14}$. The elapsed time to solve a Monte Carlo simulation of 100,000 simulations to $T=1$ was recorded and the mean error at $T=1$ was calculated. The final column shows timing results for ESRK+RSwM3 on the stochastic cell model, solved with the same tolerance settings as in subsection 6.2, for a Monte Carlo simulation of 10,000 simulations.
             Example 1            Example 2            Example 3            Cell Model
$q_{max}$    Time (s)   Error     Time (s)   Error     Time (s)   Error     Time (s)
$1+2^{-5}$   37.00      2.57e-8   60.87      2.27e-7   67.71      3.42e-9   229.83
$1+2^{-4}$   34.73      2.82e-8   32.40      3.10e-7   66.68      3.43e-9   196.36
$1+2^{-3}$   49.14      3.14e-8   132.33     8.85e-7   65.94      3.44e-9   186.81
$1+2^{-2}$   39.33      3.59e-8   33.90      1.73e-6   66.33      3.44e-9   205.57
$1+2^{-1}$   38.22      3.82e-8   159.94     2.58e-6   68.16      3.44e-9   249.77
$1+2^{0}$    82.76      4.41e-8   34.41      3.58e-6   568.22     3.44e-9   337.99
$1+2^{1}$    68.16      9.63e-8   33.98      6.06e-6   87.50      3.22e-9   418.78
$1+2^{2}$    48.23      1.01e-7   33.97      9.74e-6   69.78      3.44e-9   571.59
Algorithm 3 Initial $h$ Determination
1: Let $d_0 = \Vert X_0 \Vert$
2: Calculate $f_0 = f(X_0,t)$ and $\sigma_0 = 3g(X_0,t)$
3: Let $d_1 = \Vert \mathrm{max}(|f_0 +\sigma_0|,|f_0 -\sigma_0|)\Vert$
4: if $d_0 < 10^{-5}$ or $d_{1} < 10^{-5}$ then
5:     Let $h_0 = 10^{-6}$
6: else
7:     Let $h_0 = 0.01(d_0/d_1)$
8: end if
9: Calculate an Euler step: $X_1 = X_0 + h_0f_0$
10: Calculate new estimates: $f_1 = f(X_1,t)$ and $\sigma_1 = 3g(X_1,t)$
11: Determine $\sigma_1^M = \mathrm{max}(|\sigma_0 +\sigma_1|,|\sigma_0 -\sigma_1|)$
12: Let $d_2 = \Vert \mathrm{max}(|f_1-f_0 + \sigma_1^M|,|f_1-f_0 - \sigma_1^M|)\Vert/h_0$
13: if $\mathrm{max}(d_1,d_2)<10^{-15}$ then
14:     Let $h_1 = \mathrm{max}(10^{-6},10^{-3}h_0)$
15: else
16:     Let $h_1 = 10^{-(2+\log_{10}(\mathrm{max}(d_1,d_2)))/(\mathrm{order}+0.5)}$
17: end if
18: Let $h=\mathrm{min}(100h_0,h_1)$
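A direct Python transcription of Algorithm 3 for vector-valued problems is sketched below; `f` and `g` are the drift and diffusion functions of the SDE, `order` is the strong order of the method (1.5 for the ESRK pair), and the evaluation time `t0` is held fixed as in the listing.

import numpy as np

def initial_stepsize(f, g, X0, t0=0.0, order=1.5):
    d0 = np.linalg.norm(X0)
    f0, sigma0 = f(X0, t0), 3.0 * g(X0, t0)
    d1 = np.linalg.norm(np.maximum(np.abs(f0 + sigma0), np.abs(f0 - sigma0)))
    # A conservative guess when the initial state or slope is tiny.
    h0 = 1e-6 if (d0 < 1e-5 or d1 < 1e-5) else 0.01 * d0 / d1
    X1 = X0 + h0 * f0                                   # one Euler step
    f1, sigma1 = f(X1, t0), 3.0 * g(X1, t0)
    sigma_max = np.maximum(np.abs(sigma0 + sigma1), np.abs(sigma0 - sigma1))
    d2 = np.linalg.norm(np.maximum(np.abs(f1 - f0 + sigma_max),
                                   np.abs(f1 - f0 - sigma_max))) / h0
    if max(d1, d2) < 1e-15:
        h1 = max(1e-6, 1e-3 * h0)
    else:
        h1 = 10.0 ** (-(2.0 + np.log10(max(d1, d2))) / (order + 0.5))
    return min(100.0 * h0, h1)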
Table 4.  SRA1. Table (a) shows the legend for how the numbers in Table (b) correspond to the coefficient arrays/matrices $c^{(i)}, A^{(i)}, B^{(i)}, \alpha,$ and $\beta^{(i)}$. Note that the matrices $A^{(i)}$ and $B^{(i)}$ are lower triangular since the method is explicit.
Algorithm 4 RSwM2
1: Set the values $\epsilon$, $h_{max}$, $T$
2: Set $t=0$, $W=0$, $Z=0$, $X=X_0$
3: Take an initial $h$, $\Delta Z,\Delta W \sim N(0,h)$
4: while $t<T$ do
5:     Attempt a step with $h$, $\Delta W$, $\Delta Z$ to calculate $X_{temp}$ according to (2)
6:     Calculate E according to (9)
7:     Update $q$ using (21)
8:     if $(q<1)$ then            ▷% Reject the Step
9:         Take $\Delta\tilde{W}\sim N\left(q\Delta W,(1-q)qh\right)$ and $\Delta\tilde{Z}\sim N\left(q\Delta Z,(1-q)qh\right)$
10:         Calculate $\overline{\Delta W}=\Delta W-\Delta\tilde{W}$ and $\overline{\Delta Z}=\Delta Z-\Delta\tilde{Z}$
11:         Push $\left(\left(1-q\right)h,\overline{\Delta W},\overline{\Delta Z}\right)$ into stack $S$
12:         Update $h := qh$
13:         Update $\Delta W := \Delta \tilde{W}$, $\Delta Z := \Delta \tilde{Z}$
14:     else            ▷% Accept the Step
15:         Update $t:=t+h$, $W:=W+\Delta W$, $Z:=Z+\Delta Z$, $X = X + X_{temp}$
16:         Update $c:=\min(h_{max},qh)$, $h:=\min(c,T-t)$
17:         Set $h_{s}=0$, $\Delta W =0$, $\Delta Z =0$
18:         while $S$ is not empty do
19:             Pop the top of $S$ as $L$
20:             if ($h_s + L_1 < h$) then            ▷% Temporary not far enough
21:                 Update $h_{s} := h_{s} + L_{1}$, $\Delta W := \Delta W + L_{2}$, $\Delta Z := \Delta Z + L_{3}$
22:             else            ▷% Final part of step from stack
23:                 Set $q_{tmp}=\frac{h-h_{s}}{L_{1}}$
24:                 Let $\Delta \tilde{W} \sim N(q_{tmp}L_{2},(1-q_{tmp})q_{tmp}L_1)$
25:                 Let $\Delta \tilde{Z} \sim N(q_{tmp}L_{3},(1-q_{tmp})q_{tmp}L_1)$
26:                 Push $((1-q_{tmp})L_1,L_2 - \Delta \tilde{W}, L_3 - \Delta \tilde{Z})$ onto $S$
27:                 Update $\Delta W := \Delta W + \Delta \tilde{W}$, $\Delta Z := \Delta Z + \Delta \tilde{Z}$
28:             end if
29:         end while
        ▷% Update for the last portion of the step; note this is zero if the final part came from the stack
30:         if ($h-h_s$ is not zero) then
31:             Let $\eta_W, \eta_{Z} \sim N(0,h-h_s)$
32:             Update $\Delta W = \Delta W + \eta_{W}$, $\Delta Z = \Delta Z + \eta_{Z}$
33:         end if
34:     end if
35: end while
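The part of RSwM2 that differs from RSwM1 is the accept branch (lines 17-33), in which stored future pieces are drained from the stack until the next step of length $h$ is covered, with the final piece split by a Brownian bridge. A minimal Python sketch of that inner loop for the scalar case is given below; the $\Delta Z$ component is omitted, and `stack` holds `(h_piece, dW_piece)` pairs as in the sketch after Algorithm 1.

import numpy as np

def accumulate_increment(stack, h, rng):
    # Build the Brownian increment for an accepted step of size h from the
    # stored future pieces; the unused remainder of the last popped piece is
    # pushed back, and fresh noise fills the gap if the stack runs out.
    h_s, dW = 0.0, 0.0
    while stack:
        h_piece, dW_piece = stack.pop()
        if h_s + h_piece < h:                          # piece lies entirely inside the step
            h_s += h_piece
            dW += dW_piece
        else:                                          # split the final piece with a bridge
            q_tmp = (h - h_s) / h_piece
            dW_tilde = rng.normal(q_tmp * dW_piece,
                                  np.sqrt((1.0 - q_tmp) * q_tmp * h_piece))
            stack.append(((1.0 - q_tmp) * h_piece, dW_piece - dW_tilde))
            return dW + dW_tilde
    return dW + rng.normal(0.0, np.sqrt(h - h_s))      # stack exhausted: fresh noise for the rest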
Table 5.  Table of Parameter Values for the Stochastic Cell Model.
Parameter Value Parameter Value Parameter Value Parameter Value
$J1_{200}$ 3 $J1_{E}$ 0.1 $K_{2}$ 1 $k0_{O}$ 0.35
$J2_{200}$ 0.2 $J2_{E}$ 0.3 $K_{3}$ 1 $kO_{200}$ 0.0002
$J1_{34}$ 0.15 $J1_{V}$ 0.4 $K_{4}$ 1 $kO_{34}$ 0.001
$J2_{34}$ 0.35 $J2_{V}$ 0.4 $K_{5}$ 1 $kd_{snail}$ 0.09
$J_{O}$ 0.9 $J3_{V}$ 2 $K_{TR}$ 20 $kd_{tgf}$ 0.1
$J0_{snail}$ 0.6 $J1_{zeb}$ 3.5 $K_{SR}$ 100 $kd_{zeb}$ 0.1
$J1_{snail}$ 0.5 $J2_{zeb}$ 0.9 $TGF0$ 0 $kd_{TGF}$ 0.9
$J2_{snail}$ 1.8 $K_{1}$ 1 $Tk$ 1000 $kd_{ZEB}$ 1.66
$k0_{snail}$ 0.0005 $k0_{zeb}$ 0.003 $\lambda_{1}$ 0.5 $k0_{TGF}$ 1.1
$n1_{200}$ 3 $n1_{snail}$ 2 $\lambda_{2}$ 0.5 $k0_{E}$ 5
$n2_{200}$ 2 $n1_{E}$ 2 $\lambda_{3}$ 0.5 $k0_{V}$ 5
$n1_{34}$ 2 $n2_{E}$ 2 $\lambda_{4}$ 0.5 $k_{E1}$ 15
$n2_{34}$ 2 $n1_{V}$ 2 $\lambda_{5}$ 0.5 $k_{E2}$ 5
$n_{O}$ 2 $n2_{V}$ 2 $\lambda_{SR}$ 0.5 $k_{V1}$ 2
$n0_{snail}$ 2 $n2_{zeb}$ 6 $\lambda_{TR}$ 0.5 $k_{V2}$ 5
$k_{O}$ 1.2 $k_{200}$ 0.02 $k_{34}$ 0.01 $k_{tgf}$ 0.05
$k_{zeb}$ 0.06 $k_{TGF}$ 1.5 $k_{SNAIL}$ 16 $k_{ZEB}$ 16
$kd_{ZR_{1}}$ 0.5 $kd_{ZR_{2}}$ 0.5 $kd_{ZR_{3}}$ 0.5 $kd_{ZR_{4}}$ 0.5
$kd_{ZR_{5}}$ 0.5 $kd_{O}$ 1.0 $kd_{200}$ 0.035 $kd_{34}$ 0.035
$kd_{SR}$ 0.9 $kd_{E}$ 0.05 $kd_{V}$ 0.05