# American Institute of Mathematical Sciences

September  2017, 22(7): 2731-2761. doi: 10.3934/dcdsb.2017133

## Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory

 Department of Mathematics, Center for Complex Biological Systems, University of California, Irvine, CA 92697, USA

* Corresponding author

Received: August 2016. Revised: December 2016. Published: April 2017.

Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate that requires no additional function evaluations. Next we derive a general method, termed Rejection Sampling with Memory (RSwM), for rejecting time steps without losing information about the future Brownian path. RSwM utilizes a stack data structure to perform the rejection sampling at a cost of only a few floating point operations. We show numerically that the methods generate statistically correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed-timestep algorithms. Our approach requires only the solution of a bridging problem and thus lends itself to natural generalizations beyond SDEs.
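Central to RSwM is the bridging problem mentioned above: once an increment $\Delta W$ over $[t, t+h]$ has been sampled and the step is rejected, the retried smaller step must be drawn conditionally on that endpoint, i.e. from a Brownian bridge. A minimal sketch of that conditional distribution (Python; the function names are ours, not the paper's):

```python
import random

def bridge_moments(w_left, dW, h, q):
    """Mean and variance of W(t + q*h) given W(t) = w_left and
    W(t + h) = w_left + dW, for 0 < q < 1 (Brownian bridge)."""
    mean = w_left + q * dW
    var = q * (1.0 - q) * h
    return mean, var

def sample_bridge(w_left, dW, h, q, rng=random):
    """Draw W(t + q*h) from the bridge distribution."""
    mean, var = bridge_moments(w_left, dW, h, q)
    return rng.gauss(mean, var ** 0.5)
```

When a step of size $h$ is rejected and retried with size $qh$, the increment over $[t, t+qh]$ is drawn from this bridge, and the unused remainder over $[t+qh, t+h]$ is retained so that future steps stay consistent with the already-sampled path.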

Citation: Christopher Rackauckas, Qing Nie. Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2731-2761. doi: 10.3934/dcdsb.2017133
##### Figures and Tables:
Outline of the adaptive SODE algorithm based on Rejection Sampling with Memory. This depicts the general schema for the Rejection Sampling with Memory algorithm (RSwM). The green arrow depicts the path taken when the step is rejected, whereas the red arrow depicts the path taken when the step is accepted. The blue text denotes steps specific to stochastic systems, needed to ensure correct sampling properties, which are developed in section 4.
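The reject/accept paths in the schema can be caricatured with a plain list used as a stack. The entry contents and recombination rules here are simplified stand-ins for the bookkeeping developed in section 4, not the paper's exact data layout:

```python
# Schematic of RSwM-style accept/reject bookkeeping with a stack.
# Each stack entry is (interval_length, brownian_increment) sampled
# for the future but not yet consumed by an accepted step.
stack = []

def reject_step(h, dW, q, dW_first):
    """On rejection: retry with the bridge sample dW_first over
    [t, t+q*h]; push the unused remainder onto the stack."""
    stack.append(((1.0 - q) * h, dW - dW_first))
    return q * h, dW_first

def accept_step():
    """On acceptance: reuse stored future increments first, if any;
    returns None when fresh increments must be sampled instead."""
    return stack.pop() if stack else None
```

Because only pushes and pops are needed per rejection, the extra cost is a few floating point operations, which is the source of the efficiency claim in the abstract.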
Adaptive algorithm Kolmogorov-Smirnov tests. Equation 31 was solved from $t=0$ to $t=2$. At the end of each run, a Kolmogorov-Smirnov test against the normal distribution was performed on the values at the end of the Brownian path over 200 simulations; the scatter plots show the resulting p-values. The $x$-axis is the absolute tolerance (with relative tolerance set to zero) and the $y$-axis is the p-value of the Kolmogorov-Smirnov test.
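The statistic behind this figure is the one-sample Kolmogorov-Smirnov distance against $N(0,1)$. A stdlib-only sketch (the p-value computation from the statistic is omitted here):

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(samples):
    """One-sample Kolmogorov-Smirnov statistic against N(0,1):
    the maximum gap between the empirical and normal CDFs."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = normal_cdf(x)
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d
```

In the paper's setting the tested samples are the normalized endpoint values of the adaptively sampled Brownian paths, which should be standard normal if the rejection sampling is statistically correct.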
Adaptive algorithm correctness checks. QQ-plots of the distribution of the Brownian path at the end time $T=2$ over 10,000 paths. The $x$-axis is the quantiles of the standard normal distribution while the $y$-axis is the estimated quantiles for the distribution of $W_2 / \sqrt{2}$. Each row is for an example equation (Examples 1-3 respectively), and each column is for the algorithms RSwM1-3 respectively. $\epsilon=10^{-4}$. The red dashed line represents $x=y$, meaning the quantiles of 10,000 standard normal random variables equate with the quantiles of the sample. The blue circles represent the quantile estimates for $W(2)/\sqrt{2}$, which should be distributed as a standard normal.
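The QQ-plot points compare sample quantiles to standard normal quantiles; points near the line $y=x$ indicate normality. A minimal stdlib sketch of the point computation (plotting omitted; the plotting-position convention $(i+0.5)/n$ is a common choice, not necessarily the one used for the figure):

```python
from statistics import NormalDist

def qq_points(samples):
    """Pairs (theoretical N(0,1) quantile, sample quantile) for a
    QQ-plot of the sample against the standard normal."""
    xs = sorted(samples)
    n = len(xs)
    return [(NormalDist().inv_cdf((i + 0.5) / n), x)
            for i, x in enumerate(xs)]
```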
Adaptive Algorithm Error Comparison on Equation 31, Equation 33, and Equation 35. Comparison of the rejection sampling with memory algorithms on Examples 1-3. Along the $x$-axis is $\epsilon$, the user-chosen local error tolerance. The $y$-axis is the average $l^{2}$ error along the path. These values are taken as the mean over 100 runs. Both axes are log-scaled.
Adaptive Algorithm Timing Comparison. Comparison of the rejection sampling with memory algorithms. Along the $x$-axis is $\epsilon$, the user-chosen local error tolerance. On the $y$-axis, time is plotted (in seconds). The values are the elapsed time for 1000 runs. Both axes are log-scaled.
Solution to the Lorenz system Equation 39 on $t\in[0,10]$ with additive noise and parameters $\alpha=10$, $\rho=28$, $\sigma =3$, and $\beta=8/3$ at varying tolerances. The system was solved using ESRK+RSwM3 with the relative tolerance fixed at zero and varying absolute tolerances.
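The figure is produced with the paper's ESRK+RSwM3 solver. As a fixed-step point of reference, a minimal Euler-Maruyama discretization of the same additive-noise Lorenz system is sketched below; note that in the caption $\alpha$ plays the usual Lorenz "sigma" role while $\sigma$ is the noise amplitude, and the exact additive-noise formulation is our assumption:

```python
import random

def lorenz_em(x0, t_end, h, alpha=10.0, rho=28.0, beta=8.0 / 3.0,
              sigma_noise=3.0, seed=0):
    """Fixed-step Euler-Maruyama for the Lorenz system with
    additive noise of amplitude sigma_noise on each component."""
    rng = random.Random(seed)
    x, y, z = x0
    n = round(t_end / h)
    path = [(0.0, x, y, z)]
    for i in range(1, n + 1):
        # independent Brownian increments, each N(0, h)
        dWx, dWy, dWz = (rng.gauss(0.0, h ** 0.5) for _ in range(3))
        x, y, z = (x + h * alpha * (y - x) + sigma_noise * dWx,
                   y + h * (x * (rho - z) - y) + sigma_noise * dWy,
                   z + h * (x * y - beta * z) + sigma_noise * dWz)
        path.append((i * h, x, y, z))
    return path
```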
Stochastic cell differentiation model solutions. (A) Timeseries of the concentration of $[Ecad]$. The solution is plotted once every 100 accepted steps due to memory limitations. (B) Timeseries of the concentration of $[Vim]$. The solution is plotted once every 100 accepted steps due to memory limitations. (C) Accepted $h$ over time. The $h$ values were taken every 100 accepted steps due to memory limitations. (D) Elapsed time of the Euler-Maruyama and ESRK+RSwM3 algorithms on the stochastic cell model. Each algorithm was used to solve the model on $t\in[0,1]$ 10,000 times. The elapsed times for the fixed-timestep methods at the given values of $h$ are shown as solid lines, while the dashed line is the elapsed time for the adaptive method. The red circles denote the minimal $h$ (and corresponding times) for which the method did not show numerical instability in the ensemble.
ESRK1. Table (a) shows the legend for how the numbers in Table (b) correspond to the coefficient arrays/matrices $c^{(i)}, A^{(i)}, B^{(i)}, \alpha, \beta^{(i)}, \tilde{\alpha}$, and $\tilde{\beta}^{(i)}$. For example, these tables show that $\alpha^T = (\frac{1}{3},\frac{2}{3},0,0)$. Note that the matrices $A^{(i)}$ and $B^{(i)}$ are lower triangular since the method is explicit.
Algorithm 1: RSwM1
1: Set the values $\epsilon$, $h_{max}$, $T$
2: Set $t=0$, $W=0$, $Z=0$, $X=X_0$
3: Take an initial $h$; sample $\Delta Z, \Delta W \sim N(0,h)$
4: while $t < T$ do …
Algorithm 2: RSwM3
1: Set the values $\epsilon$, $h_{max}$, $T$
2: Set $t=0$, $W=0$, $Z=0$, $X=X_0$
3: Take an initial $h$; sample $\Delta Z, \Delta W \sim N(0,h)$
4: while $t < T$ do …
Fixed timestep method failures and runtimes. The fixed-timestep algorithms and the ESRK+RSwM3 algorithm were used to solve the stochastic cell model on $t\in[0,1]$ 10,000 times. Failures were detected by checking whether the solution contained any NaN values; if any NaNs were detected during a run, the solver would immediately end the simulation and declare a failure. The runtime for the adaptive algorithm (with no failures) was 186.81 seconds.
| $\Delta t$ | Euler-Maruyama Fails (/10,000) | Euler-Maruyama Time (s) | Runge-Kutta Milstein Fails (/10,000) | Runge-Kutta Milstein Time (s) | Rößler SRI Fails (/10,000) | Rößler SRI Time (s) |
|---|---|---|---|---|---|---|
| $2^{-16}$ | 137 | 133.35 | 131 | 211.92 | 78 | 609.27 |
| $2^{-17}$ | 39 | 269.09 | 26 | 428.28 | 17 | 1244.06 |
| $2^{-18}$ | 3 | 580.14 | 6 | 861.01 | 0 | 2491.37 |
| $2^{-19}$ | 1 | 1138.41 | 1 | 1727.91 | 0 | 4932.70 |
| $2^{-20}$ | 0 | 2286.35 | 0 | 3439.90 | 0 | 9827.16 |
| $2^{-21}$ | 0 | 4562.20 | 0 | 6891.35 | 0 | 19564.16 |
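The failure check used to produce the "Fails" column amounts to scanning the solution for NaNs. A minimal sketch:

```python
import math

def run_failed(solution):
    """A run is declared a failure if any state value is NaN;
    solution is a sequence of state tuples."""
    return any(math.isnan(v) for state in solution for v in state)
```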
$q_{max}$ determination tests. Equation 31, Equation 33, and Equation 35 were solved using the ESRK+RSwM3 algorithm with a relative tolerance of 0 and an absolute tolerance of $2^{-14}$. The elapsed time to solve a Monte Carlo simulation of 100,000 simulations to $T=1$ was recorded, and the mean error at $T=1$ was calculated. The final column shows timing results for ESRK+RSwM3 on the stochastic cell model, solved with the same tolerance settings as in subsection 6.2, for a Monte Carlo simulation of 10,000 simulations.
| $q_{max}$ | Example 1 Time (s) | Example 1 Error | Example 2 Time (s) | Example 2 Error | Example 3 Time (s) | Example 3 Error | Cell Model Time (s) |
|---|---|---|---|---|---|---|---|
| $1+2^{-5}$ | 37.00 | 2.57e-8 | 60.87 | 2.27e-7 | 67.71 | 3.42e-9 | 229.83 |
| $1+2^{-4}$ | 34.73 | 2.82e-8 | 32.40 | 3.10e-7 | 66.68 | 3.43e-9 | 196.36 |
| $1+2^{-3}$ | 49.14 | 3.14e-8 | 132.33 | 8.85e-7 | 65.94 | 3.44e-9 | 186.81 |
| $1+2^{-2}$ | 39.33 | 3.59e-8 | 33.90 | 1.73e-6 | 66.33 | 3.44e-9 | 205.57 |
| $1+2^{-1}$ | 38.22 | 3.82e-8 | 159.94 | 2.58e-6 | 68.16 | 3.44e-9 | 249.77 |
| $1+2^{0}$ | 82.76 | 4.41e-8 | 34.41 | 3.58e-6 | 568.22 | 3.44e-9 | 337.99 |
| $1+2^{1}$ | 68.16 | 9.63e-8 | 33.98 | 6.06e-6 | 87.50 | 3.22e-9 | 418.78 |
| $1+2^{2}$ | 48.23 | 1.01e-7 | 33.97 | 9.74e-6 | 69.78 | 3.44e-9 | 571.59 |
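$q_{max}$ enters the step-size controller as the ceiling on the growth factor $h_{new}/h$. A common proportional controller of the deterministic type, using the method's order 1.5 in the exponent, can illustrate the role of the cap; the safety factor and shrink floor $q_{min}$ below are conventional choices, not values taken from the paper:

```python
def next_step(h, err, tol, order=1.5, qmax=1.125, qmin=0.2, safety=0.9):
    """Return (accepted, proposed next step size). Growth of the
    step size is capped at qmax, shrinkage at qmin."""
    if err == 0.0:
        q = qmax  # no detectable error: grow as much as allowed
    else:
        q = min(qmax, max(qmin, safety * (tol / err) ** (1.0 / (order + 0.5))))
    return err <= tol, q * h
```

The table shows the tradeoff this cap controls: small $q_{max}$ wastes steps by growing too slowly, while large $q_{max}$ causes overshoots and rejections.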
Algorithm 3: Initial $h$ Determination
1: Let $d_0 = \Vert X_0 \Vert$
2: Calculate $f_0 = f(X_0,t)$ and $\sigma_0 = 3g(X_0,t)$
3: Let $d_1 = \Vert \max(|f_0 +\sigma_0|,|f_0 -\sigma_0|)\Vert$
4: if $d_0 < 10^{-5}$ or $d_{1} < 10^{-5}$ then
5: Let $h_0 = 10^{-6}$
6: else
7: Let $h_0 = 0.01(d_0/d_1)$
8: end if
9: Calculate an Euler step: $X_1 = X_0 + h_0 f_0$
10: Calculate new estimates: $f_1 = f(X_1,t)$ and $\sigma_1 = 3g(X_1,t)$
11: Determine $\sigma_1^M = \max(|\sigma_0 +\sigma_1|,|\sigma_0 -\sigma_1|)$
12: Let $d_2 = \Vert \max(|f_1-f_0 + \sigma_1^M|,|f_1-f_0 - \sigma_1^M|)\Vert/h_0$
13: if $\max(d_1,d_2)<10^{-15}$ then
14: Let $h_1 = \max(10^{-6},10^{-3}h_0)$
15: else
16: Let $h_1 = 10^{-(2+\log_{10}(\max(d_1,d_2)))/(\mathrm{order}+0.5)}$
17: end if
18: Let $h=\min(100h_0,h_1)$
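A scalar Python transcription of Algorithm 3 (the vector norms reduce to absolute values in one dimension; the exponent grouping in line 16 is read as $-(2+\log_{10}(\max(d_1,d_2)))/(\mathrm{order}+0.5)$):

```python
import math

def initial_h(f, g, X0, t=0.0, order=1.5):
    """Initial step size heuristic of Algorithm 3, scalar version,
    for dX = f(X,t) dt + g(X,t) dW."""
    d0 = abs(X0)
    f0 = f(X0, t)
    s0 = 3.0 * g(X0, t)
    d1 = max(abs(f0 + s0), abs(f0 - s0))
    h0 = 1e-6 if (d0 < 1e-5 or d1 < 1e-5) else 0.01 * (d0 / d1)
    X1 = X0 + h0 * f0                      # Euler predictor step
    f1 = f(X1, t)
    s1 = 3.0 * g(X1, t)
    sM = max(abs(s0 + s1), abs(s0 - s1))
    d2 = max(abs(f1 - f0 + sM), abs(f1 - f0 - sM)) / h0
    if max(d1, d2) < 1e-15:
        h1 = max(1e-6, 1e-3 * h0)
    else:
        h1 = 10.0 ** (-(2.0 + math.log10(max(d1, d2))) / (order + 0.5))
    return min(100.0 * h0, h1)
```

With this grouping, line 16 is equivalent to $h_1 = (0.01/\max(d_1,d_2))^{1/(\mathrm{order}+0.5)}$, the stochastic analogue of the classical starting-step heuristic.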
SRA1. Table (a) shows the legend for how the numbers in Table (b) correspond to the coefficient arrays/matrices $c^{(i)}, A^{(i)}, B^{(i)}, \alpha,$ and $\beta^{(i)}$. Note that the matrices $A^{(i)}$ and $B^{(i)}$ are lower triangular since the method is explicit.
Algorithm 4: RSwM2
1: Set the values $\epsilon$, $h_{max}$, $T$
2: Set $t=0$, $W=0$, $Z=0$, $X=X_0$
3: Take an initial $h$; sample $\Delta Z, \Delta W \sim N(0,h)$
4: while $t < T$ do …
Table of Parameter Values for the Stochastic Cell Model.
| Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value |
|---|---|---|---|---|---|---|---|
| $J1_{200}$ | 3 | $J1_{E}$ | 0.1 | $K_{2}$ | 1 | $k0_{O}$ | 0.35 |
| $J2_{200}$ | 0.2 | $J2_{E}$ | 0.3 | $K_{3}$ | 1 | $kO_{200}$ | 0.0002 |
| $J1_{34}$ | 0.15 | $J1_{V}$ | 0.4 | $K_{4}$ | 1 | $kO_{34}$ | 0.001 |
| $J2_{34}$ | 0.35 | $J2_{V}$ | 0.4 | $K_{5}$ | 1 | $kd_{snail}$ | 0.09 |
| $J_{O}$ | 0.9 | $J3_{V}$ | 2 | $K_{TR}$ | 20 | $kd_{tgf}$ | 0.1 |
| $J0_{snail}$ | 0.6 | $J1_{zeb}$ | 3.5 | $K_{SR}$ | 100 | $kd_{zeb}$ | 0.1 |
| $J1_{snail}$ | 0.5 | $J2_{zeb}$ | 0.9 | $TGF0$ | 0 | $kd_{TGF}$ | 0.9 |
| $J2_{snail}$ | 1.8 | $K_{1}$ | 1 | $Tk$ | 1000 | $kd_{ZEB}$ | 1.66 |
| $k0_{snail}$ | 0.0005 | $k0_{zeb}$ | 0.003 | $\lambda_{1}$ | 0.5 | $k0_{TGF}$ | 1.1 |
| $n1_{200}$ | 3 | $n1_{snail}$ | 2 | $\lambda_{2}$ | 0.5 | $k0_{E}$ | 5 |
| $n2_{200}$ | 2 | $n1_{E}$ | 2 | $\lambda_{3}$ | 0.5 | $k0_{V}$ | 5 |
| $n1_{34}$ | 2 | $n2_{E}$ | 2 | $\lambda_{4}$ | 0.5 | $k_{E1}$ | 15 |
| $n2_{34}$ | 2 | $n1_{V}$ | 2 | $\lambda_{5}$ | 0.5 | $k_{E2}$ | 5 |
| $n_{O}$ | 2 | $n2_{V}$ | 2 | $\lambda_{SR}$ | 0.5 | $k_{V1}$ | 2 |
| $n0_{snail}$ | 2 | $n2_{zeb}$ | 6 | $\lambda_{TR}$ | 0.5 | $k_{V2}$ | 5 |
| $k_{O}$ | 1.2 | $k_{200}$ | 0.02 | $k_{34}$ | 0.01 | $k_{tgf}$ | 0.05 |
| $k_{zeb}$ | 0.06 | $k_{TGF}$ | 1.5 | $k_{SNAIL}$ | 16 | $k_{ZEB}$ | 16 |
| $kd_{ZR_{1}}$ | 0.5 | $kd_{ZR_{2}}$ | 0.5 | $kd_{ZR_{3}}$ | 0.5 | $kd_{ZR_{4}}$ | 0.5 |
| $kd_{ZR_{5}}$ | 0.5 | $kd_{O}$ | 1.0 | $kd_{200}$ | 0.035 | $kd_{34}$ | 0.035 |
| $kd_{SR}$ | 0.9 | $kd_{E}$ | 0.05 | $kd_{V}$ | 0.05 | | |
