EFFICIENT NUMERICAL METHODS FOR ELLIPTIC OPTIMAL CONTROL PROBLEMS WITH RANDOM COEFFICIENT

Abstract. Efficient numerical methods for solving Poisson equation constrained optimal control problems with random coefficient are discussed in this paper. By applying the finite element method and the Monte Carlo approximation, the original optimal control problem is discretized and transformed into an optimization problem. Taking advantage of the separable structure, Algorithm 1 is proposed for solving the problem, where an alternating direction method of multipliers is used. Both the computational and the storage costs of this algorithm are very high. In order to reduce the computational cost, Algorithm 2 is proposed, where the multi-modes expansion is introduced and applied. Further, to reduce the storage cost, we propose Algorithm 3 based on Algorithm 2. The main idea is that the random term is shifted to the objective functional, where it can be computed in advance. Therefore, we only need to solve a deterministic optimization problem, which reduces both costs significantly. Moreover, convergence analyses of the proposed algorithms are established, and numerical simulations are carried out to test their performance.

1. Introduction. It is well known that many physical and engineering problems can be described by optimal control problems (OCPs) with partial differential equation (PDE) or random/stochastic partial differential equation (RPDE/SPDE) constraints, such as the optimal heat source problem, the optimal design of aircraft, and the weather forecasting problem. For deterministic PDE optimal control problems, there exist plenty of research results, and we refer to [15,21,26] for more details. However, only a few works have been developed for RPDE/SPDE optimal control problems.
Generally speaking, the numerical methods for RPDE/SPDE constrained optimal control problems are divided into two categories: sample methods and projection methods. For the former, Cao et al. proposed an improved Monte Carlo method using a variance reduction technique to deal with optimal control problems under random Burgers constraints in 2003 [7]. Based on the gradient error, Kouri et al. combined the trust region method and the stochastic collocation method to propose an adaptive collocation method, which is superior to the traditional Newton conjugate gradient method [17]. For the latter, Xiu developed a polynomial chaos expansion for the target functional, which transforms the original SPDE constrained optimal control problem into a nonlinear problem with finitely many coefficients [25]. However, no theoretical guarantee was given there. In 2011, Gunzburger and his collaborators transformed the SPDE constrained optimal control problem into a KKT nonlinear system, and then proved the existence of the solution of the KKT system. By using the Karhunen-Loève (KL) expansion or a proper orthogonal decomposition, they obtained a series of deterministic optimization systems numerically [13]. In 2013, Kunoth and Schwab used the generalized polynomial chaos (gPC) expansion to handle RPDE/SPDE optimal control problems [18]. Recently, a Taylor approximation and variance reduction method was proposed by Chen et al. for PDE-constrained optimal control under uncertainty [8].
In this paper, we mainly concentrate on efficient numerical methods for the optimal control problem constrained by the Poisson equation with random coefficient. The major challenges in dealing with the randomness are: (I) how to handle the uncertainties appearing in the constraint and the objective functional, and formulate a discretized optimization problem; (II) how to solve the resulting optimization problem with an efficient algorithm. We will analyze these challenges one by one, and present the corresponding solutions.
The Monte Carlo (MC) method [5,7], the stochastic collocation (SC) method [1,2], the stochastic Galerkin (SG) method [22,23], and the polynomial chaos (PC) expansion [16] are among the commonly used methods for dealing with the first issue. Because of the generality and parallelizability of the MC method, we will apply it to handle the random sample space. Further, using the finite element method (FEM) to approximate the physical space, we obtain a discretized optimization problem. This is the most natural idea, but it faces huge difficulties in computation and storage when a high accuracy solution is required. The KL expansion is a spectral representation of the covariance function, consisting of its eigenvalues and eigenfunctions. This method is widely used in the RPDE/SPDE field when the eigenvalues decay exponentially, or for non-Gaussian spatially dependent randomness with large correlation length; otherwise it runs into the curse of dimensionality [3]. As an improvement, the multi-modes expansion (MME) [11] will be considered: the state variable is first approximated by the MME, and then the MC and FEM discretizations are carried out. By exploiting the relationships among the terms of the MME, we can transform the discretized optimization problem into a quadratic programming problem with deterministic linear constraints. In this paper, we will follow this approach to resolve the first challenge.
The second challenge is closely related to the first one: if the first challenge is handled well, the second becomes much less difficult. For solving the resulting optimization problem there exist many effective algorithms, such as the gradient descent method, the conjugate gradient method, Newton's method, the regularization method, and the alternating direction method of multipliers (ADMM); the interested reader is referred to [13,21] and references therein for the rich literature. Because the state and control of the resulting optimization problem are separable, ADMM is applied in this paper; it has been used for optimal control problems with deterministic PDE constraints and has displayed satisfactory results [14,26]. Moreover, to the best of our knowledge, there is no complete convergence analysis for elliptic RPDE-constrained optimization problems. The existing results merely present the difference between the exact solution and the discretized solution [13,19], while the error incurred in solving the optimization problem is ignored. Here, based on the convergence analysis of ADMM in [6] and the techniques used for the first challenge, a global error estimate in terms of the deviation between objective functionals will be established for our proposed algorithms.
For clarity of structure, the efficient algorithm proposed in this paper is developed in three steps. Firstly, the MC, the FEM, and the ADMM are used sequentially to formulate a primitive algorithm, Algorithm 1, which needs to compute M + 1 inverses of N × N matrices and store O(M + 1) matrices (see Section 3), where M and N denote the number of samples in probability space and the degrees of freedom in physical space, respectively. To reduce the computational cost, we propose Algorithm 2, where the MME technique is introduced. Using this algorithm, we only need to compute two matrix inverses, but still need to store O(M + 1) matrices (see Section 4). As the last step, by applying the recurrence relation of the MME, the random term in the constraint is shifted to the objective functional. Then, based on Algorithm 2, we propose Algorithm 3, in which we only need to compute two matrix inverses and store O(1) matrices (see Section 5). Naturally, Algorithm 3 reduces the computational and storage costs significantly.
The outline of this paper is as follows. In Section 2, the model problem, the well-posedness of the solution, and the first order optimal condition of the model problem are introduced. Algorithms 1, 2, and 3, together with their convergence analyses, are presented in Sections 3, 4, and 5, respectively. In Section 6, numerical simulations are carried out to test the performance of the proposed algorithms. Finally, some concluding remarks are given in Section 7.
2. Optimal control problem with random coefficient. The model problem, the well-posedness of the studied model, and its corresponding first order optimal condition will be presented in this section.
2.1. The model under consideration. For ease of description, most of the notations and definitions used in this paper are adopted from the reference [9]. We consider the model problem on a polygon D ⊂ R^2 with boundary ∂D. L^p(D), L^p(∂D), H^p(D), and H^p(∂D) denote the Lebesgue integrable function spaces and Hilbert spaces on D and ∂D, respectively (1 ≤ p ≤ +∞). In addition, we introduce a standard probability space (Θ, F, P), and a Bochner space L^2(Θ, X), where X stands for a Hilbert space. Using the notions above, the Poisson equation constrained optimal control problem with random coefficient and Dirichlet boundary condition can be written as

(OCP)  min_u J(y, u) = (1/2) E[ ||y − y_d||^2_{L^2(D)} ] + (γ/2) ||u − u_d||^2_{L^2(D)},
       s.t.  −∇·(α(ξ, x)∇y(ξ, x)) = u(x) in D,   y(ξ, x) = g(x) on ∂D,  a.s. ξ ∈ Θ,

where y_d ∈ L^2(D) and u_d ∈ H^2(D) stand for the desirable state variable and the control variable respectively, which are given in advance, and γ > 0 is a weight parameter. The random diffusion coefficient α(ξ, x) belongs to L^2(Θ, D), where ξ represents a random sample in Θ.
The expectation of ρ(ξ) is defined by E[ρ(ξ)] = ∫_Θ ρ(ξ) dP(ξ), and the boundary condition g ∈ L^2(∂D) is a known function, which is set to 0 in this paper for simplicity.
The corresponding variational formulation of the Poisson equation with random coefficient is given by: find y ∈ L^2(Θ, H^1_0(D)) such that

(2)  a(y, v) = (u, v)_{L^2(D)}  for all v ∈ H^1_0(D), a.s. ξ ∈ Θ,

with a(y, v) = (α(ξ, x)∇y, ∇v)_{L^2(D)}. In order to ensure the well-posedness of the variational formulation (2), the following assumptions on α are needed.

2.2. First order optimal condition. For simplicity, the diffusion coefficient α(ξ, x) in (OCP) is set to be

(5)  α(ξ, x) = 1 + εσ(ξ, x);

other cases can be treated similarly. Here, σ(ξ, x) ∈ L^2(Θ, L^∞(D)) denotes a random process satisfying P{ξ ∈ Θ : ||σ(ξ, x)||_{L^∞(D)} ≤ 1} = 1, and ε stands for the magnitude of the perturbation. It is not difficult to verify that the random coefficient α defined in (5) satisfies Assumption 1. Therefore, according to Lemma 2.1, there exists a linear operator T : L^2(D) → L^2(Θ, H^1(D)) such that y = Tu, where u and y satisfy the variational equation (2). Further, it follows from the inequality (3) that a corresponding estimate holds, where j = 2 or 4, and C_1 is a constant depending on the bound of α. Therefore, if the state variable space is Y = L^2(Θ, H^1_0(D)) and the control variable space is U = L^2(D), the optimal control problem (OCP) can be rewritten in the reduced form

(P)  min_{u ∈ U} J(u) = F(Tu, u).

Here, J is a functional from L^2(D) to R, and the existence and uniqueness results of the solution of (8) can be found in [16,19]. Finally, let u* be the optimal solution of (8). By applying the definition of the Fréchet derivative [9], the linearity of the expectation operator E, and the solution operator T defined in (6), the first order optimal condition of (P) can be written as

(9)  E[T*(Tu* − y_d)] + γ(u* − u_d) = 0,

where T* is the adjoint operator of T. Now the optimal control problem (P) and its first order optimal condition have been presented. In the remainder of the paper, we will propose three algorithms for solving the optimal control problem (P) incrementally, and compare them with each other.
3. FEM-MC-ADMM for the model problem. The first algorithm studied in this paper is presented in this section. The motivation comes from the finite element method (FEM), the Monte Carlo approximation (MC), and the alternating direction method of multipliers (ADMM). We will apply these techniques sequentially to solve the optimal control problem (P), and present a global error estimate in the form of the deviation between the objective functionals.
3.1. FEM discretization. Let K_h be a regular triangulation of the spatial domain D; then the linear finite element function space is defined by V_h = {v_h ∈ C(D̄) : v_h|_K ∈ P_1 for all K ∈ K_h}, where P_1 is the space of polynomials of degree less than or equal to 1. Further, let R = R(x) = (φ_1(x), ..., φ_N(x)) denote a vector-valued function consisting of all the basis functions of V_h; the state variable y(ξ, x) ∈ L^2(Θ, H^1(D)) can then be approximated by

(10)  y_h(ξ, x) = R(x) y(ξ),

which belongs to Y_h = L^2(Θ, V_h). Here, the vector y(ξ) = (y_1(ξ), ..., y_N(ξ))^T ∈ R^N for any fixed ξ ∈ Θ.
Based on the above notations, for a fixed sample ξ ∈ Θ and a given control variable u ∈ L^2(D), the FEM approximation of the constraint equation of (OCP) can be described as: find y_h(ξ, ·) ∈ V_h such that a(y_h, v_h) = (u, v_h)_{L^2(D)} for all v_h ∈ V_h. This means there exists an operator T_h : L^2(D) → L^2(Θ, V_h) with y_h = T_h u. It is easy to see that T_h is a linear operator.
The proof, which is omitted here, follows from the coercivity and continuity of the bilinear form a(u, v); the interested reader is referred to [15]. Now, we generalize the stability estimate of the deterministic problem to the random coefficient case. It is well known that the FEM approximation of (2) is given by (13). By applying Assumption 1 and the estimate (12), we can derive: for any control variable u ∈ L^2(D), if y_h is the FEM solution of (13), then the corresponding stability estimate holds, where j = 2, 4, and C_2 is a positive constant depending on α and j.
where j = 2, 4, and C 3 is a positive constant independent of the mesh size h.
Similar to (10), the control variable u can be approximated by

(16)  u_h(x) = R(x) u,

where u = (u_1, ..., u_N)^T ∈ R^N. Substituting the state variable y and the control variable u in (P) by y_h and u_h, respectively, we get a stochastic optimization problem, denoted by (P_{1h}).

3.2. MC approximation.
In this subsection, we apply the standard MC method to approximate the expectation E in (P_{1h}). Let {ξ_j}_{j=1}^M (M ≫ 1) be independent and identically distributed samples drawn from the probability space (Θ, F, P). For any state variable y ∈ L^2(Θ, H^1(D)), by the law of large numbers and the central limit theorem, we can approximate the expectation by the sample average

(17)  E_M[y] = (1/M) Σ_{j=1}^M y(ξ_j, x),

which satisfies an error estimate of order O(M^{-1/2}) (cf. [4]). Substituting the MC approximation (17) into (P_{1h}), we derive the discretized optimization problem (P^M_{1h}). Similar to (P), the optimization problem (P^M_{1h}) has a unique solution u*_{M,h} ∈ V_h, which satisfies the first order optimal condition (20), where T*_h is the adjoint operator of T_h, and Q_h is the L^2 projection operator onto V_h. Using the first order optimal conditions (9) and (20), we can get Lemma 3.4 ([19]): let u* and u*_{M,h} be the optimal controls of (P) and (P^M_{1h}), respectively; then the estimate (21) holds, where the constant C is independent of M and h.
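The sample averaging in (17) and its O(M^{-1/2}) accuracy can be illustrated on a toy expectation; the integrand below is an illustrative stand-in for the sampled state, not part of the model:

```python
import random

def mc_expectation(sample_fn, M, seed=0):
    """Plain Monte Carlo estimate E_M[y] = (1/M) * sum_j y(xi_j)."""
    rng = random.Random(seed)
    return sum(sample_fn(rng) for _ in range(M)) / M

# Stand-in integrand: y(xi) = xi^2 with xi ~ U(-1, 1), so E[y] = 1/3.
sample = lambda rng: rng.uniform(-1.0, 1.0) ** 2

est_small = mc_expectation(sample, 100)
est_large = mc_expectation(sample, 10000)
```

Increasing M by a factor of 100 should shrink the statistical error by roughly a factor of 10, mirroring the M^{-1/2} rate in (18).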
Further, applying the definitions of the functionals F(y, u) and F_{M,h}(y_h, u_h), the operators T and T_h, and the error estimate (21) in Lemma 3.4, we can get Theorem 3.5: let (y*, u*) and (y*_{M,h}, u*_{M,h}) be the optimal solutions of the optimal control problem (P) and its approximation (P^M_{1h}), respectively; then an estimate on the deviation between the objective functionals holds. For the sake of structural clarity, we put the proof in the Appendix.
3.3. ADMM algorithm for (P^M_{1h}). In this subsection, we introduce the ADMM to solve the optimization problem (P^M_{1h}), and present the error estimate of this algorithm. For these purposes, we first rewrite the optimization problem (P^M_{1h}) in matrix-vector form. Replacing y_h and u_h in (19) by Ry(ξ) and Ru, which have been defined in (10) and (16), the optimization problem (P^M_{1h}) can be rewritten as a finite-dimensional problem (P̃^M_{1h}) with linear constraints A_1(ξ_j)y(ξ_j) = B_1u, j = 1, ..., M. The augmented Lagrangian functional of the optimization problem (P̃^M_{1h}) is

L_β(y(ξ), u, λ(ξ)) = F_{M,h}(y(ξ), u) + E_M[(λ(ξ), A_1(ξ)y(ξ) − B_1u)] + (β/2) E_M[||A_1(ξ)y(ξ) − B_1u||^2],

where λ(ξ) ∈ R^N is the Lagrange multiplier and β denotes the penalty parameter.
Here and hereafter, the notation (·, ·) represents the inner product of vectors, and ||·|| stands for the l^2 norm of a vector. It is clear that the optimization problem (P̃^M_{1h}) is equivalent to the saddle-point problem (23) (cf. [21]). For the saddle-point problem (23), the ADMM is an efficient algorithm. We apply ADMM to solve this problem; notice that every iteration of the ADMM has the closed-form solutions (24) with j = 1, ..., M.

Algorithm 1 FEM-MC-ADMM.
Begin
Let t = 0, and set the initial values ε_1^0, u^0, λ^0; while the stopping criterion on ε_1^t is not met, perform the closed-form updates (24) and set t = t + 1.

Now, we consider the convergence of the ADMM. It is easy to see that the stiffness matrices A_1(ξ_j), j = 1, ..., M, and the mass matrix B_1 are all symmetric positive definite, and F_{M,h}(y(ξ), u) is a convex functional. For the deterministic case under the conditions above, the convergence result has been established in [6]. Applying Assumption 1, it is not hard to obtain the following convergence result for the random case (cf. [20,24]).
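The two-block ADMM with closed-form updates of the kind used in (24) can be sketched on a small deterministic analogue; the matrices A, B and the data below are illustrative stand-ins, not the assembled FEM matrices:

```python
import numpy as np

# Toy analogue of the constrained problem:
#   minimize 0.5*||y - yd||^2 + 0.5*gamma*||u - ud||^2  subject to  A y = B u.
A = np.diag([2.0, 3.0])
B = np.eye(2)
yd = np.array([1.0, 2.0])
ud = np.array([0.5, 0.5])
gamma, beta = 1.0, 1.0

y, u, lam = np.zeros(2), np.zeros(2), np.zeros(2)
I = np.eye(2)

# Each sweep consists of two closed-form solves and a multiplier ascent,
# mirroring the structure of the paper's updates.
y_solve = np.linalg.inv(I + beta * A.T @ A)
u_solve = np.linalg.inv(gamma * I + beta * B.T @ B)
for _ in range(2000):
    y = y_solve @ (yd - A.T @ lam + beta * A.T @ B @ u)
    u = u_solve @ (gamma * ud + B.T @ lam + beta * B.T @ A @ y)
    lam = lam + beta * (A @ y - B @ u)

# Reference solution by eliminating the constraint: y = A^{-1} B u.
S = np.linalg.inv(A) @ B
u_ref = np.linalg.solve(S.T @ S + gamma * I, S.T @ yd + gamma * ud)
```

Because both blocks are strongly convex quadratics, the iteration converges linearly to the KKT point, so the two precomputed inverses are reused in every sweep.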
where (y*_M, u*_M) is the theoretical solution of (23). Further, applying Theorem 3.5, Lemma 3.6, and the triangle inequality, we can get the global error estimate.
Theorem 3.7. A global error estimate for Algorithm 1 is as follows. From Algorithm 1, we can see that the stiffness matrix A_1(ξ) is random, and needs to be computed and stored for every sample ξ_j, j = 1, ..., M. The cost to compute A_1(ξ) and the mass matrix B_1 is O((M + 1)N^2). In addition, from the closed-form solutions (24), we also need to calculate and store the inverse matrix (γB_1 + βB_1^T B_1)^{-1} and the inverse matrices (B_1 + βA_1(ξ_j)^T A_1(ξ_j))^{-1} for all the samples. Thus, we must compute M + 1 inverses of N × N matrices, which costs a large amount of memory and computation time.
4. MME-FEM-MC-ADMM for the model problem. To reduce the computational cost of Algorithm 1, we introduce the multi-modes expansion (MME) and propose a new algorithm. The main idea is to approximate the state variable by the MME technique, and then transform the original constraint of Algorithm 1 so that the randomness is shifted to the right-hand-side mass matrix. The advantage is that the expectation of the right-hand-side random matrix can be calculated in advance, so the inverse of the matrix is computed and stored only once, which reduces the computational cost significantly.

4.1. Multi-modes expansion. The multi-modes expansion of the state variable y can be expressed as (cf. [11])

(27)  y(ξ, x) = Σ_{q=0}^∞ ε^q y_q(ξ, x),

where ε is the magnitude of the perturbation defined in (5), and y_q ∈ L^2(Θ, H^1_0(D)). Substituting (27) into the state equation of the optimal control problem (OCP), and comparing the coefficients of the powers of ε, we get the following Poisson equations in recursive form:

(28)  −Δy_0 = u,   −Δy_q = ∇·(σ(ξ, x)∇y_{q−1}),  q ≥ 1,

with homogeneous Dirichlet boundary conditions.
It is apparent that the relationship between y_0 and u is deterministic, and we can then solve the equations (28) by recursion for q ≥ 1. The well-posedness of the MME solutions has been established in Theorem 3.1 of the reference [11].
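The recursion (28) has a simple linear-algebra analogue: for a perturbed matrix A_0 + εA_1, the modes satisfy A_0 y_0 = b and A_0 y_q = −A_1 y_{q−1}, so only the deterministic A_0 is ever factored. The small matrices below are illustrative stand-ins:

```python
import numpy as np

def mme_solve(A0, A1, b, eps, Q):
    """Truncated multi-modes solve of (A0 + eps*A1) y = b.

    A0 y_0 = b and A0 y_q = -A1 y_{q-1} mirror the recursion in the text;
    the truncated sum y^Q = sum_{q<Q} eps^q y_q has error O(eps^Q).
    """
    A0_inv = np.linalg.inv(A0)      # invert the deterministic part once
    y_q = A0_inv @ b
    y = y_q.copy()
    for q in range(1, Q):
        y_q = A0_inv @ (-A1 @ y_q)  # next mode from the previous one
        y += eps**q * y_q
    return y

A0 = np.diag([2.0, 3.0, 4.0])
A1 = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
b = np.ones(3)
eps = 1e-2

exact = np.linalg.solve(A0 + eps * A1, b)
err_Q2 = np.linalg.norm(mme_solve(A0, A1, b, eps, 2) - exact)
err_Q3 = np.linalg.norm(mme_solve(A0, A1, b, eps, 3) - exact)
```

Raising the truncation length from Q = 2 to Q = 3 shrinks the error by roughly a factor of ε, consistent with the O(ε^Q) truncation bound quoted below.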
For the purpose of numerical solution, we truncate the expansion (27) after the first Q terms, and denote the truncation by y^Q(ξ, x). Feng et al. [11] showed that the truncation error of the MME after Q terms is O(ε^Q).
where j = 2 or 4, C 4 is a positive constant depending on α, Q and j.

4.2. FEM and MC approximation.
In this subsection, we apply the FEM to discretize y_q (q ≥ 0) in physical space, and the MC method to approximate the expectation E in probability space. Then the discretized optimization problem, its first order optimal condition, and the convergence analysis of the proposed approach are presented sequentially. For any fixed ξ ∈ Θ, the FEM approximation of y_q(ξ, x) in (28) can be written as

(33)  y_{q,h}(ξ, x) = R(x) y_q(ξ),

where y_q(ξ) ∈ R^N. Naturally, the truncated MME state y^Q(ξ, x) can be approximated by

(34)  y^Q_h(ξ, x) = Σ_{q=0}^{Q−1} ε^q y_{q,h}(ξ, x).

The variational formulation of the constraint equation in the truncated MME form is given by (35), for any v_h(x) ∈ V_h, where q = 1, ..., Q − 1. Similar to (31), there exists a linear operator T^Q_h mapping the control to the truncated discrete state. Lemma 4.4 (cf. [11]): under the assumption of Lemma 4.2, for j = 2, 4 we have a stability estimate and the error estimate (38), where C_5 and C_6 are constants depending on ε and Q.
Further, applying the triangle inequality and the estimates (32) and (38), we have the following error estimate.
By arguments similar to those used to obtain the optimization problem (19), applying the expressions (16), (34), and (36), the original problem (P) can be approximated by the discretized problem (P^Q_{2h}), where F_{M,h} has been given in (19). Similar to (P^M_{1h}), the optimization problem (P^Q_{2h}) has a unique solution u^{Q*}_{M,h} ∈ V_h, which satisfies the first order optimal condition (40). Using the first order optimal conditions (9) and (40), an estimate analogous to (21) holds. Similar to Theorem 3.5, we can get the error estimate on the difference of the objective functionals, as follows. Theorem 4.7: let (y*, u*) and (y^{Q*}_{M,h}, u^{Q*}_{M,h}) be the optimal solutions of the optimal control problem (P) and its approximation (P^Q_{2h}), respectively; then the corresponding estimate holds. The proof is similar to that of Theorem 3.5 and is omitted here.

4.3. ADMM algorithm for (P^Q_{2h}). In this subsection, the ADMM algorithm for solving the optimization problem (P^Q_{2h}) is presented. In order to describe the algorithm clearly, we first simplify (P^Q_{2h}). Substituting the control variable u in the variational formulation (35) by u_h defined in (16), and using the definition (33) of y_{q,h}, the constraint of the optimization problem (P^Q_{2h}) can be rewritten as (43). Applying the definition of y^Q_h in (34), and combining the resulting equality with the equations (43), the relation between the state variable y^Q(ξ) and the control variable u can be obtained. Therefore, the optimization problem (P^Q_{2h}) can be reformulated as a problem (P̃^Q_{2h}), where the functional F_{M,h} is defined in (23). The optimization problem (P̃^Q_{2h}) is equivalent to the saddle point problem (44), with an augmented Lagrangian function in which λ(ξ) ∈ R^N is the Lagrange multiplier and β denotes the penalty parameter. Now, the ADMM algorithm for solving the saddle point problem (44) can be described by the iterations (45), which have closed form solutions.

Algorithm 2 MME-FEM-MC-ADMM.
From the iterations (45), we can see that all the matrices are deterministic except the matrix B_2(ξ). Fortunately, the matrix E_M[B_2(ξ)^T B_2(ξ)] can be computed in advance for the given samples. Therefore, we need to compute only two inverse matrices, which greatly reduces the cost of matrix inversion compared with Algorithm 1. However, in addition to the cost of computing and storing A_2, we still need to compute and store the mass matrices B_2(ξ_j) for all samples. Thus, the storage requirement does not decrease; it is still O((M + 1)N^2).

5. SMME-FEM-MC-ADMM for the model problem. In this section, a new algorithm is proposed to overcome the disadvantage of Algorithm 2; it reduces the storage requirement significantly.
From the variational formulation (35), we know that y_q depends on the preceding term y_{q−1} for q ≥ 1. Therefore, we can establish a relationship between y_0 and y_q. Furthermore, applying the definition (30) of y^Q, there must exist a relation between y_0 and y^Q. Based on the above facts, using the equations (34) and (43), we obtain

(47)  y^Q(ξ) = P(ξ) y_0,

where P(ξ) is a matrix with random entries. The relationship (47) implies that y^Q(ξ) is determined by y_0, so we need to consider only the constraint on y_0. The optimization problem (P̃^Q_{2h}) is thus simplified to a classical optimization problem, in which the functional F_{M,h} is defined in (23), and A_3 = A_2 and B_3 = B_1 are two deterministic matrices defined in (43). This problem is equivalent to the saddle point problem (48), with an augmented Lagrangian function in which λ ∈ R^N is a Lagrange multiplier independent of the samples, and β is the penalty parameter. Now, we present the ADMM algorithm to solve the saddle point problem (48).
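The storage saving behind (47) is that only the sample mean E_M[P(ξ)] is kept, so each sampled matrix can be folded into a running sum and discarded instead of storing all M of them. A minimal sketch with a hypothetical 2×2 random matrix standing in for P(ξ):

```python
import numpy as np

def sample_mean_matrix(draw_matrix, M, seed=0):
    """Accumulate E_M[P(xi)] with O(N^2) memory: each sampled matrix is
    folded into a running sum and discarded, never stored per sample."""
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(M):
        P = draw_matrix(rng)
        acc = P.copy() if acc is None else acc + P
    return acc / M

# Hypothetical random matrix P(xi) = I + xi * S with xi ~ U(-1, 1),
# so E[P] = I; S plays the role of the random perturbation.
S = np.array([[0.0, 1.0], [1.0, 0.0]])
draw = lambda rng: np.eye(2) + rng.uniform(-1.0, 1.0) * S
P_bar = sample_mean_matrix(draw, 20000)
```

The loop touches each sample once and keeps a single N × N accumulator, which is exactly the O(N^2) memory footprint claimed for Algorithm 3.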

Algorithm 3 SMME-FEM-MC-ADMM.
Begin
Let t = 0, and set the initial values ε_3^0, u^0, λ^0; compute the vectors c and d, and the matrices A_3 and B_3;
for j = 1, 2, ..., M do
  generate the matrices B_0(ξ_j) and P(ξ_j);
end for
while ε_3^t > ε_3^0 do
  perform the closed-form updates (49) and set t = t + 1;
end while

By straightforward calculations, we can get the closed form solutions (49). It is not hard to see that Algorithm 3 has the same convergence property as Algorithm 2 (see Theorem 4.8). From the iterations (49), we can see that we do not need to store the information for all the samples. The only matrices that need to be stored are E_M[P(ξ)], A_3, and B_3, which means the memory cost is O(N^2). Moreover, we need to compute inverse matrices only twice. These advantages are in sharp contrast with Algorithms 1 and 2. Hence, Algorithm 3 is the most efficient one in both the computational and the storage aspects.

6. Numerical experiments. In this section, results of numerical experiments, which are carried out to test the performance of the proposed algorithms, are reported. The convergence order of the FEM for all algorithms is verified first. Then, the ADMM convergence order in terms of the objective functional is shown. Finally, to illustrate the advantage of Algorithm 3, we present the storage costs of the three algorithms. All the simulations are conducted using Matlab on a high-performance computer with 16 cores and 256 GB RAM.
We consider the optimal control problem on the spatial domain D = [−1, 1]^2, and assume the random variable σ(·, x) associated with the random coefficient α(ξ, x) obeys the uniform distribution on the interval [−1, 1] for any fixed x ∈ D. The weight parameter γ of the objective functional is set to 1, and the desired state and control are set to y_d = sin(πx)sin(πy), u_d = 2π^2 sin(πx)sin(πy).
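This pair indeed satisfies the state equation in the mean, since E[α] = 1 gives −Δy_d = 2π^2 sin(πx)sin(πy) = u_d; a quick finite difference check of this identity at an interior point, written as a sketch:

```python
import math

def y_d(x, y):
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def u_d(x, y):
    return 2.0 * math.pi**2 * math.sin(math.pi * x) * math.sin(math.pi * y)

def neg_laplacian(f, x, y, h=1e-3):
    """Second-order central difference approximation of -Δf at (x, y)."""
    return -(f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
             - 4.0 * f(x, y)) / h**2

# At an interior point, -Δy_d should reproduce u_d up to O(h^2) error.
x0, y0 = 0.3, -0.4
residual = abs(neg_laplacian(y_d, x0, y0) - u_d(x0, y0))
```

The residual is of size O(h^2), confirming the identity −Δy_d = u_d used to define the test problem.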
It is not difficult to verify that y_d and u_d are the optimal state and control of the original problem (P) in the expected sense. In addition, the parameters associated with the ADMM are set to β = 1 and ε_i^0 = 10^{-4}, i = 1, 2, 3. To verify the convergence order of the FEM, we first fix the other parameters. For simplicity, the sample number of the MC, the iteration count of the ADMM, the magnitude of the perturbation defined in (5), and the truncation length of the MME are set to M = 1000, t = 20, ε = 10^{-4}, and Q = 1, respectively. Other cases can be discussed in a similar manner. Now, we run our algorithms on the nested meshes with h = (1/2)^i, i = 1, 2, 3, 4, 5, and present the results in Figure 1. We emphasize that Algorithm 1 runs out of memory even with spatial mesh size h = 1/32; for aesthetics, we extend its line to the fifth point. From Figure 1, we can see that the FEM convergence orders with respect to the control variable u and the state variable y are all equal to 2 for the three proposed algorithms, which coincides with the theoretical analysis in Sections 3-5. To illustrate the correctness of the algorithms intuitively, the optimal control in theory and the numerical solutions of the proposed algorithms with h = 1/32 are shown in Figure 2, where the numerical solution of Algorithm 1 is obtained with h = 1/16 because of the memory limit.
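The second-order rates read off from Figure 1 correspond to the standard computation order_i = log_2(e_i / e_{i+1}) on successive mesh halvings; the error values below are synthetic (exactly C·h^2), standing in for the measured FEM errors:

```python
import math

def observed_orders(errors):
    """Observed convergence order between successive mesh halvings:
    order_i = log2(e_i / e_{i+1}) for meshes h, h/2, h/4, ..."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# Synthetic second-order errors e = C * h^2 on nested meshes h = (1/2)^i,
# i = 1,...,5, matching the mesh sequence used in the experiments.
hs = [0.5**i for i in range(1, 6)]
errs = [3.0 * h**2 for h in hs]
orders = observed_orders(errs)
```

For genuinely second-order data every entry of `orders` is 2; with measured errors the entries would approach 2 as h decreases.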
Now, we test the performance of the ADMM. For this purpose, the other parameters in the proposed algorithms are fixed in advance. Without loss of generality, we set M = 1000, h = 1/32, and Q = 1 for the latter two algorithms; the convergence histories are reported in Figure 3. From the results, we can see that the convergence rate is superior to O(1/t), which means that the numerical performance of the ADMM is better than the theoretical prediction. Finally, we analyze the CPU memory cost of the proposed algorithms, which is one of the main concerns of this paper. For the fixed MC parameter M = 1000, the MME perturbation magnitude ε = 10^{-4}, and the iteration number t = 20, the variation of the CPU memory cost with respect to the FEM parameter h and the MME parameter Q is given in Table 1. From Table 1, we can find three obvious facts: (1) for each of the three algorithms, the memory cost increases as the parameter h decreases; (2) for Algorithms 2 and 3, the memory cost does not change noticeably with respect to the parameter Q; (3) for any fixed h and Q, the memory cost of Algorithm 3 is significantly lower than that of the other two algorithms. These facts agree very well with the theoretical analysis in Sections 3, 4, and 5. In order to observe how the storage of the three proposed algorithms changes with respect to the sample number M, we fix the FEM parameter h = 1/16, the stochastic perturbation parameter ε = 10^{-4}, and the ADMM iteration number t = 20. The CPU memory costs for the different cases are presented in Table 2. Combining Table 1 and Table 2, we can conclude that Algorithm 3 is an efficient method for the optimal control problem with random coefficient.

7. Conclusions. In this paper, three algorithms are proposed for the elliptic equation constrained optimal control problem with random coefficient. Algorithm 1 is based on the FEM discretization, the MC approximation, and the ADMM iteration, and it enjoys global convergence; however, its computational and storage costs are both very high. As an improvement, Algorithm 2 is proposed, which reduces the computational cost by using the MME technique, but its memory cost remains high. To overcome this disadvantage, Algorithm 3 is proposed, which reduces the memory cost significantly by transforming the constraint with random term into a deterministic one. Numerical simulations are carried out to verify the theoretical analysis, and they further illustrate the efficiency of Algorithm 3. Applications of the proposed algorithms to other random optimal control problems will be discussed in our future work.
Appendix. The proof of Theorem 3.5.