A MULTI-MODE EXPANSION METHOD FOR BOUNDARY OPTIMAL CONTROL PROBLEMS CONSTRAINED BY RANDOM POISSON EQUATIONS

Abstract. This paper develops efficient numerical algorithms for optimal control problems constrained by Poisson equations with uncertain diffusion coefficients. We consider both the unconstrained and the box-constrained case for the control. The algorithms are based upon a multi-mode expansion (MME) for the random state, a finite element approximation of the physical space, and the alternating direction method of multipliers (ADMM) or two-block ADMM for the discrete optimization systems. The compelling aspect of our method is that the target random constrained control problem can be approximated by a single deterministic constrained control problem through a state-variable substitution equality. Thus the computing resources, especially the memory consumption, can be reduced sharply. The convergence rates of the proposed algorithms are discussed in the paper. We also present numerical examples to show the performance of our algorithms.

2020 Mathematics Subject Classification. 90C30, 65K10, 65C05, 65N30.

1. Introduction. In recent years, optimization problems governed by PDEs with uncertain coefficients have been the subject of growing interest in the scientific community [11,3,22]. This subject lies at the interface of PDEs with uncertain coefficients, optimization in Banach spaces, and stochastic programming.
In this paper, we study an optimal control problem constrained by a Poisson equation with an uncertain diffusion coefficient. The control is a deterministic function in the Dirichlet boundary condition. Our goal is to design efficient algorithms for this stochastic boundary control problem.
Deterministic optimal control problems constrained by partial differential equations have been well developed and investigated for several decades, but far fewer papers are devoted to random optimal control problems. In contrast to the deterministic setting, computing the numerical solution of a random optimization problem is considerably more expensive. Since the parameters are uncertain, the resulting states are random fields, and the inclusion of the stochastic dimension brings additional degrees of freedom into the cost functionals. Therefore, we need to discretize the governing PDE in space and approximate the variables in the random field; moreover, probabilistic tools are required to analyze and handle the stochastic component of the optimal control problem. Existing algorithms for this problem typically need to solve a large number of deterministic PDEs at each optimization iteration, and the memory consumption is huge. For the random discretization, commonly used methods fall into two categories: projection-based methods, such as the stochastic Galerkin (SG) method and generalized polynomial chaos (gPC), and sample-based methods, such as the Monte Carlo (MC) method and the stochastic collocation (SC) method. Naseri and Malek apply the SG method combined with a preconditioned Newton conjugate gradient method to solve an SPDE-constrained optimization problem with a random force [28]. Cao designs an efficient MC method with a variance reduction technique to solve a random Burgers equation-constrained optimization problem [9]. In [30], the authors apply the SC method together with gradient descent for an SPDE-constrained control problem. Kouri and his collaborators improve the efficiency of the SC method by adopting adaptive sparse-grid collocation within a trust-region framework for the random discretization [23]. Recently, domain decomposition techniques have been applied to random optimal control problems in [21].
In [5], the authors presented a low-rank tensor method for discretized PDE-constrained optimization problems. We refer to [1,24,25] for some theoretical results.
The SG method provides a solid mathematical framework for analysis and algorithm design, but it is not always the most computationally efficient means of solving large problems. The stochastic collocation method is often more popular than projection-based methods because it only requires solving a collection of decoupled deterministic problems instead of one large stochastic problem. However, when the collocation method is extended to a large class of random PDE-constrained optimization problems, it does so at the cost of losing the non-intrusivity property (see [4,32]). Over the past several years, a new stochastic discretization method has been proposed by Feng and his colleagues for stochastic PDEs, which shows higher efficiency than traditional methods; see [13,15,16,14]. Inspired by this idea, we developed a multi-mode representation preprocessing technique for optimal control problems constrained by random Helmholtz equations in [26]. This technique combines the merits of projection-based and sample-based methods, sharply reducing the number of equation evaluations. Meanwhile, it is robust and easy to implement: the computational complexity is comparable to solving one deterministic PDE-constrained control problem plus a mathematical expectation of random matrices.
The main focus of this paper is an efficient discretize-then-optimize numerical method for solving the optimal control problem constrained by a random Poisson equation with a deterministic control. Most existing articles on random PDE-constrained control problems deal only with distributed parameter controls, which is not realistic since such controls are difficult to implement in practice. We believe boundary controls are much more natural, and we therefore consider model problems with boundary controls only. To overcome the difficulties of unbearable computation and memory consumption, we first preprocess the model problem with a modified multi-mode expansion (MME) form. The key idea is to transfer the randomness from the coefficients in the state equations to the coefficients in the cost functional, where the random field is much easier to handle. With the MME technique we can convert the random state equation into a deterministic one and then discretize the physical space by the finite element method (FEM). After approximating the expectation in the cost functional with the Monte Carlo method, we obtain a deterministic PDE-constrained control problem that can be solved quickly by an alternating direction method of multipliers (ADMM). The memory cost of the proposed algorithm is only O(N^2). The theoretical analysis in this paper is more difficult than that for the random Helmholtz equation-constrained control problems, due to the randomness of the diffusion coefficients in the state equations. We prove the existence of the optimal control and derive the global error estimate of the algorithm.
The rest of the paper is organized as follows. In Section 2 we present an optimality problem constrained by random Poisson equations under the unconstrained control (UC) condition and the box-constrained control (BCC) condition; the existence of optimal solutions and the first-order necessary conditions are also discussed there. In Section 3 we first derive the multi-modes representation approximation scheme for the model problem and then discretize the physical space by FEM. After that, we transform the new random discrete optimal control problem into a deterministic optimization problem, which is solved by ADMM and two-block ADMM for the UC and BCC cases respectively in Section 4. Section 5 presents the global error estimates for the two algorithms, and Section 6 provides numerical experiments demonstrating their efficiency. The last section is devoted to some concluding remarks.
2. The model problem. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, where $\Omega$ is the set of outcomes, $\mathcal{F}$ is the $\sigma$-algebra of events and $P$ is the probability measure. Let the Bochner space $L^p(V, \Omega)$ be the space of strongly measurable functions $w$ ranging in a separable Hilbert space $V$ with the norm
$$\|w\|_{L^p(V,\Omega)} := \big( \mathbb{E}\, \|w\|_V^p \big)^{1/p}.$$

2.1. The state equations. Before listing the model problem, a detailed description of the state equations and some necessary properties will be given. Suppose that the deterministic control variable $u(x)$ and the random state variable $y(x,\xi)$ satisfy the following random elliptic equations:
$$-\nabla \cdot \big( a(x,\xi)\, \nabla y(x,\xi) \big) = 0 \quad \text{in } D, \qquad (1)$$
$$y(x,\xi) = u(x) \quad \text{on } \partial D. \qquad (2)$$
In the elliptic equations, $a(x,\xi)$ is the diffusion coefficient with a small random perturbation, defined by
$$a(x,\xi) := a_0(x) + \varepsilon\, \eta(x,\xi).$$
Here the random process $\eta \in L^2(L^\infty(D); \Omega)$ satisfies a uniform almost-sure bound, and the parameter $\varepsilon$ represents the magnitude of the random perturbation.
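For intuition, the perturbed coefficient can be sampled as follows (the particular $a_0$, the noise $\eta$ and the grid are illustrative choices of ours; the uniform distribution on $[-0.5, 0.5]$ matches the experiments in Section 6):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_coefficient(x, eps, rng):
    """Sample a(x, xi) = a_0(x) + eps * eta(x, xi) on a grid.

    a_0 and eta are illustrative: a_0(x) = 2 + sin(pi x), and eta is
    nodal noise, uniform in [-0.5, 0.5] as in the numerical tests.
    """
    a0 = 2.0 + np.sin(np.pi * x)
    eta = rng.uniform(-0.5, 0.5, size=x.shape)
    return a0 + eps * eta, a0, eta

x = np.linspace(0.0, 1.0, 101)
a, a0, eta = sample_coefficient(x, eps=0.1, rng=rng)
# With eps = 0.1 and |eta| <= 0.5, a deviates from a_0 by at most 0.05,
# so ellipticity (a > 0) is preserved for small perturbations.
```

For small $\varepsilon$ the realization stays uniformly elliptic, which is what makes the perturbative multi-mode expansion below well defined.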
Proof. The existence of the solution to equation (5) follows from the Lax-Milgram theorem, and the regularity estimate (6) follows from elliptic regularity theory together with the transposition method (see [27]).
Denote $Y := L^2(H^{1+\sigma}(D), \Omega)$ and $U := H^{1/2+\sigma}(\partial D)$, $0 < \sigma \le 1$, as the state space and the control space respectively. Lemma 2.2 indicates that for every $u \in U$ there is exactly one random state $y = y(u) \in Y$. Hence we use the continuous mapping $S$ below to denote the "random weak solution operator" of equation (5), i.e.
$$S : U \to Y, \qquad y = S(u). \qquad (7)$$
From (6), the operator $S$ satisfies the estimate
$$\|S u\|_{Y} \le C\, \|u\|_{U}. \qquad (8)$$
In the next part, the target random optimal control problem will be represented in terms of the "random weak solution operator". Via $S$, we can formally carry out the FEM error analysis for the model problem.

2.2. Random elliptic equation constrained control problem. With the definition of $S$ in (7), the optimal control problem with random elliptic constraint is given by
$$(P) \qquad \min_{(y,u) \in Y \times U_{ad},\; y = S(u)} F(y,u), \qquad (9)$$
where
$$F(y,u) := \frac{1}{2}\, \mathbb{E}\big[ \|y - y_d\|_{L^2(D)}^2 \big] + \frac{\alpha}{2}\, \|u - u_0\|_{L^2(\partial D)}^2, \qquad (10)$$
in which $y_d \in Y$ and $u_0 \in U$ are given functions. Two kinds of control constraint sets $U_{ad}$ are considered in this paper: one is the unconstrained case (UC) with $U_{ad} := U$, and the other is the box-constrained case (BCC) with
$$U_{ad} := \{ u \in U : u_a \le u(x) \le u_b \ \text{a.e. on } \partial D \}.$$
Firstly, the existence and uniqueness results for the problem $(P)$ are shown in the following theorem.

Theorem 2.3. Under assumption (2.1) and $\alpha > 0$, the problem $(P)$ has a unique pair of optimal solutions $(\bar y, \bar u)$.
Proof. Denote the feasible set by $W := \{ (y,u) \in Y \times U_{ad} : y = S(u) \}$. Since $F \ge 0$ and $W$ is nonempty, the infimum $F^*$ exists, and we can find a minimizing sequence $(y_n, u_n) \subset W$ with $\lim_{n\to\infty} F(y_n, u_n) = F^*$.
The sequence $\{u_n\}$ is bounded, since $F(y_n, u_n) \ge \frac{\alpha}{2} \|u_n - u_0\|_{L^2(\partial D)}^2$ and $u_0$ is fixed. As $\{u_n\}$ lies in the Hilbert space $H^{1/2+\sigma}(\partial D)$, there exists a weakly convergent subsequence $\{u_{n_j}\} \subset \{u_n\}$ with $u_{n_j} \rightharpoonup \bar u$ as $j \to \infty$. Since $S$ is a bounded linear operator, $y_{n_j} \rightharpoonup \bar y = S(\bar u)$ as $j \to \infty$.
It follows from the closedness and convexity of the product space $Y \times U_{ad}$ that $(\bar y, \bar u) \in Y \times U_{ad}$, and $(\bar y, \bar u)$ satisfies the state equation $\bar y = S(\bar u)$, so $(\bar y, \bar u) \in W$. By the expression of the cost functional (10), $(y,u) \in Y \times U \mapsto F(y,u)$ is continuous and convex. Hence $F$ is weakly lower semicontinuous, and using that $(\bar y, \bar u)$ is feasible yields
$$F(\bar y, \bar u) \le \liminf_{j\to\infty} F(y_{n_j}, u_{n_j}) = F^*.$$
Thus $(\bar y, \bar u)$ is an optimal solution of $(P)$. Moreover, since $\alpha > 0$ and $S$ is linear, $u \mapsto F(S(u), u)$ is strongly convex, which implies uniqueness.
Next, we consider the optimality conditions for the optimal control problem $(P)$ in the unconstrained and box-constrained cases. The problem $(P)$ can be rewritten in the reduced form
$$\min_{u \in U_{ad}} G(u) := F(S(u), u), \qquad (11)$$
where $F$ is given in (10).

For the unconstrained case (UC): the optimal control satisfies $G'(\bar u) = 0$, where $\bar u \in U$ is the optimal control and $G'(\bar u)$ is the Fréchet derivative of $G$ at $\bar u$. Using the definition of the Fréchet derivative and the linearity of the expectation operator $\mathbb{E}$ and of the operator $S$, one deduces the first-order necessary condition
$$G'(\bar u) = \mathbb{E}\big[ S^*(S(\bar u) - y_d) \big] + \alpha(\bar u - u_0) = 0, \qquad (12)$$
where $S^*$ is the adjoint of the operator $S$.

For the box-constrained case (BCC): the bounded optimal control satisfies the following lemma.

Lemma 2.4 ([20]). Suppose $G : U_{ad} \to \mathbb{R}$ is Gateaux-differentiable and $U_{ad}$ is nonempty and convex. If $\bar u$ is an optimal control of the problem $\min_{u \in U_{ad}} G(u)$, then $\bar u \in U_{ad}$ and $\bar u$ satisfies the variational inequality $G'(\bar u)(u - \bar u) \ge 0$ for all $u \in U_{ad}$.

With Lemma 2.4 and the expression of $G'(u)$ in (12), the first-order necessary condition of $(P)$ in the box-constrained case can be summarized as
$$\big\langle \mathbb{E}[S^*(S(\bar u) - y_d)] + \alpha(\bar u - u_0),\ u - \bar u \big\rangle \ge 0 \quad \text{for all } u \in U_{ad}. \qquad (13)$$

3. Numerical approximation. To compute numerical solutions of the random optimal control problem $(P)$ in (9), both a random space approximation and a physical space finite element discretization are needed. To obtain a computationally efficient scheme, we introduce a multi-modes expansion (MME) preprocessing technique, described in detail below, to approximate the random variables. Unlike the stochastic collocation method or the standard Monte Carlo method, the MME technique simplifies the random space without solving a large coupled optimization system. The essential feature of MME is to transfer the random field from the coefficients to the right-hand side functions; the advantages of this technique become clearer once the MME approximate optimization problem is discretized by FEM.
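In finite dimensions the structure of the first-order condition (12) is easy to check. The sketch below replaces $S$ by a few sampled matrices $S_i$ and the expectation by their average (all data synthetic), verifies $G'$ against central differences, and solves $G'(\bar u) = 0$ directly:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, alpha = 6, 3, 0.5

# Surrogate problem: m sampled linear "solution operators" S_i playing the
# role of S(., xi); the expectation E is a finite average over them.
S = [rng.standard_normal((n, n)) for _ in range(m)]
y_d = rng.standard_normal(n)
u0 = rng.standard_normal(n)

def G(u):
    """Reduced functional: mean_i 0.5||S_i u - y_d||^2 + (alpha/2)||u - u0||^2."""
    return np.mean([0.5 * np.sum((Si @ u - y_d) ** 2) for Si in S]) \
        + 0.5 * alpha * np.sum((u - u0) ** 2)

def G_prime(u):
    """Gradient E[S^T (S u - y_d)] + alpha (u - u0), mirroring (12)."""
    return np.mean([Si.T @ (Si @ u - y_d) for Si in S], axis=0) + alpha * (u - u0)

# Central finite differences agree with the analytic gradient.
u = rng.standard_normal(n)
h = 1e-6
fd = np.array([(G(u + h * e) - G(u - h * e)) / (2 * h) for e in np.eye(n)])
gap = np.max(np.abs(fd - G_prime(u)))

# The UC optimality condition G'(u_bar) = 0 is a linear system here.
H = np.mean([Si.T @ Si for Si in S], axis=0) + alpha * np.eye(n)
b = np.mean([Si.T @ y_d for Si in S], axis=0) + alpha * u0
u_bar = np.linalg.solve(H, b)
```

Since $G$ is quadratic and strongly convex for $\alpha > 0$, the stationary point is the unique minimizer, mirroring the uniqueness argument of Theorem 2.3.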
Moreover, with an equation relating the stochastic state variable to a deterministic one, the discrete stochastic PDE-constrained optimization problem can be recast as an almost deterministic optimization problem, with a mathematical expectation of random coefficients appearing only in the cost functional. This expectation can then be computed by the Monte Carlo method without spending too many computing resources.

3.1. Multi-modes expansion approximation scheme (MME). Recall that $y$ denotes the random state variable, i.e. the exact solution to equations (1)-(2); $y$ admits the following multi-modes representation as a power series in the perturbation parameter.
Lemma 3.1. Suppose the regularity estimate (6) holds. Then the random state variable $y$ has the following multi-modes expansion in terms of powers of the perturbation parameter $\varepsilon$:
$$y(x,\xi) = \sum_{q=0}^{\infty} \varepsilon^q\, y_q(x,\xi),$$
where $y_q \in Y$, $q = 0, 1, 2, \cdots$.
The proof of Lemma 3.1 can be found in Feng's paper [15]. In practice, we are more interested in the finite-dimensional truncation of the multi-modes expansion,
$$y^Q(x,\xi) := \sum_{q=0}^{Q-1} \varepsilon^q\, y_q(x,\xi). \qquad (15)$$
The truncation error between $y$ and $y^Q$ will be stated later.
Substituting the expansion $y^Q$ for $y$ in equation (1) and the boundary condition (2), and matching the coefficients of the $\varepsilon^q$-order terms for $q = 0, 1, 2, \cdots, Q-1$, it follows that
$$-\nabla \cdot (a_0(x) \nabla y_0(x)) = 0, \qquad -\nabla \cdot \big( a_0(x) \nabla y_q(x,\xi) \big) = \nabla \cdot \big( \eta(x,\xi) \nabla y_{q-1}(x,\xi) \big), \quad q \ge 1, \qquad (16)$$
with the boundary conditions for each mode function $y_q$ given by
$$y_0 = u \ \text{ on } \partial D, \qquad y_q = 0 \ \text{ on } \partial D, \quad q \ge 1. \qquad (17)$$
It is easy to see that the first equation in (16) involves no randomness, so $y_0(x)$ is deterministic. We remind the reader that in the following text $y_0(x,\xi)$ always refers to $y_0(x)$.
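The mode recursion (16)-(17) can be illustrated with a one-dimensional finite-difference analogue (the choices $a_0 \equiv 1$, $\eta(x) = \sin(2\pi x)$, the boundary values and the discretization are ours, not the paper's): each mode is obtained by solving the same deterministic operator with a right-hand side built from the previous mode, and the truncated sum approaches the direct solve with coefficient $a_0 + \varepsilon\eta$ as $Q$ grows.

```python
import numpy as np

def solve_dirichlet(c, ul, ur, f, h):
    """Solve -(c y')' = f on (0,1), y(0)=ul, y(1)=ur, by conservative FD.

    c: coefficient at the grid nodes; f: source at the interior nodes.
    Returns y at all nodes.
    """
    n = len(c) - 1                       # number of cells
    cm = 0.5 * (c[:-1] + c[1:])          # face values c_{i+1/2}
    N = n - 1                            # interior nodes
    A = np.zeros((N, N))
    b = h * h * f.astype(float)
    for i in range(N):
        A[i, i] = cm[i] + cm[i + 1]
        if i > 0:
            A[i, i - 1] = -cm[i]
        if i < N - 1:
            A[i, i + 1] = -cm[i + 1]
    b[0] += cm[0] * ul
    b[-1] += cm[-1] * ur
    y = np.empty(n + 1)
    y[0], y[-1] = ul, ur
    y[1:-1] = np.linalg.solve(A, b)
    return y

def flux_div(eta, y, h):
    """Interior values of (eta y')' via central differences on the faces."""
    dy = np.diff(y) / h                   # y' at faces
    q = 0.5 * (eta[:-1] + eta[1:]) * dy   # eta y' at faces
    return np.diff(q) / h                 # divergence at interior nodes

h = 1.0 / 64
x = np.arange(0.0, 1.0 + h / 2, h)
a0 = np.ones_like(x)
eta = np.sin(2 * np.pi * x)               # one fixed realization of eta
eps, Q = 0.1, 4

# Mode equations: -(a0 y0')' = 0 with y0 = u on the boundary;
# -(a0 yq')' = (eta y_{q-1}')' with yq = 0 on the boundary, q >= 1.
modes = [solve_dirichlet(a0, 1.0, 2.0, np.zeros(len(x) - 2), h)]
for q in range(1, Q):
    rhs = flux_div(eta, modes[-1], h)
    modes.append(solve_dirichlet(a0, 0.0, 0.0, rhs, h))

yQ = sum(eps ** q * modes[q] for q in range(Q))
y_ref = solve_dirichlet(a0 + eps * eta, 1.0, 2.0, np.zeros(len(x) - 2), h)
err = np.max(np.abs(yQ - y_ref))          # truncation error, decays like eps^Q
```

Note that only the deterministic operator $-(a_0\,\cdot')'$ is ever factored; the randomness enters solely through the right-hand sides, which is the point of the MME preprocessing.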
Let $V = H_0^2(D)$. The finite element weak formulations corresponding to equations (15)-(17) are defined as follows: find
$$y^Q(\cdot,\xi) = \sum_{q=0}^{Q-1} \varepsilon^q\, y_q(\cdot,\xi) \in Y, \qquad (18)$$
where $y_0$ with $y_0 = u$ on $\partial D$ and $y_q(\cdot,\xi)$ vanishing on $\partial D$ for $q \ge 1$ satisfy
$$\int_D a_0 \nabla y_0 \cdot \nabla v \, dx = 0, \qquad \int_D a_0 \nabla y_q \cdot \nabla v \, dx = -\int_D \eta\, \nabla y_{q-1} \cdot \nabla v \, dx \quad \text{for all } v \in V. \qquad (19)$$
According to the weak formulations, the "MME solution operator" $S^Q : L^2(\partial D) \to L^2(H^2(D); \Omega)$ is defined by $y^Q = S^Q(u)$, where $u$ is the Dirichlet boundary function and $y^Q$ is the corresponding multi-modes expansion truncation solution obtained from the above weak formulations. Obviously, $S^Q$ is a linear operator with respect to $u$. Using Lemma 2.2 we can obtain the following theorem on the convergence between the exact solution and the truncated solution.
A similar proof can be found in [15]. Then, with the "MME solution operator" $S^Q$, we can rewrite the optimal control problem $(P)$ as
$$(P^Q) \qquad \min_{u \in U_{ad}} F(S^Q(u), u), \qquad (21)$$
where the definition of the functional $F$ can be found in (10).

3.2. Finite element discretization. In this part we derive the variational discretization of problem $(P^Q)$. Suppose $\mathcal{T}_h$ is a regular triangulation of the domain $D$ with mesh size $h$, and let $E_h$ denote the set of edges of $\mathcal{T}_h$.
Associated with the triangulation, we consider the finite element space
$$V_h := \{ v_h \in C(\bar D) : v_h|_K \in P_1 \ \text{for all } K \in \mathcal{T}_h \},$$
where $P_1$ denotes the space of linear polynomials; the discrete control constraint set is denoted by $U_h^{ad}$. Denote by $\{x_l\}_{l=1}^N$ and $\{\varphi_j\}_{j=1}^N$ the nodal points and the corresponding basis functions. For any $\xi \in \Omega$, the FEM interpolation of $y^Q(x,\xi)$ in the state equations (18) is
$$y_h^Q(x,\xi) = \sum_{j=1}^N y_j(\xi)\, \varphi_j(x), \qquad (22)$$
with $y_j(\xi) = y^Q(x_j, \xi)$ for $j = 1, 2, \cdots, N$.
Similarly, the FEM interpolation of $y_0(x)$ is $y_{0,h}(x) = \sum_{j=1}^N y_{0,j}\, \varphi_j(x)$ with $y_{0,j} = y_0(x_j)$ for $j = 1, 2, \cdots, N$, and for any $\xi \in \Omega$ the FEM interpolation of $y_q(x,\xi)$ in equation (19) is $y_{q,h}(x,\xi) = \sum_{j=1}^N y_{q,j}(\xi)\, \varphi_j(x)$ with $y_{q,j}(\xi) = y_q(x_j, \xi)$ for $j = 1, 2, \cdots, N$ and $q = 1, \cdots, Q-1$. We use the same basis functions to express the FEM interpolation of the boundary control function $u_h(x)$, that is $u_h(x) = \sum_{x_j \in \partial D} u_j\, \varphi_j(x)$. The finite element scheme for the weak formulations (18)-(19) then reads: find $y_{0,h} \in W_{hu}$ and $y_{q,h}(x,\xi) \in V_h$ ($q = 1, 2, \cdots, Q-1$) for fixed $\xi \in \Omega$ satisfying the discrete equations (26)-(27). Notice that the discrete control variable $u_h$ is implicit in the solution space $W_{hu}$ to which $y_{0,h}$ belongs, which makes $y_h^Q(x,\xi)$ in (26) depend on $u_h$; equations (26)-(27) can be summarized as $y_h^Q = S_h^Q(u_h)$. In particular (Lemma 3.3), for any given $u_h \in U_h$, suppose $y^Q$ and $y_h^Q$ are the solutions of equations (18)-(19) and (26)-(27) associated with $u_h$, respectively, i.e. $y^Q = S^Q(u_h)$ and $y_h^Q = S_h^Q(u_h)$. Then the error estimate (28) holds; from the inequalities (28) we deduce (29), and combining Theorem 3.2 with inequality (29) we obtain the error estimate (30) between $S$ and $S_h^Q$.
Now the stochastic optimal control problem $(P^Q)$ in (21) admits the discretization scheme $(P_h^Q)$ given in (32). Here we record the first-order necessary optimality conditions of $(P_h^Q)$, which will be used later: condition (33) in the UC case and condition (34) in the BCC case, in which $\bar u_h$ is the optimal solution of the problem $(P_h^Q)$. For ease of calculation, we rewrite the FEM functions in vector form. Let $R(x) = [\varphi_1(x), \varphi_2(x), \cdots, \varphi_N(x)]$. Then by (22) the function $y_h^Q$ in (26) can be written as
$$y_h^Q(x,\xi) = R(x)\, y_h^Q(\xi),$$
where $y_h^Q(\xi) = (y_1(\xi), \cdots, y_N(\xi))^T \in \mathbb{R}^N$; similarly $y_{0,h} = (y_{0,1}, \cdots, y_{0,N})^T \in \mathbb{R}^N$ and $y_{q,h}(\xi) = (y_{q,1}(\xi), \cdots, y_{q,N}(\xi))^T \in \mathbb{R}^N$ for all $\xi \in \Omega$.

3.3. A deterministic optimal control problem derived from $(P_h^Q)$. In this part we show that the problem $(P_h^Q)$ in (32) can be approximated by a discrete optimal control problem with deterministic state and control variables. The key to this transformation is an equality between the random state variable $y_h^Q(x,\xi)$ and the deterministic finite-dimensional variable $y_{0,h}$. First, let us study the relationship between $y_h^Q(\xi)$ and $y_{0,h}$. In equations (40), since the stiffness matrix $A$ is nonsingular, it holds that
$$y_{q,h}(\xi) = A^{-1} C(\xi)^q\, y_{0,h}, \quad q = 1, \cdots, Q-1. \qquad (41)$$
Then, substituting (41) into (39), we can compute
$$y_h^Q(\xi) = H^Q(\xi)\, y_{0,h}, \qquad (42)$$
where $H^Q(\xi)$ is the random matrix defined as
$$H^Q(\xi) := I + \sum_{q=1}^{Q-1} \varepsilon^q\, A^{-1} C(\xi)^q.$$
Notice from the equality (42) that the random field and the physical space in $y_h^Q(\xi)$ are separated into a random matrix $H^Q(\xi)$ and a non-random variable $y_{0,h}$.
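The separated form (42) can be checked numerically. The sketch below assumes the mode recursion $A\, y_q = C(\xi)\, y_{q-1}$ (one reading of (40)-(41)), so that $y_q = (A^{-1}C(\xi))^q y_0$ and $H^Q(\xi) = \sum_{q<Q} (\varepsilon A^{-1}C(\xi))^q$; the matrices $A$, $C$ and the vector $y_0$ are randomly generated stand-ins, not assembled FEM matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q, eps = 8, 5, 0.1

# Hypothetical stiffness-type matrix A (symmetric, near 4*I, so nonsingular)
# and a random matrix C playing the role of C(xi) for one sample xi.
A = np.eye(N) * 4 + 0.1 * rng.standard_normal((N, N))
A = 0.5 * (A + A.T)
C = rng.standard_normal((N, N))
y0 = rng.standard_normal(N)

# Separated form: H^Q(xi) = sum_{q<Q} (eps * A^{-1} C)^q, then y^Q = H^Q y0.
Ainv_C = np.linalg.solve(A, C)
HQ = sum(np.linalg.matrix_power(eps * Ainv_C, q) for q in range(Q))
y_sep = HQ @ y0

# Mode-by-mode computation: y_q from the recursion A y_q = C y_{q-1},
# summed with weights eps^q as in the truncated expansion.
y_modes = np.zeros(N)
yq = y0.copy()
for q in range(Q):
    y_modes += eps ** q * yq
    yq = np.linalg.solve(A, C @ yq)
```

Both routes give the same truncated state, which is exactly what lets the randomness be factored out of the state equation and moved into the cost functional.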
With the help of equality (42), we are ready to obtain the new optimal control problem. Substituting (42) into the cost functional yields the expanded functional $F_h(y_{0,h}, u_h)$, in which randomness only enters through $\mathbb{E}\big[ H^Q(\xi)^T R(x)^T R(x) H^Q(\xi) \big]$ and $\mathbb{E}\big[ H^Q(\xi)^T \big]$, both of which can be estimated by Monte Carlo simulation once, at the very beginning.
Next we list some theoretical conclusions of Monte Carlo simulation in order to get further global error estimates of our algorithms.
Suppose $\{\xi_i\}_{i=1}^M$ are independent, identically distributed samples in the probability space $(\Omega, \mathcal{F}, P)$. For given $v(x,\xi) \in Y$, define the Monte Carlo approximation of the expectation
$$\mathbb{E}_M[v](x) := \frac{1}{M} \sum_{i=1}^M v(x, \xi_i). \qquad (44)$$
The following estimate can be found in [2]:
$$\big\| \mathbb{E}[v] - \mathbb{E}_M[v] \big\| \le C\, M^{-1/2}\, \|v\|_Y. \qquad (45)$$
Applying the Monte Carlo simulation (44) to the expanded functional $F_h(y_{0,h}, u_h)$, we obtain from (32) the deterministic optimal control problem given in (46). Compared with the stochastic optimal control problem $(P_h^Q)$ in (32), the new problem possesses a simpler composition, with a deterministic state variable, control variable, and state equation. Although the two problems have different formulations, they are almost equivalent: the main change is replacing the mathematical expectation by its Monte Carlo approximation. The whole transformation from (32) to (46) is carried out via the equality (42), and most importantly the control variables are identical, which leads to similar first-order necessary conditions for the two problems.
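A minimal illustration of the estimator $\mathbb{E}_M$ in (44) applied to random matrices (the sample matrix, mimicking $H^Q(\xi)^T H^Q(\xi)$, is hypothetical; $\xi$ is uniform on $[-0.5, 0.5]$ as in the experiments of Section 6):

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps = 4, 0.1

def E_M(sampler, M):
    """Monte Carlo estimate E_M[v] = (1/M) sum_i v(xi_i) for matrix samples."""
    return sum(sampler() for _ in range(M)) / M

def sampler():
    """Hypothetical random matrix P(xi)^T P(xi) with P = I + eps*xi*J."""
    xi = rng.uniform(-0.5, 0.5)
    P = np.eye(N) + eps * xi * np.ones((N, N))
    return P.T @ P

M = 2000
est = E_M(sampler, M)
# Closed-form mean: E[P^T P] = I + eps^2 * E[xi^2] * N * J with E[xi^2] = 1/12.
exact = np.eye(N) + (eps ** 2 / 12.0) * N * np.ones((N, N))
err = np.abs(est - exact).max()   # O(M^{-1/2}), cf. (45)
```

Because these expectations are computed once, before the optimization loop starts, the per-iteration cost of the resulting deterministic problem is independent of $M$.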
The following proposition gives the first-order necessary condition of the problem in (46), derived by substituting the Monte Carlo approximation $\mathbb{E}_M$ for the mathematical expectation $\mathbb{E}$ in (33) and (34).

Proposition 1. The problem in (46) admits a unique FEM optimal solution $(\bar y_{0,h}, \bar u_h)$ with FEM functions $\bar y_{0,h}(x) = R(x)\, \bar y_{0,h}$ and $\bar u_h(x) = R(x)\, \bar u_h$, and the optimal control $\bar u_h$ satisfies the first-order necessary condition (47) in the UC case and (48) in the BCC case.

4. The ADMM for the discrete problem (46) under unconstrained and box constraints. In this section we solve the optimal control problem in (46) by an alternating direction method of multipliers (ADMM). The origin of ADMM can be traced back to the 1950s for nonlinear variational problems [17]. In recent years it has mainly been applied to sparse signal recovery, image restoration, compressed sensing, machine learning, and so on [29,31]. ADMM is an efficient numerical algorithm for convex optimization problems, combining the benefits of dual decomposition and augmented Lagrangian methods. Relevant theoretical results on ADMM can be found in [6,7,12,18,19].
The advantages of ADMM for solving the PDE-constrained optimization problem (46) can be summarized as follows: (1) ADMM enjoys global convergence with a convergence rate of O(1/k), where k denotes the number of iterations, in contrast to locally convergent methods such as Newton's method or SQP. (2) ADMM exploits the fact that the control variable and the state variable are decoupled in the cost functional, which reduces the computational cost. (3) The stiffness matrix on the left-hand side is invariant across iterations, so we only need to compute the inverse (or a factorization) once in the whole process, which sharply reduces the time spent solving the algebraic systems [33].
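Advantage (3) can be illustrated as follows: since the matrix on the left-hand side is the same in every iteration, its inverse (or factorization) can be computed once and reused for each new right-hand side; the matrix and right-hand sides below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
A = np.eye(N) * 10 + rng.standard_normal((N, N))      # fixed system matrix
rhs_list = [rng.standard_normal(N) for _ in range(5)]  # RHS changes per iteration

# Naive: solve from scratch in every ADMM iteration.
sols_naive = [np.linalg.solve(A, b) for b in rhs_list]

# Reuse: since A never changes across iterations, invert (or factor) once.
A_inv = np.linalg.inv(A)
sols_reuse = [A_inv @ b for b in rhs_list]

agree = all(np.allclose(s1, s2) for s1, s2 in zip(sols_naive, sols_reuse))
```

In practice a sparse LU or Cholesky factorization would be reused rather than an explicit inverse, but the cost structure is the same: one expensive factorization, then cheap solves in every iteration.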
Next we present two ADMM algorithms, for the unconstrained case and the box-constrained case separately.

4.1. Unconstrained case. Because the cost functional in (46) has a variable-separable structure, it can be divided into two independent parts. The augmented Lagrangian function for the problem is then defined accordingly, where $\lambda_h \in V_h$ is the Lagrange multiplier and $\rho$ is the penalty parameter; the problem is equivalent to the saddle-point problem (54). Let $\tau$ be the error tolerance and $\tau_k$ the iteration error at the $k$-th step. The ADMM for solving (54) in the unconstrained case is given by Algorithm 1 below.
Algorithm 1: ADMM for UC.

One advantage of applying ADMM to this optimal control problem is that $y_{0,h}^{k+1}$, $u_h^{k+1}$ and $\lambda_h^{k+1}$ admit closed-form updates at each iteration step, obtained by a simple calculation.

Secondly, to obtain the ADMM form for the box-constrained case, we regard $y_{0,h}$ and $x_h$ as a single global variable $\omega = (y_{0,h}, x_h)$. The discrete optimal control problem (46) under box constraints can then be rewritten as a two-block problem, and the equivalent saddle-point problem (54) is solved by the following two-block ADMM.

Algorithm 2: Two-block ADMM for BCC.

begin: set the initial values $u_h^0$, $\lambda_{1h}^0$, $\lambda_{2h}^0$; while the iteration error exceeds the tolerance $\tau$, update $y_{0,h}^{k+1}$, $\omega^{k+1}$ and the multipliers by the corresponding arg-min and dual-ascent steps.

The sub-problems in Algorithm 2 can be computed directly without inner iterations, using the box projection $P_{[u_a, u_b]}(v) = \max(u_a, \min(u_b, v))$; with this projection, the formulas in each iterative step of Algorithm 2 become explicit. We finish this part by giving the convergence rate of the general ADMM. Because Algorithm 1 and Algorithm 2 have essentially the same structure, they share common convergence results.
Lemma 4.1. Both algorithms are contractive and satisfy the estimate (56), where $v_h^0 = (u_h^0, \lambda_h^0)$ is the initial value of the ADMM.

Remark 1 ([8]). Let $\bar u_h$ be the exact solution of the problem in (46) and $u_h^k$ the numerical solution after $k$ iteration steps of Algorithm 1 or Algorithm 2. Then, in the general case, Lemma 4.1 implies the theoretical convergence rate
$$\| u_h^k - \bar u_h \|_H = O(1/\sqrt{k}),$$
where $H$ is a given matrix related to $B$ and $\rho$ (the concrete definition can be found in [8]). The computed results often show a faster convergence rate than $O(1/\sqrt{k})$ (see the numerical simulations in Section 6).
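To make the two-block scheme concrete, the following self-contained sketch applies ADMM with the box projection $\max(u_a, \min(u_b, \cdot))$ to a small generic quadratic model $\min \frac12\|Ku - d\|^2 + \frac{\alpha}{2}\|u\|^2$ subject to bounds; $K$, $d$ and all parameter values are illustrative, not the paper's discrete system:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
K = rng.standard_normal((n, n))
d = rng.standard_normal(n)
alpha, rho, ua, ub = 0.1, 1.0, -0.3, 0.3

# Split u = z, with the box constraint moved onto z.
Hfac = np.linalg.inv(K.T @ K + (alpha + rho) * np.eye(n))  # factor once (cf. (3))
u = z = lam = np.zeros(n)
for _ in range(5000):
    u = Hfac @ (K.T @ d + rho * z - lam)        # smooth block: linear solve
    z = np.clip(u + lam / rho, ua, ub)          # projection max(ua, min(ub, .))
    lam = lam + rho * (u - z)                   # multiplier (dual ascent) step

# Optimality check: z is a fixed point of the projected-gradient map.
grad = K.T @ (K @ z - d) + alpha * z
fixed_point_gap = np.max(np.abs(z - np.clip(z - 1e-3 * grad, ua, ub)))
```

The two sub-problems are exactly as cheap as claimed: one pre-factored linear solve and one componentwise clipping per iteration.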
5. Convergence analysis. This section provides error estimates for the proposed algorithms. Let $\bar u$, $\bar u_h$, $u_h^k$ denote the optimal control of the problem $(P)$ in (9), the exact solution of the problem in (46), and the iterative solution of Algorithm 1 or Algorithm 2, respectively. The ultimate goal is the global error estimate for $\|\bar u - R(x) u_h^k\|_{L^2(\partial D)}$. To this end, we first show the discretization error estimate for $\|\bar u - \bar u_h\|_{L^2(\partial D)}$, where $\bar u_h = R(x)\bar u_h$ is the FEM function corresponding to $\bar u_h$. Then, with the existing ADMM convergence rate (56), the global error estimate follows.
5.1. Discretization error estimates. Let us start with the unconstrained case. Similar error estimates have been discussed in our earlier article [26], so we draw the following conclusion directly.
Theorem 5.1. In Algorithm 1 for the optimal control problem in (46), let $M$ be the number of samples for the MC simulation, $h$ the FEM mesh size, $\varepsilon$ the magnitude of the random perturbation, and $Q$ the number of MME modes. Then Algorithm 1 enjoys the following estimate, where $K_1$ and $K_2$ are constants independent of $M$ and $h$, and $0 < \sigma \le 1$.
Proof. The estimate follows from the first-order necessary conditions (12) for $(P)$ and (47) for the discrete problem in the unconstrained case, Lemma 3.3, and the Monte Carlo error estimate (45). For a rigorous proof the reader is referred to [26].
Next, the box-constrained case is considered. For ease of reference, we restate the first-order necessary conditions (13) for $(P)$ and (48) for the discrete problem in the box-constrained case below. To estimate $\|\bar u - \bar u_h\|_{L^2(\partial D)}$ from (58) and (59), the standard technique is to insert $u = \bar u_h$ in (58); if we could also take $v_h = \bar u$ in (59), the estimate would follow simply by adding the two inequalities. Unfortunately, this is not possible because $\bar u$ does not belong to $U_h^{ad}$. Therefore, a discrete control $\hat u_h \in U_h^{ad}$ is constructed to approximate $\bar u$. In particular, $\hat u_h$ should satisfy $G'(\bar u)(\bar u - \hat u_h) = 0$ for the subsequent estimates, where the functional $G$ is defined in (11). Following the idea in [10], we construct the discrete control $\hat u_h \in U_h^{ad}$ with $\hat u_h(x_j) = \hat u_{h,j}$ for every node $x_j \in \partial D$ as in (60). The following lemma shows that $\hat u_h$ satisfies the required properties.
With the help of the function $\hat u_h$ we can now prove the following theorem.
Theorem 5.3. Suppose $M$ is the number of samples for the MC simulation, $h$ the FEM mesh size, $\varepsilon$ the magnitude of the random perturbation, and $Q$ the number of MME modes in the optimal control problem in (46). Then, for the problem in the box-constrained case, the following estimate holds, where $p \ge 2$.
Proof. Inserting $\bar u_h \in U_{ad}$ into (13) and $\hat u_h \in U_h^{ad}$ into (48), we obtain two inequalities. For simplicity, we omit the subscript of the pairing $\langle \cdot, \cdot \rangle_{U',U}$ in the following. Adding these two inequalities and re-ordering the result, we divide the right-hand side into four terms and estimate them step by step. For the first term I, we use the estimate (45), the Cauchy-Schwarz inequality and the triangle inequality; for the further estimation of I, we use the estimate (30) with $j = 4$ and the approximation property (61). For the second term II, using the Cauchy-Schwarz inequality, the error estimate (31), the estimate of $S$ in (8), and (30) with $j = 2$, the bound follows readily; the term III is estimated in the same way. It remains to estimate IV. Before proceeding further, we record an important identity which follows from the second property of $\hat u_h$ in Lemma 5.2.
With the above identity, we now turn to the estimation of IV.
Substituting the estimates for I, II, III and IV into the inequality (65), applying Young's inequality, and rearranging, we obtain the asserted estimate, which completes the proof.

5.2. Global error estimate. Finally, we summarize the above conclusions and arrive at the following global error estimate for the control variables.
Lemma 5.4. Let $\bar u$ be the optimal control of the problem $(P)$ in (9) under either the unconstrained case (UC) or the box-constrained case (BCC), and let $u_h^k$ be the iterative solution of Algorithm 1 or Algorithm 2, respectively. Then the following global error estimate holds:
$$\|\bar u - R(x)\, u_h^k\|_{L^2(\partial D)} \le C \Big( M^{-1/2} + h^{s} + \varepsilon^{Q} + \tfrac{1}{\sqrt{k}} \Big),$$
where $C$ is a generic constant and
$$s = \begin{cases} 1 + \sigma, & \text{for UC}, \\ 1 - \tfrac{1}{p}, & \text{for BCC}, \end{cases}$$
with $0 < \sigma \le 1$ and $p \ge 2$.
The above lemma follows readily from Theorem 5.1, Theorem 5.3, Remark 1 and the triangle inequality.

6. Numerical experiments. We now solve optimal control problems constrained by random Poisson equations and show the convergence rates of FEM, ADMM and MME for Algorithm 1 and Algorithm 2 separately. As is common in the literature, we choose the random variable to be uniformly distributed in [-0.5, 0.5]; the magnitude $\varepsilon$ of the random fluctuation is set to different values in the tests below. First, the mean value of the coefficient matrices is computed by the Monte Carlo method with 1000 samples (which guarantees that the MC error is around $3.2 \times 10^{-3}$), and the problem is discretized by continuous piecewise linear finite elements. The resulting deterministic finite-dimensional optimization problem is solved with ADMM: classical ADMM for the unconstrained case and two-block ADMM for the box-constrained case.

Test 1: The convergence rates of FEM. We start by examining the FEM convergence rates numerically. The FEM mesh sizes are set to $h = 1/2, 1/4, 1/8, 1/16, 1/32$ and $1/64$. Since no exact solution is available for this problem, the error is computed with respect to a reference numerical solution obtained with $h = 1/128$. The number of iterations is $k \approx 200$ in both the unconstrained and box-constrained cases, which guarantees that the ADMM error is about $10^{-4}$. The perturbation magnitude is $\varepsilon = 0.1$, and three modes (i.e. $Q = 3$) are used in the MME approximation. Figure 1 gives the FEM convergence rates of the numerical solutions $u_h^k$ and $y_h^k$ for Algorithm 1 (left) and Algorithm 2 (right). From Figure 1 we find that the FEM convergence rate of $u$ in the unconstrained case is approximately of order 2, and in the box-constrained case approximately of order 1.5; both are higher than the theoretical results.
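The convergence orders reported in Test 1 are obtained by comparing errors on successive meshes; a small utility of this kind (run here on synthetic $O(h^2)$ data, not the paper's results) computes the observed order:

```python
import numpy as np

def observed_orders(hs, errs):
    """Observed convergence orders between successive mesh refinements:
    p_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    hs, errs = np.asarray(hs, float), np.asarray(errs, float)
    return np.log(errs[:-1] / errs[1:]) / np.log(hs[:-1] / hs[1:])

# Synthetic errors behaving like C h^2 (illustrative, not the paper's data).
hs = [1/2, 1/4, 1/8, 1/16, 1/32, 1/64]
errs = [0.31 * h ** 2 for h in hs]
orders = observed_orders(hs, errs)
```

With reference-solution errors in place of exact errors (as in Test 1), the same formula applies as long as the reference mesh is much finer than the test meshes.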
Test 2: The convergence rates of ADMM. We consider the convergence rates of ADMM in the $H$-norm numerically in both cases. The perturbation magnitude is $\varepsilon = 0.1$, and three MME modes are used. The FEM mesh size is fixed at $h = 1/64$ so that the ADMM errors $\|\bar u_h - u_h^k\|_H$ are the major error source. Figure 2 shows the ADMM errors of the two algorithms against the iteration number $k$ on a logarithmic scale. We observe that the ADMM convergence rate is much faster than $O(1/\sqrt{k})$ in Algorithm 1; this is possible, since the control in the model problem under the unconstrained case is smooth. Compared with Algorithm 1, the ADMM convergence rate in Algorithm 2 is lower because of the non-smoothness of the control variable $u$ under the box constraint.
Test 3: The convergence rates of MME. The goal of this test is to validate the MME convergence rates numerically with respect to the perturbation magnitude $\varepsilon$ and the truncation number $Q$. The FEM mesh size is $h = 1/64$, and the tolerance is set to $10^{-4}$ (for which the number of ADMM iterations is about $k = 200$). First we fix $Q = 3$ to observe the convergence rate in $\varepsilon$ alone: Figure 3 shows a rate of approximately order 3, which is consistent with the theory. Then we fix $\varepsilon = 0.1$ and examine the error of the control variable $u$ for different numbers of modes. Table 1 gives the results: increasing the number of modes lowers the error of $u$, but also costs more CPU time, so we use three modes in both algorithms.

7. Conclusion. In this paper we presented two numerical algorithms for solving optimal control problems constrained by random Poisson equations under the unconstrained and box-constrained conditions. Both algorithms adopt the MME technique for the random space and FEM for the physical space discretization. Under the special state-variable substitution (42), the random constrained control problem can be approximated by a deterministic discrete optimization problem; Algorithm 1 and Algorithm 2 then use ADMM and two-block ADMM, respectively, to solve the fully discretized problem. A detailed error analysis of the discrete optimal solution is carried out under mild assumptions on the random input data. Because only one deterministic constrained control problem needs to be solved, our algorithms reduce the memory consumption sharply, especially for high-dimensional and large-scale problems that require a great number of samples.