RECEDING HORIZON CONTROL FOR THE STABILIZATION OF THE WAVE EQUATION

Abstract. Stabilization of the wave equation by the receding horizon framework is investigated. Distributed control, Dirichlet boundary control, and Neumann boundary control are considered. For each of these control actions, the well-posedness of the control system and the exponential stability of Receding Horizon Control (RHC) are investigated with respect to a proper functional analytic setting. Observability conditions are needed to show the suboptimality and exponential stability of RHC. Numerical experiments are given to illustrate the theoretical results.


1. Introduction. In this work we deal with the stabilization of the wave equation ÿ − ∆y = 0 within the scope of Receding Horizon Control (RHC), where y = y(t, x) is a real-valued function of the real variables t and x, and ÿ stands for the second derivative with respect to time. Our RHC acts either on a part of the domain or within Dirichlet or Neumann boundary conditions. The stabilization problem for the wave equation has been studied extensively by many authors, see for instance [2,23,26,34,38,47,50] and the references cited therein. In these contributions stabilization is achieved by means of a proper choice of a feedback control law, and only few of them provide numerical results. In this work, we use a control law which rests on the solutions of a sequence of open-loop optimal control problems governed by the wave equation on finite intervals. For the numerical and analytical study of open-loop optimal control problems for the wave equation, we refer to [14,24,25,32,33,45,46].
where, similar to the above case, Ω ⊂ R^n is a bounded domain with smooth boundary ∂Ω. Moreover, the two disjoint components Γ_c, Γ_0 are relatively open in ∂Ω and int(Γ_c) ≠ ∅. 3. Neumann control: In this case, we are dealing with the following one-dimensional wave equation with a Neumann control action at one end of the boundary: ÿ − y_xx = 0 in (0, ∞) × (0, L), y(·, 0) = 0 in (0, ∞), y_x(·, L) = u in (0, ∞), (y(0, ·), ẏ(0, ·)) = (y_0^1, y_0^2) in (0, L), where L > 0. By denoting Y(t) := (y(t), ẏ(t)), and choosing an appropriate control space U, each controlled system in the above cases can be rewritten as a first-order controlled system in an abstract Hilbert space H, where for each case the state space H, the unbounded operator A, and the control operator B will be specified appropriately below; compare also, e.g., [36,49,51]. In particular it will be guaranteed that for every T > 0 and u ∈ L^2(0, T; U), there exists a unique solution Y ∈ C^0([0, T]; H) to (AP) which satisfies an estimate with a constant c_est independent of Y_0 and u. Now we can reformulate our infinite horizon problem as the following problem: inf{J_∞(u; Y_0) | (Y, u) satisfies (AP), u ∈ L^2(0, ∞; U)}, where the incremental function ℓ : H × U → R_+ is given below and β is a positive constant. To deal with the infinite horizon problem (OP_∞), one can employ the algebraic Riccati equation, see, e.g., [27,37,39]. But for infinite-dimensional controlled systems, discretization leads to finite-dimensional Riccati equations of very large order, and ultimately one is confronted with the curse of dimensionality. Model reduction techniques do not offer an efficient alternative either. In fact, the transfer function corresponding to the controlled system (2)-(4) has infinitely many unstable poles and thus model reduction based on balanced truncation will not produce finite H_∞-error bounds, see, e.g., [16].
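For orientation, the abstract first-order form and the quadratic cost can be sketched as follows. This is a reconstruction from the surrounding text, not a verbatim quote of the paper's displays; the operators A, B and the spaces H, U are specified per case, and the quadratic form of ℓ is a standard choice consistent with the growth properties required later:

```latex
\begin{align*}
  &\dot{Y}(t) = \mathcal{A}\,Y(t) + \mathcal{B}\,u(t) \ \text{ for } t>0,
   \qquad Y(0) = Y_0 \in \mathcal{H}, \tag{AP}\\
  &\|Y\|_{C^0([0,T];\mathcal{H})}
   \le c_{est}\bigl(\|Y_0\|_{\mathcal{H}} + \|u\|_{L^2(0,T;\mathcal{U})}\bigr),\\
  &\inf\Bigl\{\, J_\infty(u;Y_0) := \int_0^\infty
   \ell\bigl(Y(t),u(t)\bigr)\,dt \;\Big|\; (Y,u) \text{ satisfies (AP)},\;
   u \in L^2(0,\infty;\mathcal{U}) \Bigr\}, \tag{OP$_\infty$}\\
  &\ell(Y,u) := \tfrac{1}{2}\,\|Y\|_{\mathcal{H}}^2
   + \tfrac{\beta}{2}\,\|u\|_{\mathcal{U}}^2, \qquad \beta>0.
\end{align*}
```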
An alternative approach to deal with (OP_∞) is the receding horizon framework. In this framework, the stabilizing control, namely RHC, is obtained by concatenation of a sequence of open-loop optimal controls on a sequence of overlapping temporal intervals. Further, the process of generating the sequence of intervals and the concatenation are carried out in such a way that the resulting control has a feedback mechanism and is defined on the whole of the interval [0, ∞). Indeed, the receding horizon framework bridges, to a certain degree, the gap between open- and closed-loop control. In the past three decades, numerous results have been published on RHC for finite-dimensional systems, among them we can mention [13,20,22,29,44,48] and the references therein. More recently, some authors have addressed the case of infinite-dimensional systems as well [3,21,28]. Here we employ the receding horizon framework which was proposed in [48] for finite-dimensional controlled systems, and in [3] for infinite-dimensional controlled systems. In this framework, neither terminal costs nor terminal constraints are imposed on the subproblems in order to guarantee the stability of RHC. Rather, by defining an appropriate sequence of overlapping temporal intervals and applying a suitable concatenation scheme, one can ensure the stability and also the suboptimality of RHC. In the previous work [3], this RHC was applied to the stabilization of the Burgers equation with different boundary conditions. In addition, based on a stabilizability condition, the asymptotic stability and suboptimality of RHC were investigated. In the present work, we investigate the suboptimality and exponential stability of RHC for all the cases 1-3 of the wave equation with respect to an appropriate functional analytic setting. The key properties are the observability conditions which were not available for the Burgers equation in [3].
With the help of these conditions, we obtain not just asymptotic but exponential stability of RHC.
Turning to the receding horizon approach, we choose a sampling time δ > 0 and an appropriate prediction horizon T > δ. Then, we define sampling instances t_k := kδ for k = 0, 1, 2, . . . . At every sampling instance t_k, an open-loop optimal control problem is solved over a finite prediction horizon [t_k, t_k + T]. Then the optimal control is applied to steer the system from time t_k with the initial state Y_rh(t_k) until time t_{k+1} := t_k + δ, at which point a new measurement of the state is assumed to be available. The process is repeated starting from the new measured state: we obtain a new optimal control and a new predicted state trajectory by shifting the prediction horizon forward in time. The sampling time δ is the time period between two sampling instances. Throughout, we denote the receding horizon state and control variables by Y_rh(·) and u_rh(·), respectively. Also, (Y_T^*(·; Y_0, t_0), u_T^*(·; Y_0, t_0)) stands for the optimal state and control of the optimal control problem with finite time horizon T and initial function Y_0 at initial time t_0. Next, we summarize these steps in Algorithm 1.
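The loop just described can be sketched in a few lines of Python. Here `solve_open_loop` and `advance` are placeholders for the finite horizon open-loop solver and the state propagator; their names and signatures are illustrative, not taken from the paper:

```python
# Sketch of the receding horizon loop (Algorithm 1).
# `solve_open_loop(Y, t0, T)` stands in for solving the open-loop problem
# on [t0, t0 + T] with initial state Y; `advance(Y, u, dt)` integrates the
# state under control u for time dt. Both are illustrative placeholders.
def receding_horizon(Y0, T, delta, n_samples, solve_open_loop, advance):
    """Concatenate open-loop optimal controls on overlapping windows.

    T     : prediction horizon
    delta : sampling time (0 < delta < T)
    Returns the list of control pieces and the final state.
    """
    assert 0 < delta < T
    Y = Y0
    u_rh = []                              # receding horizon control, piece by piece
    for k in range(n_samples):
        t_k = k * delta
        # solve the open-loop problem on [t_k, t_k + T] with initial state Y
        u_star = solve_open_loop(Y, t_k, T)
        # keep only the first sub-interval [t_k, t_k + delta] of the control
        u_piece = lambda t, u=u_star, t0=t_k: u(t) if t0 <= t <= t0 + delta else 0.0
        u_rh.append(u_piece)
        # steer the system until t_{k+1}; this state plays the role of
        # the new "measurement" at the next sampling instance
        Y = advance(Y, u_star, delta)
    return u_rh, Y
```

For a scalar toy system ẏ = y + u with a stabilizing open-loop solver, the concatenated control drives the state toward zero, mirroring the mechanism used for the wave equation below.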

Algorithm 1 Receding Horizon Algorithm
Require: Let the prediction horizon T, the sampling time δ < T, and the initial point (y_0^1, y_0^2) ∈ H be given. Then we proceed through the following steps:
1: k := 0, t_0 := 0 and Y_rh(t_0) := (y_0^1, y_0^2).
2: Find the optimal pair (Y_T^*(·; Y_rh(t_k), t_k), u_T^*(·; Y_rh(t_k), t_k)) of the open-loop problem on [t_k, t_k + T].
3: Set u_rh(t) := u_T^*(t; Y_rh(t_k), t_k) for t ∈ [t_k, t_{k+1}).
4: Compute Y_rh(t_{k+1}) by steering the system with u_rh from the state Y_rh(t_k).
5: Set k := k + 1 and go to step 2.

1.1. Stability and Suboptimality of RHC. Throughout this paper, we use the following definitions:
Definition 1.1 (Value function). For every pair (y_0^1, y_0^2) =: Y_0 ∈ H, the infinite horizon value function V_∞ : H → R_+ is defined as the optimal value of (OP_∞) for the initial state Y_0. Similarly, the finite horizon value function V_T : H → R_+ is defined as the optimal value of the corresponding finite horizon problem.
In order to show the exponential stability and suboptimality of the receding horizon control obtained by Algorithm 1, we need to verify the following properties. Since, in Algorithm 1, the solution of (OP_∞) is approximated by solving a sequence of finite horizon open-loop optimal control problems, one needs to be sure, a priori, that each of the optimal control problems in Step 2 of Algorithm 1 is well-defined:
P1: For every Y_0 ∈ H and T > 0, every finite horizon optimal control problem of the given form admits a solution.
Moreover, we require the following properties for the finite horizon value function V_T:
P2: For every T > 0, V_T has a quadratic growth rate with respect to the H-norm. That is, there exists a continuous, non-decreasing, and bounded function γ_2 : (0, ∞) → R_+ such that V_T(Y_0) ≤ γ_2(T) ‖Y_0‖_H^2 for all Y_0 ∈ H.
P3: For every T > 0, V_T is uniformly positive with respect to the H-norm. In other words, for every T > 0 there exists a constant γ_1(T) > 0 such that V_T(Y_0) ≥ γ_1(T) ‖Y_0‖_H^2 for all Y_0 ∈ H.
Remark 1.6. It is of interest to derive the exponential decay inequality (13) in an alternative way to the one above. In particular, the constants ζ and c can be estimated in a different manner. Namely, due to [3, Theorem 7], there exists a T* > 0 such that for every T ≥ T* the decay estimate holds, where the constants c and ζ can be expressed in terms of θ_1(T, δ) and θ_2(T, δ).
Here θ_1(T, δ), θ_2(T, δ) are defined as in Lemma 1.3. Using properties P2 and P3 we obtain the corresponding suboptimality bound, where θ_1, θ_2 are defined as in Lemma 1.3 and T* is chosen such that α(T*) > 0 holds. Now, since γ_2(T) is a bounded function and δ is fixed, the corresponding limit holds as T → ∞. Therefore, asymptotically the RHC strategy is optimal.
The rest of the paper is organized as follows: Sections 2, 3, and 4 deal, respectively, with the cases in which RHC enters as a distributed control, a Dirichlet boundary condition, and a Neumann boundary condition. In each of these sections, first the well-posedness of the finite horizon optimal control problems (i.e., property P1) and the corresponding optimality conditions are investigated. Then, relying on observability conditions, properties P2 and P3 are analysed. Finally, in Section 5, we present numerical experiments in which Algorithm 1 is implemented for each type of control action. In addition, for each example the performance of RHC is evaluated and compared for different choices of the prediction horizon T and a fixed sampling time δ.
2.1. On the finite horizon optimal control problems. For our subsequent work we need to gather some facts on finite horizon optimal control problems of the form (OP_T), posed over all u ∈ L^2(0, T; L^2(ω)) and subject to the distributed control system, where (y_0^1, y_0^2) ∈ H_1 and the incremental function is defined by (21). Property P1 is verified by means of the following proposition.
Proof. For the proof we refer to [40].
In the following we derive the first-order optimality conditions for (P_dis). Due to the presence of the tracking term for the velocity ẏ in the performance index of (P_dis), we will see that the solution of the adjoint equation exists in the very weak sense.
Proposition 2.6. Let (ȳ, ū) be the optimal solution to (P_dis). Then it satisfies the following optimality conditions.
Proof. The proof is given in Appendix A.1.

2.2. Verification of P2 and P3. In this subsection we deal with the verification of properties P2 and P3. Concerning this matter, we recall some aspects of the stabilizability of the wave equation with a distributed feedback law. Specifically, we consider the controlled system with the feedback control u given by u(y) := −a(x)ẏ, where the function a ∈ L^∞(Ω) satisfies a_1 ≥ a(x) ≥ a_0 > 0 for almost every x ∈ ω, and a(x) = 0 in Ω \ ω.
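As a quick numerical illustration of this feedback law, a finite difference simulation of the 1D damped wave equation ÿ − y_xx + a(x)ẏ = 0 with homogeneous Dirichlet conditions and damping supported on a subinterval ω shows the expected energy decay. All parameters here are illustrative and not taken from the paper:

```python
import numpy as np

def damped_wave_energy(n=200, steps=2000, a0=1.0):
    """Return the discrete energies E(t) ~ 1/2 * sum(y_t^2 + y_x^2) * h for
    the 1D wave equation with the distributed feedback u(y) = -a(x) * y_t."""
    h = 1.0 / n
    dt = 0.4 * h                                   # CFL-stable time step
    x = np.linspace(0.0, 1.0, n + 1)
    a = np.where((x > 0.3) & (x < 0.7), a0, 0.0)   # damping active on omega = (0.3, 0.7)
    y = np.sin(np.pi * x)                          # initial displacement
    v = np.zeros_like(x)                           # initial velocity
    energies = []
    for _ in range(steps):
        # centered second difference for y_xx on interior nodes
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2
        v += dt * (lap - a * v)                    # velocity update with damping
        y += dt * v                                # symplectic Euler position update
        y[0] = y[-1] = 0.0                         # Dirichlet boundary conditions
        grad = np.diff(y) / h
        energies.append(0.5 * h * (np.sum(v**2) + np.sum(grad**2)))
    return energies
```

Running this, the energy decays although the damping acts only on ω, which is the phenomenon the observability condition below makes rigorous; switching the damping off (a0 = 0) leaves the energy approximately conserved.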
The following observability conditions will be used later.
To specify the required observability conditions, for any (φ_0^1, φ_0^2) ∈ H_1 we denote by φ the weak solution of the system (29). Then we can formulate the following observability conditions: OB1: There exists T_ob1 > 0 such that for every T ≥ T_ob1, the weak solution φ to (29) satisfies the observability inequality, where the positive constant c_ob1 depends only on T and ω ⊆ Ω. OB2: There exists T_ob2 > 0 such that for every T ≥ T_ob2, the weak solution φ to (29) satisfies the observability inequality, where the positive constant c_ob2 depends only on T and Γ_c ⊆ ∂Ω. The observability conditions OB1-OB2 are satisfied if and only if the Geometric Control Conditions (GCC) hold (see, e.g., [8,11]). Roughly speaking, GCC for (Ω, ω, T_ob1) (resp. (Ω, Γ_c, T_ob2)) states that all rays of geometric optics should enter the subdomain ω (resp. meet the boundary Γ_c) in a time smaller than T_ob1 (resp. T_ob2). For a comprehensive study, we refer to [8].
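In the standard form found in the literature cited above (e.g., [8,11]), such inequalities read as follows. This is a hedged reconstruction of the elided displays, with the observation taken in the velocity on ω for OB1 and in the normal derivative on Γ_c for OB2, as is customary for distributed and Dirichlet boundary control, respectively:

```latex
\begin{align*}
  \text{OB1:}\quad
  &\|(\varphi_0^1,\varphi_0^2)\|_{\mathcal{H}_1}^2
   \le c_{ob1} \int_0^T\!\!\int_{\omega} |\dot{\varphi}(t,x)|^2 \,dx\,dt,\\
  \text{OB2:}\quad
  &\|(\varphi_0^1,\varphi_0^2)\|_{\mathcal{H}_1}^2
   \le c_{ob2} \int_0^T\!\!\int_{\Gamma_c}
   \Bigl|\frac{\partial \varphi}{\partial \nu}(t,s)\Bigr|^2 \,ds\,dt.
\end{align*}
```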
The following equivalence is frequently mentioned in the literature. Since it is not straightforward to find a proof, we provide the arguments here.
Proof. First we show that OB1 implies exponential stabilizability. We set u(y) := −aẏ in (27). In this case the resulting closed-loop system is well-posed (see, e.g., [12]) and its unique weak solution satisfies the corresponding decay estimate. Now, for an arbitrary T > 0 consider the controlled system (31). Taking the L^2-inner product of (31) with ẏ and integrating over [0, T], we obtain the estimate (32). By a density argument and passing to the limit, the inequality (32) also holds for the weak solution of (31) with initial data (y_0^1, y_0^2) ∈ H_1. Moreover, the solution y of (31) can be decomposed by superposition. By the observability condition OB1 and estimate (24) for (33), we obtain the bound (34) for a constant c_1 > 0 which is independent of (y_0^1, y_0^2). By (32) and (34) we obtain, for every k = 1, 2, . . ., the corresponding contraction estimate and, as a consequence, for every t ∈ [kT_ob1, (k + 1)T_ob1] we infer the exponential decay. Thus we conclude (30).
Next we show that the stabilizability property (30) implies the observability condition OB1 for (29) with an arbitrary initial pair (y_0^1, y_0^2) ∈ H_1. Setting u(y) := −aẏ in (27) with a ∈ L^∞(Ω) satisfying (28), taking the L^2-inner product of (27) with ẏ, and integrating over [0, t] for t > 0, we obtain an energy identity, where a_1 is specified in (28). Further, by (30), for a large enough T > 0 we have the corresponding bound. Moreover, the solution φ to (29) with the initial pair (y_0^1, y_0^2) can be written as φ := y − ψ, where y is the weak solution to (31) and ψ is the weak solution to (33). Now assume that the solution of (33) is smooth enough. Taking the L^2-inner product of (33) with ψ̇ and integrating over [0, T], we obtain (37). By a density argument and passing to the limit, the inequality (37) also holds for the weak solution of (33) with −aẏ as a forcing function. Moreover, (37) implies (38). Note also that

BEHZAD AZMI AND KARL KUNISCH
Combining (36), (38), and (39), we complete the proof. Now we are in a position to investigate properties P2 and P3.
Proposition 2.8. Suppose that the observability condition OB1 holds. Then for every T > 0, there exists a control û ∈ L^2(0, T; L^2(ω)) for (26) such that for every initial pair (y_0^1, y_0^2) ∈ H_1 the estimate (40) holds, where γ_2 is a nondecreasing, continuous, and bounded function. Moreover, there exists a constant γ_1(T) > 0 such that (41) holds.
Proof. Choosing the feedback control for (26) and using Proposition 2.7, we obtain a pointwise exponential decay estimate. Here the constants M and α were defined in Proposition 2.7. Integrating from 0 to T implies the corresponding integral bound. By the definition of the value function V_T we conclude that (40) holds.
To verify (41), we use a superposition argument for equation (26) with an arbitrary control u ∈ L^2(0, T; L^2(ω)). We rewrite the solution of (26) as y = φ + ϕ, where φ is the solution to (29) with the initial pair (y_0^1, y_0^2) instead of (φ_0^1, φ_0^2), and ϕ is the solution to equation (42). By OB1 for (29) with the initial pair (y_0^1, y_0^2) and ω replaced by Ω, and by (24) for (42), we obtain the desired bound. Since u ∈ L^2(0, T; L^2(ω)) is arbitrary, we obtain (41) for a constant c_1(T) independent of u and (y_0^1, y_0^2).
Remark 2.9. From Propositions 2.5 and 2.8, we conclude that Theorem 1.5 is applicable for the wave equation with distributed control and guarantees the exponential stability of the RHC obtained by Algorithm 1.
Proof. The proof is similar to the one of Theorem 2.1 in [46].
Next we specify the first-order optimality conditions for (OP_dir). Since the objective function in (OP_dir) involves the tracking term for the velocity ẏ in the space L^2(0, T; H^{-1}(Ω)), the solution to the adjoint equation gains more regularity than the one to (48), and this solution exists in the weak sense.
Proposition 3.5. Let (ȳ, ū) be the optimal solution to (OP_dir). Then it satisfies the following optimality conditions, where the adjoint state is the solution of the adjoint equation.
Proof. The proof is given in Appendix A.2.

3.2. Verification of P2 and P3. As in the previous section, we first show that there exists a feedback law u(y) that stabilizes the controlled system (3) with respect to the energy defined along a trajectory y.
Lemma 3.6. The observability condition OB1 is equivalent to the following observability inequality: OB3: For every T ≥ T_ob1, the very weak solution φ to (29) with (φ, φ̇) ∈ C^0([0, T]; H_2) satisfies the corresponding inequality, where the constants c_ob1, T_ob1 have been defined in the observability condition OB1.
Similarly, the observability condition OB2 is equivalent to the following observability condition: OB4: For every T ≥ T_ob2, the very weak solution φ to (29) with (φ, φ̇) ∈ C^0([0, T]; H_2) satisfies the corresponding inequality, where the constants c_ob2, T_ob2 have been defined in the observability condition OB2.
for positive constants M, α independent of (y_0^1, y_0^2), if and only if the observability condition OB2 holds.
Proof. A proof of the first direction of the equivalence can be found in, e.g., [1]. Nevertheless, we provide a proof here for completeness. First assume that condition OB2 holds. We show the exponential decay inequality (52).

‖(y(t), ẏ(t))‖_{H_2} ≤ M e^{−αt} ‖(y_0^1, y_0^2)‖_{H_2} for all t ∈ [0, T], where the constants M and α were defined in Proposition 3.8. By integrating from 0 to T we obtain ∫_0^T ‖(y(t), ẏ(t))‖_{H_2}^2 dt ≤ (M^2/(2α)) ‖(y_0^1, y_0^2)‖_{H_2}^2. Moreover, by (52) and (54) we obtain the corresponding estimate. Using (43), (61), and the definition of the value function V_T, we arrive at the bound which gives (59).
We will later use the following auxiliary problem.
It remains to show that y* is the weak solution to (64) corresponding to the control u*. To see this, we only need to pass to the limit in the weak formulation (65) for the pair of sequences (y_n, u_n). Due to (66), for every t ∈ [0, T] the sequence {ẏ_n(t)}_n is bounded in L^2(0, L). Hence, it has a subsequence converging weakly to a limit ȳ_t ∈ L^2(0, L). For any t ∈ [0, T], we define the operator I_t. This operator is continuous and, moreover, for every q ∈ V we have the corresponding identity, where I_t* : V → (H^1(0, T; V*))* is the adjoint operator of I_t. Therefore, we can pass to the limit in (65) with y replaced by y*, and y* is the weak solution to (64) corresponding to the control u*. Now, since the solution operator S : L^2(0, T) → L^∞(0, T; H_3) defined by u ↦ (y, ẏ) is affine and continuous, the objective function J_T(·; y_0^1, y_0^2) is weakly lower semi-continuous and we have 0 ≤ J_T(u*; (y_0^1, y_0^2)) ≤ lim inf_{n→∞} J_T(u_n; (y_0^1, y_0^2)) = σ.
We turn to the first-order optimality conditions for (OP_neu). Due to the presence of the tracking term for the velocity ẏ ∈ L^2(0, T; L^2(0, L)) in the objective function of (OP_neu), the solution to the adjoint equation has less regularity than the one to (64) and exists in the very weak sense only.
Proof. The proof is given in Appendix A.3.

Then we formulate the following observability inequalities:

5. Numerical Experiments.
This section is devoted to numerical simulations. In order to illustrate our theoretical results for the receding horizon Algorithm 1, we give numerical results for all three cases: distributed control, Dirichlet boundary control, and Neumann boundary control. We also give a short description of the discretization of the control and the state, the optimization algorithm, and the implementation of Algorithm 1.

5.1. Discretization. Among the many discretization approaches to the wave equation based on finite elements, we can mention the works [4,5,6,7,30,31]. Here we follow the framework which was investigated in [7] and applied to optimal control problems in [32]. In this framework, the open-loop problems are discretized, temporally and spatially, by appropriate finite elements, for which the approaches optimize-discretize and discretize-optimize commute; see, e.g., [10]. In all cases, for the discretization of the state we write the equation as a system of first-order equations in time. The spatial discretization was done by a conforming linear finite element scheme using continuous piecewise linear basis functions over a uniform mesh. This uniform mesh was generated by triangulation. For the temporal discretization of the state equation, a Petrov-Galerkin scheme based on continuous piecewise linear basis functions for the trial space and piecewise constant test functions was employed. By doing so, the resulting discretized system is equivalent to the system first discretized in space followed by the Crank-Nicolson time stepping method. Since the temporal test functions have been chosen to be piecewise constant, it is natural to also discretize the adjoint equation and the control by these functions. This implies that the approximated gradient is consistent with both the continuous and the discrete functional. In the case of the Dirichlet boundary control, the inhomogeneous Dirichlet condition y|_{Γ_c} = u was treated by interpreting u as the trace of a sufficiently smooth function ŷ and solving the equation for v = y − ŷ instead of y with homogeneous Dirichlet boundary conditions, see, e.g., [18, page 376] for more detail.
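The time stepping just described can be sketched for a 1D model problem: write the wave equation as a first-order system Y' = AY and advance it by Crank-Nicolson, which, as noted above, is what the Petrov-Galerkin scheme reduces to. Finite differences stand in for the finite element matrices here for brevity; all sizes and parameters are illustrative:

```python
import numpy as np

def crank_nicolson_wave(n=50, dt=0.01, steps=200):
    """Advance the 1D wave equation (homogeneous Dirichlet BCs) as a
    first-order system with Crank-Nicolson; return the discrete energies."""
    h = 1.0 / n
    m = n - 1                                    # interior nodes
    # 1D Dirichlet Laplacian (finite differences in place of FE matrices)
    L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    # first-order system Y = (y, y_t):  Y' = A Y with A = [[0, I], [L, 0]]
    A = np.block([[np.zeros((m, m)), np.eye(m)], [L, np.zeros((m, m))]])
    I = np.eye(2 * m)
    # Crank-Nicolson propagator: (I - dt/2 A)^{-1} (I + dt/2 A)
    P = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)
    x = np.linspace(h, 1.0 - h, m)
    Y = np.concatenate([np.sin(np.pi * x), np.zeros(m)])
    energies = []
    for _ in range(steps):
        y, v = Y[:m], Y[m:]
        energies.append(v @ v - y @ (L @ y))     # discrete energy ~ |y_t|^2 + |y_x|^2
        Y = P @ Y
    return energies
```

Crank-Nicolson preserves this quadratic energy exactly (up to roundoff) for the undamped system, which makes it a natural choice for the conservative wave equation.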

5.2. Optimization. Every discretized open-loop problem was first formulated as a reduced problem. The resulting unconstrained optimization problem consists of minimizing a reduced objective function which depends only on the control variable u. These reduced problems were then solved by applying the Barzilai-Borwein (BB) method [9] equipped with a nonmonotone line search [17]. The optimization algorithm was terminated as soon as the L^2(0, T; U)-norm of the gradient of the reduced objective function was less than the tolerance 10^{-6}.
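A minimal sketch of this optimizer, with BB step lengths [9] safeguarded by a nonmonotone Armijo backtracking in the spirit of [17], applied to a generic smooth reduced objective; all parameter values below are illustrative defaults, not those used in the paper:

```python
import numpy as np

def bb_nonmonotone(f, g, x0, tol=1e-6, max_iter=500, memory=10):
    """Minimize a smooth f with gradient g using Barzilai-Borwein steps
    and a nonmonotone (max-over-recent-values) Armijo line search."""
    x = x0.astype(float)
    grad = g(x)
    alpha = 1.0                                # initial trial step length
    f_hist = [f(x)]                            # recent values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(grad) < tol:         # gradient-norm stopping rule
            break
        d = -grad
        f_ref = max(f_hist[-memory:])          # nonmonotone reference value
        t = alpha
        # backtrack until the nonmonotone Armijo condition holds
        while f(x + t * d) > f_ref + 1e-4 * t * grad.dot(d):
            t *= 0.5
        x_new = x + t * d
        grad_new = g(x_new)
        s, ydiff = x_new - x, grad_new - grad
        sy = s.dot(ydiff)
        # BB1 step length, with a safeguard against tiny curvature
        alpha = s.dot(s) / sy if sy > 1e-12 else 1.0
        x, grad = x_new, grad_new
        f_hist.append(f(x))
    return x
```

On a convex quadratic test problem the iteration reaches the gradient tolerance quickly; in the paper's setting f is the reduced objective and g its adjoint-based gradient.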
For a constant T_∞ defined as the final computation time, we ran Algorithm 2 for all the above-mentioned cases. For every example, the receding horizon control u_rh was computed for the fixed sampling time δ = 0.25 and different values of the prediction horizon T. In each example, the performance of the computed receding horizon controls for different prediction horizons is compared. Moreover, in order to gain more intuition about the stabilization problem, the results for the uncontrolled problem are also reported. As performance criteria for our comparison, we considered quantities measured in the norm of H_1 = H_0^1(Ω) × L^2(Ω). As depicted in Figure 1, a single wave propagates from the center of the domain towards the boundaries. While moving to the boundaries, it decomposes into several small waves. After hitting the boundaries, the resulting small waves propagate and rejoin to form a single wave at the center of the domain. This process repeats as time progresses. We employed the RHC computed by Algorithm 2 for different choices of the prediction horizon T and the fixed sampling time δ = 0.25. The corresponding results are gathered in Table 1. Figure 4(a) shows the evolution of the H_1-energy of the receding horizon states for the different choices of T and fixed δ = 0.25. The evolution of the L^2(ω)-norm of the corresponding RHCs is plotted in Figure 3. Figure 5 shows the receding horizon state at different time points for the choice T = 1.5. As expected, a longer T provides better stabilization performance but requires more iterations. The results for the Dirichlet boundary control case are reported in Table 2 and Figure 4(b); Figure 6 shows the receding horizon state at different time points for T = 1.5 and δ = 0.25. The results for the Neumann boundary control case are reported in Table 3 and Figure 4(c); Figures 7(b) and 7(c) show, respectively, the receding horizon state and control for T = 1.5.
From Tables 1-3 and Figures 4(a), 4(b), and 4(c), we can assert that the results corresponding to the performance criteria are reasonable.

6. Conclusion. Receding horizon control for the stabilization of the linear wave equation with different boundary conditions was analysed and its numerical efficiency was investigated. The results encourage further investigations, which may include the convergence analysis of the controls obtained by the receding horizon framework as T → ∞, as well as nonlinear problems and cost functionals different from quadratic ones, for instance sparsity-promoting functionals.

A. Appendix.
A.1. Proof of Proposition 2.6. Before establishing the first-order optimality conditions, we prove the following useful lemma.
A.2. Proof of Proposition 3.5. In order to show the optimality conditions, we need first to prove the following useful lemma.