Optimal control problem for a viscoelastic beam and its Galerkin approximation

This paper is concerned with the optimal control of the vibrations of a viscoelastic beam governed by a nonlinear partial differential equation. We discuss the initial-boundary value problem for the cases when the ends of the beam are clamped or hinged, and we define the weak solution of this problem. Our control problem is formulated as the minimization of a functional in which the state of the system is the solution of the viscoelastic beam equation. We use the Galerkin method to approximate the solution of the control problem with respect to the spatial variable. Based on this finite-dimensional approximation we prove that, as the discretization parameters tend to zero, weak accumulation points of the optimal solutions of the family of discrete control problems exist and that each such point is a solution of the original optimal control problem.

1. Introduction. In this paper we discuss an optimal control problem with a nonlinear state equation for a viscoelastic beam in one space dimension. The equation represents the nonlinear beam model developed by J. Ball in [5]. The equation of motion for the vertical displacement y of the points of the beam has the form

\frac{\partial^2 y}{\partial t^2} + \alpha \frac{\partial^4 y}{\partial x^4} + \sigma \frac{\partial^5 y}{\partial x^4 \partial t} - \Bigl( \beta + \gamma \int_0^l \Bigl( \frac{\partial y(\xi,t)}{\partial \xi} \Bigr)^{\!2} d\xi + \eta \int_0^l \frac{\partial y(\xi,t)}{\partial \xi}\, \frac{\partial^2 y(\xi,t)}{\partial \xi\, \partial t}\, d\xi \Bigr) \frac{\partial^2 y}{\partial x^2} + \delta \frac{\partial y}{\partial t} = f. \qquad (1)

The parameters α, γ, δ, σ are positive and β, η ∈ R. These physical constants depend on the Young modulus, the cross-sectional area, the second moment of area of the cross-section, and the density of the material. The position x ∈ (0, l) and the time t ∈ (0, T), where l, T < ∞. The ends of the beam are either clamped,

y(0,t) = y(l,t) = \frac{\partial y}{\partial x}(0,t) = \frac{\partial y}{\partial x}(l,t) = 0, \qquad (2)

or hinged,

y(0,t) = y(l,t) = \frac{\partial^2 y}{\partial x^2}(0,t) = \frac{\partial^2 y}{\partial x^2}(l,t) = 0. \qquad (3)

Later on we put assumptions on f. We shall further make precise in which sense equation (1) is understood.
We consider the initial-boundary value problem consisting of (1), the initial conditions

y(x, 0) = y_0(x) \quad \text{and} \quad \frac{\partial y}{\partial t}(x, 0) = y_1(x), \qquad (4)

and the boundary conditions (2) or (3). Mathematical models of nonlinear beams have a long history, and a number of papers have discussed the existence and uniqueness of solutions to nonlinear beam equations, for example J. Ball's papers [5, 6] and the papers [1, 3, 11, 18, 19]. In [2] the authors present existence results for a one-dimensional viscoelastic beam with the Signorini contact condition.
Concerning control problems governed by PDEs, we refer to Lions [17] for the basic theory. Some questions of optimal control for beam equations were studied by M. Barboteu et al. [7], M. Galewski [12], I. Hlavácek and J. Lovisek [13], J. Hwang [14], I. Sadek et al. [20], and many others. The Galerkin method can be applied to boundary value problems as well as to control systems. The Galerkin method for boundary value problems for beams was investigated in [1, 5, 6, 18, 19]. Semidiscrete Galerkin approximations of nonlinear control problems were studied, for example, in [4, 16, 20, 21] and in our papers [8, 9, 15].
In the present paper we extend the discussion in [15] from an elastic beam to a viscoelastic beam. Section 2 establishes notation and studies the properties of an operator from the control space into the state space. The optimal control problem is studied in Section 3. In Section 4 we consider the Galerkin approximation of our optimal control problem, and in Section 5 we prove a convergence theorem for the semidiscretization.

2. Preliminaries.
Let Ω = (0, l), where l > 0 is the natural length of the beam, S = (0, T) and Q = Ω × S. We shall need the following spaces:
• the Lebesgue spaces L²(Ω), L²(Q),
• the Lebesgue-Bochner spaces L²(S; W), L∞(S; W) with the standard norms, where W is any Banach space (see [17] p. 108),
• the Sobolev spaces H²(Ω), H²₀(Ω), H¹₀(Ω) with the standard norms.
Let V = H²₀(Ω) for clamped ends or V = H²(Ω) ∩ H¹₀(Ω) (a closed subspace of H²(Ω)) for hinged ends, and let H = L²(Ω). These spaces are equipped with the standard norms. The embedding V ⊂ H is dense and compact. Identifying H with its dual, we obtain the evolution triple V ⊂ H ⊂ V* (see [10] p. 391). The duality pairing between V* and V is denoted by ⟨·, ·⟩ (see [22]). We define a weak solution of equation (1) with the initial conditions (4) and the boundary conditions (2) or (3) (see [6], [14]) as a solution of the variational equation (5), where (ϕ, ψ) = ∫₀ˡ ϕ(x)ψ(x) dx denotes the inner product on H. For simplicity we write ẏ = dy/dt and ÿ = d²y/dt², and the subscript x denotes the derivative with respect to x.

Remark 1. In equation (5) the derivatives of the weak solution of equation (1) are understood in the sense of distributions, and a solution y of (5) does not have to satisfy the boundary conditions (3) in any classical sense (see [6]).
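The variational equation (5) can be sketched as follows; formally it results from multiplying (1) by a test function ψ ∈ V and integrating by parts, assuming the viscous term of (1) is of Kelvin-Voigt type, σ ∂⁵y/∂x⁴∂t:

```latex
(\ddot y(t),\psi) + \alpha\,(y_{xx}(t),\psi_{xx}) + \sigma\,(\dot y_{xx}(t),\psi_{xx})
  + \delta\,(\dot y(t),\psi)
  + \bigl(\beta + \gamma\,\|y_x(t)\|_H^2 + \eta\,(y_x(t),\dot y_x(t))\bigr)\,(y_x(t),\psi_x)
  = (f(t),\psi)
\qquad \text{for all } \psi\in V \text{ and a.e. } t\in S .
```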

For a given control u ∈ U, the state equation of our control problem consists of the variational equation (5) with the right-hand side f = Bu + g, together with the initial conditions y(0) = y₀, ẏ(0) = y₁. (6)
Let us consider the space W = {ω ∈ L²(S; H) | ω_x, ω_xx, ω̇ ∈ L²(S; H)}. We equip W with the classical graph norm

‖ω‖_W = ( ‖ω‖²_{L²(S;H)} + ‖ω_x‖²_{L²(S;H)} + ‖ω_xx‖²_{L²(S;H)} + ‖ω̇‖²_{L²(S;H)} )^{1/2}.

We define the operator F : U → W by F(u) = y, where y = y(u) is the unique solution of (6) for u ∈ U. In order to prove that an optimal control exists, we first establish the continuous dependence of the state on the control parameter.

Lemma 2.2. Suppose g ∈ L²(Q), y₀ ∈ V, y₁ ∈ H and that the operator B : U → L²(Q) is linear and bounded. Then the operator F is locally Lipschitz continuous and weakly continuous.

ANDRZEJ JUST AND ZDZISLAW STEMPIEŃ
Proof. The proof is divided into three steps.
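The energy expression (7) used in the first step, and the identity obtained from it by choosing ψ = ẏ(t), can be sketched as follows; this is a formal reconstruction under the same assumption on the viscous term of (1):

```latex
E(t) = \frac{1}{2}\,\|\dot y(t)\|_H^2 + \frac{\alpha}{2}\,\|y_{xx}(t)\|_H^2
     + \frac{\beta}{2}\,\|y_x(t)\|_H^2 + \frac{\gamma}{4}\,\|y_x(t)\|_H^4 ,
\qquad
\frac{d}{dt}E(t) = -\,\delta\,\|\dot y(t)\|_H^2 - \sigma\,\|\dot y_{xx}(t)\|_H^2
     - \eta\,(y_x(t),\dot y_x(t))^2 + \bigl((Bu)(t)+g(t),\dot y(t)\bigr) .
```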
Firstly, we prove the energy estimates. We note that for equation (6) the system energy is given by (7). This energy expression can be obtained, formally, by choosing ψ = ẏ(t) in (6) and integrating by parts. Integrating (7) over [0, t] for t < T, we arrive at an identity which implies, by the Schwarz inequality (see [22] p. 8) and Young's inequality (see [22] p. 36), the estimate (8), with constants C₁, C₂ > 0 depending on ‖y₀‖_V, ‖y₁‖_H and ‖g‖_{L²(Q)}. From (8), by Gronwall's lemma (see [10] p. 127) and the Poincaré inequality (see [22] p. 59), we obtain a pointwise bound; integrating it over the interval [0, T], we obtain the desired estimate.

Secondly, we prove that the operator F is a locally Lipschitz map. Let u₁, u₂ ∈ U and let y₁ = y(u₁), y₂ = y(u₂) be the unique solutions of (6), that is, of (10). From Theorem 2.1 we know that equation (10) for i = 1, 2 has exactly one solution y_i ∈ L∞(S; V) with ẏ_i ∈ L∞(S; H) ∩ L²(S; V). Subtracting the two equations (10) we obtain (11). We take the inner product in (11) with ẏ₁(t) − ẏ₂(t) as the test function ψ (which is possible since ẏ_i ∈ L²(S; V)); this gives (12). Then, as in [5] and [6], the nonlinear parts of (12) may be estimated as in (13) and (14), where C₅, C₆ > 0 are constants depending only on the data (u₁, u₂, y₁, y₀). Finally, combining (13) and (14) with (12) we arrive at the inequality (15), which holds for a.e. t ∈ S with a constant C₇ > 0. Next, from (15), by integration over a subinterval [0, t] of [0, T] and by applying Gronwall's lemma (see [10] p. 127) and the Poincaré inequality (see [22] p. 59), we conclude that the operator F is a locally Lipschitz map, because the resulting constant C₈ > 0 depends only on the data and the operator B is linear and bounded.
Thirdly, we shall prove that the operator F is a weakly continuous mapping. Let (u n ) denote a sequence such that u n → u weakly in U.
Let y n = y(u n ) satisfy the equation (6) with u = u n , i.e.
From Theorem 2.1 we know that the problem (17) has exactly one weak solution y_n for every n ∈ N. From the assumptions of the lemma and from the first part of this proof we conclude that the sequences (y_n), (y_nx), (y_nxx), (ẏ_n) and (ẏ_nxx) are bounded in L²(S; H).
Using the diagonal procedure we may thus extract a subsequence (still denoted by (y_n)) such that

y_n −→ y weakly in L²(S; V),
ẏ_n −→ ẏ weakly in L²(S; H),
y_n −→ y strongly in L²(S; H).

As proved in [5] and [6], we also have the following convergences:

‖y_nx‖² y_nxx −→ ‖y_x‖² y_xx weakly in L²(S; H),
(y_nx, ẏ_nx) y_nxx −→ (y_x, ẏ_x) y_xx weakly in L²(S; H).
Starting from the first equation of (17) we deduce that for all test functions ϕ ∈ C∞₀(0, T) (see [17] p. 26) we have (18). For the first term in (18), by integration by parts, we obtain (19) for any function ϕ ∈ C∞₀(0, T), or (20) for any function ϕ ∈ C¹(0, T) such that ϕ(T) = 0. Using (19) in (18) we obtain (21). Passing to the limit in (21) we obtain (22) (as the operator B is linear). Therefore, interpreting the first term of (22) as a distributional derivative, we deduce that the function y satisfies the first equation of (6) with the limit control u, because the functions ψ ∈ V and ϕ ∈ C∞₀(0, T) are arbitrary. Finally, combining (20) and (18) and passing to the limit in (18), we find that y satisfies the initial conditions in (6) as well.
This completes the proof of this lemma.
Remark 2. Lemma 2.2 is one of the most important results of our paper. It allows us to prove the main result, namely the convergence of solutions of the approximated family of control problems to a solution of the original control problem.
3. The optimal control problem. The optimal control problem (P) can be formulated as follows: find an optimal pair (u⁰, y⁰) ∈ U × W which minimizes a functional J(u, y), where J : U × W → R, y = y(u) is the unique solution of (23) for u ∈ U, and the space U × W is equipped with the product norm. For any control u ∈ U, by Theorem 2.1 and (6), there is a unique state y = y(u) = F(u). Hence, for the functional J : U × W → R, we can define the reduced functional I : U → R by I(u) = J(u, y(u)) for any u ∈ U.

Theorem 3.2. Let the assumptions of Lemma 2.2 be satisfied, i.e. g ∈ L²(Q), y₀ ∈ V, y₁ ∈ H, and assume that the operator B is linear and bounded from the separable Hilbert space U into L²(Q). Assume that the functional J is continuous and convex on U × W and that the reduced functional I is coercive on U. Then there exists at least one solution of the problem (P).

4. Galerkin approximation. As an approximation of the state space V we take a family {V_h}_{h∈G} of finite-dimensional subspaces of V, where the set G ⊂ (0, 1] of parameters h has an accumulation point at 0. The semidiscrete state equation (25) is obtained from (6) by taking test functions in V_h and projecting the initial data, where y_0h and y_1h are the orthogonal projections of y₀ and y₁ onto V_h in the respective norms. From Theorem 2.1 we conclude that for each h ∈ G the equation (25) has a unique solution y_h ∈ L²(S; V_h) with ẏ_h ∈ L²(S; V_h).
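The semidiscrete state equation (25) follows the usual Galerkin pattern; the display below is a reconstruction under the same assumptions on the form of (1): find y_h with values in V_h such that

```latex
(\ddot y_h(t),\psi_h) + \alpha\,(y_{h,xx}(t),\psi_{h,xx}) + \sigma\,(\dot y_{h,xx}(t),\psi_{h,xx})
 + \delta\,(\dot y_h(t),\psi_h)
 + \bigl(\beta + \gamma\,\|y_{h,x}(t)\|_H^2 + \eta\,(y_{h,x}(t),\dot y_{h,x}(t))\bigr)\,(y_{h,x}(t),\psi_{h,x})
 = \bigl((Bu)(t)+g(t),\psi_h\bigr) \quad \text{for all } \psi_h\in V_h ,
\qquad y_h(0)=y_{0h},\quad \dot y_h(0)=y_{1h}.
```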
As an approximation of the control space U we take a family {U_k}_{k∈K} of finite-dimensional subspaces of U which satisfies the following condition: for every u ∈ U there exist u_k ∈ U_k such that u_k → u strongly in U, where the set K ⊂ (0, 1] of parameters k has an accumulation point at 0. Our approximated optimal control problem (P_hk) has the following form: find an optimal pair (u⁰_k, y⁰_hk) ∈ U_k × W_h which minimizes the cost functional J, i.e.
where y_hk = y_h(u_k) is the solution of the system (27). The control problem (P_hk) is a lumped-parameter system. Under the assumptions (24) and (26), the approximated control problem (P_hk) has at least one solution u⁰_hk ∈ U_k. The proof of this fact can be constructed in the same way as the proof of Theorem 3.2, because for every u_k ∈ U_k the equation (27) has a unique solution y_hk = y_h(u_k).
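For hinged ends the subspaces V_h may be spanned by the eigenfunctions sin(nπx/l), which makes the lumped-parameter character of the state system in (P_hk) explicit. The following sketch integrates the resulting system of ODEs for the Galerkin coefficients; the particular sine basis, the zero load, and all parameter values are our illustrative assumptions, not data from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Semidiscrete Galerkin system for the beam with hinged ends:
# the basis phi_n(x) = sin(n*pi*x/l) diagonalizes the linear terms, so the
# state equation reduces to ODEs for the coefficients c_n(t).
l, N = 1.0, 6                                    # beam length, number of modes
alpha, gamma, delta, sigma = 1.0, 1.0, 0.5, 0.1  # positive constants
beta, eta = -1.0, 0.2                            # beta, eta may be any reals
mu = (np.arange(1, N + 1) * np.pi / l) ** 2      # mu_n = (n*pi/l)^2

def rhs(t, z):
    c, cdot = z[:N], z[N:]
    # ||y_x||^2 = (l/2) sum mu_n c_n^2,  (y_x, ydot_x) = (l/2) sum mu_n c_n cdot_n
    tension = (beta + gamma * (l / 2) * np.sum(mu * c**2)
               + eta * (l / 2) * np.sum(mu * c * cdot))
    cddot = (-alpha * mu**2 * c - sigma * mu**2 * cdot
             - tension * mu * c - delta * cdot)  # zero load f = 0
    return np.concatenate([cdot, cddot])

z0 = np.zeros(2 * N)
z0[0] = 0.1                                      # excite the first mode only
sol = solve_ivp(rhs, (0.0, 5.0), z0, method="Radau", rtol=1e-8, atol=1e-10)
amp0 = np.abs(sol.y[:N, 0]).max()                # initial modal amplitude
ampT = np.abs(sol.y[:N, -1]).max()               # final modal amplitude
print(amp0, ampT)                                # damping shrinks the amplitude
```

The viscoelastic term σ ẏ_xxxx makes the system stiff (it scales like μ_n²), which is why an implicit integrator is used.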

5. Convergence of approximation. In this section we prove the main result of our paper: the convergence of solutions of the approximated optimal control problems (P_hk) to a solution of the original problem (P). Using the same techniques as in the proof of Lemma 2.2 and our assumptions on the Galerkin approximation, we may prove an analogous continuity result for the semidiscrete state operator. Let us now consider the convergence of the approximation for problem (P).

Proof. We want to prove that the sequence (u⁰_hk) is a minimizing sequence for the functional I(u) = J(u, y(u)). By the approximation property of the space of controls U, for u⁰ ∈ U (a solution of problem (P)) there exists a sequence (u⁰_k) with u⁰_k ∈ U_k for k ∈ K such that u⁰_k → u⁰ in U. Let u be a weak accumulation point of the sequence (u⁰_hk) and let y = y(u) be the unique solution of (6) for this control.
Since the functional J is convex and continuous, it is weakly lower semicontinuous on U × W; therefore inf_{u∈U} J(u, y(u)) = lim I(u⁰_hk). This implies that the pair (u, y) is one of the solutions of the optimal control problem (P).

Corollary 1. Theorem 5.2 also proves the existence of a solution of the control problem (P), because the sequence of solutions of the approximated problems (P_hk), obtained by the Galerkin method, is one of the minimizing sequences.