OPTIMAL CONTROL OF SYSTEM GOVERNED BY THE GAO BEAM EQUATION

Abstract. In this contribution several optimal control problems are mathematically formulated and analyzed for a nonlinear beam introduced in 1996 by David Y. Gao. The beam model is given by a static nonlinear fourth-order differential equation with boundary conditions. The beam is subjected to a vertical load and possibly to an axial tension load as well. A cost functional is constructed in such a way that the lower its value, the better the beam behaves with respect to the prescribed goal. Both existence and uniqueness are studied for the solutions to the proposed control problems, along with optimality conditions. Since no analytical solution is available for the nonlinear Gao beam, a finite element approximation is provided for the proposed problems. Numerical results are compared with those for the Euler–Bernoulli beam as well as with the authors' previous considerations.

1. Introduction. In this paper we discuss an optimal control problem with a nonlinear elliptic state equation in 1D. The equation is not an abstract one: it represents the nonlinear beam model developed by David Y. Gao in [3], [4]. Our purpose is to give more details of that problem, to derive necessary optimality conditions, and to demonstrate our results on several examples by means of the finite element method. We start with the classical linear Euler–Bernoulli beam model with an axial force, because in the end we want to compare the results for both models.
2. Optimal control problem for elliptic equations.
Let V, U be suitable function spaces defined on a domain Ω ⊂ R^n, and let U_ad be a bounded, closed, convex subset of U called the admissible set. An element u ∈ U_ad is called an admissible control. Finally, let A denote an elliptic operator defined on Ω.
Definition 2.1. The problem

    For given u ∈ U_ad find a function y = y(u) ∈ V such that
        A y(u) = f + u  in Ω                                        (1)

is called an elliptic equation with control by the right-hand side (RHS). The elliptic equation from (1) is usually called the state equation and its solution y is called the state.
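As a concrete illustration (our own, not part of the original formulation), take the simplest elliptic operator A = -d²/dx² on Ω = (0, 1) with homogeneous Dirichlet conditions; the state equation A y = f + u can then be solved by central finite differences. All names and parameters below are choices made only for this sketch:

```python
import numpy as np

def solve_state(u, f, n=100):
    """Solve the 1D model state equation -y'' = f + u on (0, 1)
    with y(0) = y(1) = 0 by central finite differences.
    u, f are callables; n is the number of interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior nodes
    # Tridiagonal matrix discretizing -y'' with Dirichlet BCs
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x) + u(x)
    return x, np.linalg.solve(A, rhs)

# Check: with u = 0 and f = pi^2 sin(pi x) the exact state is sin(pi x)
x, y = solve_state(lambda x: np.zeros_like(x),
                  lambda x: np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(y - np.sin(np.pi * x)))
```

The second-order scheme reproduces the smooth exact state up to O(h²), which is enough to illustrate how a control u entering the right-hand side changes the state y.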

JITKA MACHALOVÁ AND HORYMÍR NETUKA
Besides the classical formulation (1) we will also use its weak form:

    For given u ∈ U_ad find a function y = y(u) ∈ V such that
        a(y, v) = L(v)  ∀ v ∈ V,                                    (2)

where the bilinear form a and the linear form L are defined correspondingly (see (3)) and (·,·)_V denotes the scalar product in V.
Remark 1. Let us note that there are two possibilities for a control:
1. Distributed control: u is defined and acts on Ω (or on its part); here we can further distinguish between control by the RHS and control in the coefficients.
2. Boundary control: u acts only on the boundary of Ω.
Now we proceed to the definition of the optimal control problem. Let J : V × U → R be a given functional, called the objective, criterion, or cost functional.
Definition 2.2. The problem

    Find u* ∈ U_ad such that
        J(y(u*), u*) = min_{u ∈ U_ad} J(y(u), u),                   (4)

where y(u) solves the given state problem (1) with control value u ∈ U_ad, is called the optimal control problem (OCP) with elliptic state equation (1).
There are two statements about the existence of a solution to this problem. The first one is related to the state equation and is the well-known Lax–Milgram theorem (see e.g. [2], [5]): for a Hilbert space V, a continuous and V-elliptic bilinear form a, and a continuous linear form L, the state equation admits a unique solution y ∈ V.
Regarding (OCP), the existence of a solution requires a special property of the functional J:

Definition 2.3. The functional J : V × U → R is said to be weakly lower semicontinuous if
    y_n ⇀ y,  u_n ⇀ u  =⇒  lim inf_{n→+∞} J(y_n, u_n) ≥ J(y, u).
Now we can prove the existence theorem:

Theorem 2.4. Let J be a weakly lower semicontinuous functional and let U_ad be a bounded, closed, convex subset of the space U. Then (OCP) has at least one solution.

2.2. Optimality conditions. For our considerations it is very important to know the so-called optimality conditions for (OCP). Here we give a brief overview. First we need a simple but useful concept:

Definition 2.6. Assume that for each control u there is a unique state y = S(u); the map S is called the control-to-state map. Then we can define the reduced functional
    I(u) = J(S(u), u)  ∀ u ∈ U.
As U_ad is a convex set, optimization theory (see e.g. [2]) gives us the following:

Theorem 2.7. The variational inequality
    ⟨∇_u I(u*), u − u*⟩ ≥ 0  ∀ u ∈ U_ad
is a necessary criterion for u* ∈ U_ad to be an optimal solution of (OCP).
Here U* denotes the dual space to the space U and ⟨·,·⟩_{U*×U} means the duality pairing between U* and U.
Unfortunately, a direct use of Theorem 2.7 is not possible, because the derivative of the state with respect to the control, y′(u), is not known explicitly. This can be overcome by using the adjoint state technique. Following [5] or [8] we obtain the necessary conditions (10)–(12) for the optimal solution {y, u}, where A* is the adjoint operator to A, (11) is the adjoint equation and p is the adjoint state.
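For orientation, and anticipating the quadratic cost functional used later in Section 3, the optimality system obtained by the adjoint technique can be sketched as follows (a sketch of the typical structure, not necessarily the authors' exact formulas (10)–(12)):

```latex
\begin{aligned}
  A\,\bar y &= f + \bar u
    && \text{(state equation)},\\
  A^{*}\,\bar p &= \bar y - y_d
    && \text{(adjoint equation)},\\
  \langle \beta\,\bar u + \bar p,\; u - \bar u \rangle_{U^{*}\times U} &\ge 0
    \quad \forall\, u \in U_{ad}
    && \text{(variational inequality)}.
\end{aligned}
```

The adjoint state p replaces the unknown derivative y′(u): the gradient of the reduced functional can be evaluated from u and p alone, at the cost of one extra linear solve per iteration.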
3. Optimal control of the Euler–Bernoulli beam. We start our study with the linear state problem given by the classical Euler–Bernoulli beam equation.
3.1. Linear-quadratic elliptic control problem. The standard Euler–Bernoulli beam model reads as follows (see e.g. [7]):
    EI y^{IV} = q  in (0, L),                                       (13)
where E is Young's elastic modulus, I is the area moment of inertia, y is the deflection of the beam, L is the length of the beam and q is the applied vertical load.
For the state problem we choose (without loss of generality) the boundary conditions corresponding to a cantilever beam with an axial force P < 0 and a support added at the right end (see Figure 1). The problem now reads:
    EI y^{IV} + P y″ = q + u  in (0, L),                            (14)
    y(0) = y′(0) = 0,                                                (15)
    y(L) = y″(L) = 0.                                                (16)

Remark 2. We will consider only the case P < 0, where P causes tension; P > 0 causes compression and, in general, a non-unique solution (so-called buckling).
Next we want to set up a variational formulation according to the scheme described in the previous section.
For the beam with a supported right end we set
    V = { v ∈ H²((0, L)) : v(0) = v′(0) = v(L) = 0 },
where H²((0, L)) is the Sobolev space. The space of controls is chosen as the Lebesgue space
    U = L²((0, L)),
and the set of admissible controls is (for given bounds u_min < u_max)
    U_ad = { u ∈ U : u_min ≤ u ≤ u_max  a.e. in (0, L) }.
Considering (3), we introduce the bilinear form and, for u ∈ U given, the linear form
    a(y, v) = ∫₀ᴸ EI y″ v″ dx − P ∫₀ᴸ y′ v′ dx,
    L(v) = ∫₀ᴸ (q + u) v dx,                                        (22)
and according to (2), for a given u ∈ U_ad, we get the variational state equation
    a(y, v) = L(v)  ∀ v ∈ V.                                        (23)
As the assumptions of the Lax–Milgram theorem are fulfilled, we can claim the existence of exactly one solution y(u) ∈ V for any u ∈ U_ad.
It remains only to specify the cost functional. Let ‖·‖ be the L²-norm and let y_d ∈ L²((0, L)) be a given (desired) function. Then we set
    J(y, u) = 1/2 ‖y − y_d‖² + β/2 ‖u‖²                             (24)
for some parameter β > 0.
Remark 3. The term with β is a "convexification term". Note that, in general, the term 1/2 ‖y − y_d‖² alone is not necessarily convex in u, since y depends on u.
The resulting problem of the form (OCP) includes the linear state equation (23) and the quadratic objective functional (24). Such problems are called linear-quadratic elliptic control problems, see [1], [8]. The existence of a solution to this (OCP) follows from the previous section.
The functional (24) has some advantageous properties. First we transform it to its reduced form
    I(u) = 1/2 ‖S u − y_d‖² + β/2 ‖u‖²,                             (25)
where (·,·) denotes the scalar product in L²((0, L)).
We are able to prove the convexity of I(u) on U. This fact and Theorem 2.7 then immediately imply the next result: the variational inequality
    ⟨∇_u I(u*), u − u*⟩ ≥ 0  ∀ u ∈ U_ad                             (26)
is a necessary and sufficient criterion for u* ∈ U_ad to be an optimal solution of (OCP).
For a numerical solution it will be important to know the gradient ∇_u I(u). We can get it from the general formulas (10)–(12) by substituting into them; for the given problem we only have to consider the third optimality condition, which is obtained in the form
    (β u + p, v − u) ≥ 0  ∀ v ∈ U_ad.                               (28)
Comparing (28) and (26) we immediately get
    ∇_u I(u) = β u + p.

3.2. Numerical solution. As an analytical solution is not available, we need to use numerical methods to solve our optimal control problem (OCP). Following the books [1], [8], there are several good possibilities: the conditional gradient method, the projected gradient method, the active set method, or an interior-point method.
The considered problem is not too complex, therefore our choice will be gradient methods. Briefly, their scheme is:
1. Define the reduced functional I(u) by (25).
2. Solve the state equation for y = Su and then determine p = S*(y − y_d).
3. Determine −∇_u I(u) = −(βu + p), the direction of steepest descent at u.

A comparison of the two gradient methods somewhat favours the projected gradient method. The n-th iteration of its algorithm reads as follows:
Step 1: For given u_n determine y_n = S u_n (from the state equation).
Step 2: For given y_n determine p_n = S*(y_n − y_d) (from the adjoint equation).
Step 3: Set the new direction v_n = −(β u_n + p_n).
Step 4: Determine the new step size α_n ∈ [0, 1] as a solution of
    min_{α ∈ [0,1]} I(P(u_n + α v_n)).                              (30)
Step 5: Set u_{n+1} = P(u_n + α_n v_n).
In Step 4 we have denoted by P the projection onto the interval [u_min, u_max]. The solution of (30) can use the Armijo backtracking strategy for step-length evaluation.
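The five steps above can be sketched in a few lines of code. The following toy implementation is our own illustration: it uses a fixed step size instead of the line search (30), takes the control matrix as the identity, and works directly with the discretized quantities:

```python
import numpy as np

def projected_gradient(A, y_d, beta, u_min, u_max, alpha=0.5, iters=500):
    """Projected gradient method for the discretized toy problem
        min 1/2 |y - y_d|^2 + beta/2 |u|^2   s.t.  A y = u,
        u_min <= u <= u_max  (componentwise).
    A plays the role of the stiffness matrix; alpha is a fixed step size
    (small enough for the given A) replacing the line search."""
    u = np.zeros(len(y_d))
    for _ in range(iters):
        y = np.linalg.solve(A, u)               # Step 1: state equation
        p = np.linalg.solve(A.T, y - y_d)       # Step 2: adjoint equation
        v = -(beta * u + p)                     # Step 3: steepest descent
        u = np.clip(u + alpha * v, u_min, u_max)  # Steps 4-5: projected step
    return u, y

# Tiny check with A = I: the unconstrained optimum is u = y_d / (1 + beta)
n = 5
u, y = projected_gradient(np.eye(n), np.ones(n), beta=1.0,
                          u_min=-10.0, u_max=10.0)
```

With A = I and β = 1 the reduced gradient is 2u − y_d, so the iteration converges to u = 0.5 in each component; np.clip realizes the projection P onto [u_min, u_max].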
The next step is to perform a discretization. To this end we have to determine finite-dimensional approximations of our function spaces. Formally they will be as follows (h denotes a discretization parameter):
    V_h ⊂ V,   U_h ⊂ U,   U_ad^h ⊂ U_h.                             (31)
Let us note that U_ad^h is a non-empty, closed, convex and bounded subset of U_h, but we do not assume that U_ad^h ⊂ U_ad; this cannot be guaranteed in general. In our case, however, we will arrange it this way.
Using (31), the state problem is replaced by its approximation:
    For given u_h ∈ U_ad^h find y_h = y_h(u_h) ∈ V_h such that
        a(y_h, v_h) = L(v_h)  ∀ v_h ∈ V_h.                          (32)
The functional J is replaced by an approximation J_h : V_h × U_h → R of the expression (24) such that (cf. (5) and Theorem 2.4) J_h is weakly lower semicontinuous on V_h × U_h, i.e.
    J_h(y_h, u_h) = 1/2 ‖y_h − y_d‖² + β/2 ‖u_h‖².                  (33)
We are ready to formulate an approximation of the optimization problem (OCP):

Definition 3.1. By the approximation of (OCP) we will understand the problem
    Find u_h ∈ U_ad^h such that
        J_h(y_h(u_h), u_h) ≤ J_h(y_h(v_h), v_h)  ∀ v_h ∈ U_ad^h,
where y_h(u_h) solves (32).
After these formal formulations we proceed to the discretization by the FEM, i.e. the Finite Element Method.
Let us define a division {x_i} of the interval [0, L] onto subintervals (elements) K_i. The space V_h consists of functions that are cubic on each element; here P_3(K_i) denotes the set of cubic polynomials defined on an element K_i generated by the division. The space U_h is approximated by means of piecewise linear polynomials. Let {φ_i} be basis functions of V_h and let {ψ_j} be basis functions of U_h such that ψ_j(x_k) = δ_jk. Then any functions y_h ∈ V_h, u_h ∈ U_h can be written in the form
    y_h = Σ_{i=1}^{N} y_i φ_i,    u_h = Σ_{j=1}^{M} u_j ψ_j.        (36)
The approximation of the cost functional J can be done either by evaluating the integrals with the trapezoidal rule, or by a transformation to a quadratic function using the approximations (36); details will be given later.

Now we are ready to apply the Galerkin method, which
1. substitutes y_h and u_h into the state equation,
2. chooses successively φ_1, ..., φ_N as the test functions v_h.
This process results in the algebraic formulation of our problem. The state equation (32) is transformed into a system of linear algebraic equations
    K y = f + B u,                                                   (41)
with u = (u_j)_{j=1}^{M} ∈ R^M as the given vector and y = (y_i)_{i=1}^{N} ∈ R^N as the unknown vector. In (41) we have denoted by K the stiffness matrix, by f the load vector and by B the matrix coupling the control basis to the test functions.

A general algebraic representation of J_h can be obtained as follows. Define two isomorphisms I_1 : V_h → R^N, I_1 y_h = y, and I_2 : U_h → R^M, I_2 u_h = u. Then the algebraic representation of J_h reads
    Ĵ_h(y, u) = J_h(I_1^{-1} y, I_2^{-1} u),
and this is a function of N + M variables. Similarly we can also proceed with an algebraic representation of U_ad^h. Using the FEM we are able to identify the algebraic form of J_h more specifically: substituting the expressions (36) into (33), we obtain a quadratic function of (y, u). Therefore the algebraic representation of (OCP)_h reads as follows:
    minimize Ĵ_h(y, u) over the algebraic representation of U_ad^h,
where y = y(u) ∈ R^N solves the linear system
    K y = f + B u.                                                   (52)
This problem can easily be identified as a quadratic programming problem.
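To make the FEM part concrete, the following self-contained sketch (our own illustration, not the authors' code) assembles the global stiffness matrix for the term EI y^{IV} using standard Hermite cubic beam elements (two degrees of freedom per node: deflection and slope) and checks it on a clamped cantilever with a uniform load, whose exact tip deflection is qL⁴/(8EI). The axial term with P and the control matrix B are omitted for brevity; all parameter values are arbitrary:

```python
import numpy as np

def beam_stiffness(EI, L, n_el):
    """Assemble the global stiffness matrix for EI y'''' using
    Hermite cubic elements (DOFs per node: deflection, slope)."""
    h = L / n_el
    # Standard Euler-Bernoulli beam element stiffness matrix
    Ke = (EI / h**3) * np.array([
        [ 12.0,   6*h,   -12.0,   6*h  ],
        [ 6*h,  4*h**2,  -6*h,  2*h**2],
        [-12.0,  -6*h,    12.0,  -6*h  ],
        [ 6*h,  2*h**2,  -6*h,  4*h**2]])
    ndof = 2 * (n_el + 1)
    K = np.zeros((ndof, ndof))
    for e in range(n_el):
        idx = slice(2 * e, 2 * e + 4)
        K[idx, idx] += Ke
    return K

def uniform_load(q, L, n_el):
    """Consistent nodal load vector for a uniform vertical load q."""
    h = L / n_el
    fe = q * h / 12.0 * np.array([6.0, h, 6.0, -h])
    f = np.zeros(2 * (n_el + 1))
    for e in range(n_el):
        f[2 * e:2 * e + 4] += fe
    return f

# Cantilever: clamp y(0) = y'(0) = 0, free right end
EI, L, q, n_el = 1.0, 1.0, 1.0, 4
K = beam_stiffness(EI, L, n_el)
f = uniform_load(q, L, n_el)
free = slice(2, None)                      # drop the two clamped DOFs
y = np.zeros(2 * (n_el + 1))
y[free] = np.linalg.solve(K[free, free], f[free])
tip = y[-2]                                # deflection at x = L
```

Because the Green's function of the beam operator is piecewise cubic, the Hermite cubic FEM with consistent nodal loads reproduces the nodal deflections exactly, so `tip` matches qL⁴/(8EI) = 0.125 up to round-off.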
4. Optimal control of the nonlinear Gao beam. Next we will consider the nonlinear beam model which was developed by D.Y. Gao in [3], [4].

4.1. Semilinear-quadratic elliptic control problem. We describe this model here only briefly. It is represented by the fourth-order nonlinear differential equation
    EI y^{IV} − E α (y′)² y″ + P µ y″ = f  in (0, L),               (53)
whereas the boundary conditions will be the same as before, i.e. (15)–(16). The symbols used in (53) have the following meaning. Young's modulus E is considered constant; the area moment of inertia I = (2/3) h³ is also constant, with h the thickness measured from the x-axis and the width of the beam taken as a unit. The deflection of the beam is denoted by y and, as before, P denotes a negative axial force causing tension. The remaining symbols α, µ and f are constants and data depending on the Poisson ratio ν and on the given vertical load q. From (53) we can see that we are in fact dealing with a semilinear equation, i.e. an equation linear in the highest-order derivative. Similar problems are considered in the monograph [8], where they are called semilinear-quadratic elliptic control problems.
First we want to formulate the state problem. The classical formulation consists of the controlled equation (53) with f + u on the right-hand side and the boundary conditions (15)–(16), but for the planned numerical solution the weak formulation will be more useful. Using the already established notation (22) and a symbol π(·; ·,·) for the nonlinear form, we obtain the following formulation:

    For given u ∈ U_ad find a function y ∈ V such that
        a(y, v) + π(y; y, v) = L(v)  ∀ v ∈ V,                       (59)

with V and L the same as above. In comparison with the Euler–Bernoulli beam, this problem contains a nonlinear variational equation.
As we want to use essentially the same procedure as before, we have to verify the solvability of the state equation, since it is no longer a linear problem. For this purpose we turn to the variational formulation of this problem. The functional of total potential energy for the Gao beam reads (see e.g. [3])
    Π(v) = ∫₀ᴸ [ EI/2 (v″)² + Eα/12 (v′)⁴ − Pµ/2 (v′)² ] dx − L(v),
and the variational formulation has the form
    Find y ∈ V such that Π(y) ≤ Π(v)  ∀ v ∈ V.                      (61)
It can easily be shown that for P < 0 the functional Π is strictly convex on V, and hence the minimization problem (61) has a unique solution. After some rearrangements it can be proved that the equation in (59) characterizes this solution.
The final optimization problem (OCP) is formally the same as (4), where now y(u) solves the state problem (59).
The existence of a solution follows from the results given in Chapter 4 of the book [8]. Finally, let us write down the optimality conditions for this problem, the first of which is the state equation
    EI y^{IV} − E α (y′)² y″ + P µ y″ = f + u.                      (63)
We followed the ideas from [1] or [8], but unfortunately there is no space here for more details concerning the derivation.

4.2. Numerical solution. Using the same FEM discretization as above, we can establish the algebraic formulation of our current problem:
    minimize Ĵ_h(y, u) over the algebraic representation of U_ad^h,
where y = y(u) ∈ R^N solves the nonlinear system
    K(y) y = f + B u.                                               (66)
It is evident that we have obtained a nonlinear programming problem. The main and only difference between this and the previous case with the Euler–Bernoulli beam consists in the state equations. Formerly we had a linear system, but now we must work with the system of nonlinear equations
    K(y) y = f + B u.                                               (67)
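A common way to handle such a system is successive substitution: freeze y in K, solve the resulting linear system, and repeat. The following toy sketch (our own, with a hypothetical diagonal nonlinearity standing in for the actual Gao stiffness) illustrates the idea:

```python
import numpy as np

def solve_nonlinear(K0, b, c=1.0, tol=1e-12, max_iter=100):
    """Successive substitution for a toy nonlinear system
        K(y) y = b  with  K(y) = K0 + c * diag(y^2),
    mimicking the structure K(y) y = f + B u:
    freeze y in K, solve the linear system, repeat until convergence."""
    y = np.linalg.solve(K0, b)              # initial guess: linear problem
    for _ in range(max_iter):
        K_frozen = K0 + c * np.diag(y**2)   # "linearized" matrix at current y
        y_new = np.linalg.solve(K_frozen, b)
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    return y

# 1D check: (2 + y^2) y = 3 has the root y = 1
y = solve_nonlinear(np.array([[2.0]]), np.array([3.0]))
```

Each iteration costs one linear solve; convergence is local and depends on how strongly K varies with y, which is one motivation for the more careful linearization K_L discussed next.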
Let us denote by K_L a linearization of the matrix K(y). Knowing a suitable construction of K_L, we can easily solve a linear system instead of (67). In this paper we used Reddy's formula for this linearization.

5. Conclusions. The standard linear beam model and the nonlinear one were considered here in relationship with the optimal control problem. We have concentrated on the computational aspects, and our results were presented in two examples. It can be seen that the nonlinear beam is tougher than the classical linear beam.
We omitted here the case with the axial force P > 0, because it is very complicated and we are still working on it.