REGULARIZED SOLUTION FOR A BIHARMONIC EQUATION WITH DISCRETE DATA

Abstract. In this work, we focus on the Cauchy problem for the biharmonic equation associated with random data. In general, the problem is severely ill-posed in the sense of Hadamard, i.e., the solution does not depend continuously on the data. To regularize the unstable solution of the problem, we apply nonparametric regression combined with the Fourier truncation method. We also present a convergence result.

1. Introduction. The biharmonic equation arises in modelling bending in plate theory (Chapter 5, [9]), the motion of fluids, free boundary problems and non-linear elasticity [1,5], and there are many papers on the Cauchy problem for the biharmonic equation (see, for example, [4,7]). For a history of the biharmonic problem and its relation to elasticity from an engineering viewpoint, see the survey of Meleshko [8]. Since the biharmonic equation is a higher-order partial differential equation, prescribing only some Cauchy data on all of the boundary of the solution domain is not enough to give a well-posed problem. If a part of the boundary is over-specified and the remaining part is under-specified, then the problem is known as a Cauchy problem (see [12]). Such incomplete data make the Cauchy problem for the biharmonic equation ill-posed. Although the well-posed biharmonic problem has attracted great interest, with many papers on analysis and numerical methods for it, work on the ill-posed biharmonic equation is still limited. In [10], necessary and sufficient conditions are provided for the existence of a solution to the Cauchy problem for the biharmonic equation; a Cauchy problem for the biharmonic equation on Ω = (0, π) has also been treated by a modified nonlocal boundary value problem method, with convergence estimates for the regularized solution obtained under a priori bound assumptions on the exact solution. A similar model was considered recently by Khanh et al. [6]. To the best of the authors' knowledge, inhomogeneous biharmonic problems have not been studied yet. Motivated by the above, in this paper we consider the following inhomogeneous problem, where ϕ, ψ, F are given. Our main purpose is to recover u(y, x) for any 0 ≤ y ≤ L from the given data ϕ, ψ, F. In reality, the exact values of the functions F, ϕ, ψ are not available, and a small perturbation in the given Cauchy data F, ϕ, ψ may cause a very large error in the solution.
There are two kinds of errors when we observe the data. The first kind is the deterministic error, which comes from controllable sources. The second kind is the random error, which is generated by uncontrollable sources such as wind, rain, snow, humidity and radiation. Methods used for the deterministic case cannot be applied directly to the random case; note that because of the random noise, the calculations are often intractable. Unlike in [6], in this paper we first consider the data from a different point of view, in which the source F and the Cauchy data ϕ, ψ are measured at a discrete set of points and contain random errors. Our problem (2) can be described by the following random model. Let x_k = π(2k−1)/(2n), k = 1, 2, ..., n, be grid points (also called fixed design points) in Ω, and suppose the values ϕ(x_k), ψ(x_k), F(y, x_k) are contaminated in the observations, where A_k, B_k are mutually independent standard normal variables, ω_k(y), y ∈ [0, L], are Brownian motions, and 0 < ρ_k ≤ V_{1,max}, 0 < σ_k ≤ V_{2,max} for k = 1, 2, ..., n. To the best of our knowledge, there are no results in the literature on the Cauchy problem for the biharmonic equation with random noise. This paper is organized as follows. The discretization form of the Fourier coefficients is introduced in Section 2. Section 3 is devoted to the ill-posedness of the problem. In Section 4, we construct an estimator, called the Fourier regularized solution, and describe an upper bound for the estimation error.
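To make the observation model concrete, the following sketch (a minimal illustration, not part of the paper; the test function sin and the noise level are assumptions) generates noisy data at the fixed design points:

```python
import numpy as np

def design_points(n):
    """Fixed design points x_k = pi*(2k-1)/(2n), k = 1, ..., n."""
    k = np.arange(1, n + 1)
    return np.pi * (2 * k - 1) / (2 * n)

def noisy_observations(f, n, sigma=0.01, seed=None):
    """Observe f(x_k) + sigma * A_k with A_k i.i.d. standard normal.

    sigma plays the role of the (here constant) noise levels rho_k, sigma_k.
    """
    rng = np.random.default_rng(seed)
    x = design_points(n)
    return x, f(x) + sigma * rng.standard_normal(n)

# Illustrative use with f = sin and n = 100 observations.
x, d = noisy_observations(np.sin, 100, sigma=0.01, seed=0)
```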
Consider the Dirichlet Laplacian A on the open interval Ω. Since A is a linear, densely defined, self-adjoint and positive definite elliptic operator with Dirichlet boundary condition, the eigenvalues of A are λ_j = j², j ∈ Z⁺, with corresponding eigenfunctions e_j(x) = √(2/π) sin(jx), j ∈ Z⁺; thus the eigenpairs (λ_j, e_j) satisfy A e_j = λ_j e_j with e_j(0) = e_j(π) = 0. For any positive number τ, let H^τ(Ω) be the set of all functions f ∈ L²(Ω) with finite H^τ norm, defined through the weighted Fourier coefficients ⟨f, e_j⟩, where ⟨·, ·⟩ is the inner product in L²(Ω). Let X be a Banach space; we introduce L^∞(0, L; X), the Banach space of measurable functions g : (0, L) → X with the norm ‖g‖_{L^∞(0,L;X)} = ess sup_{y∈(0,L)} ‖g(y, ·)‖_X < ∞.
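As a quick sanity check, the orthonormality of these eigenfunctions in L²(0, π) can be verified numerically with a simple quadrature (the midpoint rule and resolution below are illustrative choices):

```python
import numpy as np

def e(j, x):
    """Eigenfunctions e_j(x) = sqrt(2/pi) * sin(j*x) of the Dirichlet Laplacian."""
    return np.sqrt(2.0 / np.pi) * np.sin(j * x)

# Midpoint-rule quadrature on (0, pi).
m = 20000
h = np.pi / m
x = (np.arange(m) + 0.5) * h

def inner(f, g):
    """Approximate the L^2(0, pi) inner product <f, g>."""
    return h * np.sum(f * g)

print(inner(e(3, x), e(3, x)))  # ~1.0 (normalized)
print(inner(e(3, x), e(5, x)))  # ~0.0 (orthogonal)
```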
Since (e_j) is an orthonormal basis of L²(Ω), the solution u has the expansion u(y, x) = ∑_{j=1}^∞ u_j(y) e_j(x), where u_j(y) = ⟨u(y, ·), e_j⟩. Multiplying both sides of the first equation in (2) by e_j(x) and integrating over [0, π], we obtain problem (6). By solving problem (6), we get an explicit formula for the Fourier coefficients u_j(y).

TRAN NGOC THACH, NGUYEN HUY TUAN AND DONAL O'REGAN
Thus we obtain the representation (8) of the mild solution.

2.2. Discretization form of Fourier coefficients. Our goal in this subsection is to recover the Fourier coefficients of the mild solution from the discrete data. To do this, we represent ϕ_j, ψ_j, F_j(y) through the measured values of ϕ, ψ and F(y, ·) at the discrete points x_k = π(2k−1)/(2n).
We can see that the errors R^I_{n;j}, R^II_{n;j} and R^III_{n;j}(y) are not easy to handle. In the next lemmas, we represent them through high-frequency Fourier coefficients of the functions ϕ, ψ, F(y).

Lemma 2.1 (see [3], page 145). Let s = 1, ..., n − 1, r = 1, 2, 3, ..., and x_k = π(2k−1)/(2n). Then

(1/n) ∑_{k=1}^n sin(s x_k) sin(r x_k) = (−1)^l/2 if r = 2ln + s for some integer l ≥ 0, = −(−1)^l/2 if r = 2ln − s for some integer l ≥ 1, and = 0 otherwise.
In particular, if r = 1, ..., n − 1, then the sum reduces to (1/n) ∑_{k=1}^n sin(s x_k) sin(r x_k) = (1/2) δ_{rs}. From this lemma, we have the next result (Lemma 2.2).

Remark 2.1. Theorem 4.1 will show that the errors R^I_{n;j}, R^II_{n;j} and R^III_{n;j}(y) tend to zero as n → ∞ (see (31), (39)).
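The kind of discrete orthogonality stated in Lemma 2.1 can be checked numerically; in the sketch below (n = 16 is an illustrative choice), the normalized sums equal 1/2 on the diagonal and vanish off it for frequencies below n, while a high frequency aliases back onto a low one:

```python
import numpy as np

n = 16
k = np.arange(1, n + 1)
x = np.pi * (2 * k - 1) / (2 * n)  # design points x_k = pi*(2k-1)/(2n)

def S(s, r):
    """Normalized discrete sum (1/n) * sum_k sin(s*x_k) * sin(r*x_k)."""
    return np.sum(np.sin(s * x) * np.sin(r * x)) / n

print(S(3, 3))           # 0.5  (diagonal, 1 <= s = r <= n-1)
print(S(3, 5))           # ~0   (off-diagonal, both frequencies below n)
print(S(3, 2 * n + 3))   # -0.5 (frequency 2n+3 aliases onto frequency 3)
```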
Proof. From (9), we have the stated identity. This implies the discretization form of R^I_{n;j}. Similarly, we can get discretization forms for the errors R^II_{n;j}, R^III_{n;j}, and this completes the proof of Lemma 2.2.
Combining (8)-(12), we can obtain a data-explicit form for the solution of the problem.

Theorem 2.1. Let N_n ∈ Z⁺ be such that 0 < N_n < n. Assume that ϕ, ψ, F are as in Lemma 2.2. Then the solution u admits a discrete representation in which R^I_{n;j}, R^II_{n;j}, R^III_{n;j} are defined as in Lemma 2.2.
3. The ill-posedness in the case of random noise data. In this section, we investigate a concrete model of data and prove the instability of the solution in the case of random noise data, i.e., that the solution does not depend continuously on the data. Let us consider a simple example in which ϕ(x) = ψ(x) = F(y, x) ≡ 0 and the random noise data are given by (14), with k = 1, 2, ..., n. It should be noted that the observations above are discrete data.
Using the idea of the nonparametric regression method (see [11]), we establish the functions (15), where ϕ^n_j, ψ^n_j, F^n_j(y) are constructed from the observed data. Now, we are ready to state and prove the ill-posedness of the problem.
Lemma 3.1. Let u be the solution of (2) with respect to the data ϕ = ψ = F ≡ 0, and let ũ be the solution of (2) with respect to the data ϕ^n, ψ^n, F^n defined as in (14), i.e., ũ satisfies problem (16). Then the estimates (17) and (18) hold for any y ∈ [0, L].
Remark 3.1. From Lemma 3.1, we can see that small noise in the input data ϕ, ψ, F can cause a large error in the corresponding solution. This shows that the solution u does not depend continuously on the data ϕ, ψ, F, and hence problem (2) is ill-posed in the sense of Hadamard.
Proof. We first prove (17). Definitions (14)-(15) give explicit expressions for ϕ^n, ψ^n, F^n. Since A_k ∼ N(0, 1), k = 1, ..., n, are independent normal random variables, we have E(A_k A_l) = δ_{kl} for all k, l, where δ_{kl} is the Kronecker delta. Hence the expected data error tends to zero as n → ∞. By using similar techniques and noting the Brownian motion property E[ω_k(y) ω_l(y)] = δ_{kl} y for any y ∈ [0, L], we get the corresponding estimate for F^n. Thus (17) is proved.

We now prove (18). Since ϕ = ψ = F ≡ 0, the unique solution of our problem is u ≡ 0. From the system (16), we remark that ũ(y, x) is a trigonometric polynomial of order less than n (with respect to the variable x). Applying Theorem 2.1 with N_n = n − 1, we get the representation (20) of ũ. Since cosh² r ≥ e^{2r}/4 for r ∈ R⁺, the factor cosh²(λ_{n−1}(L − y)) is bounded below by e^{2λ_{n−1}(L−y)}/4. On the other hand, since sinh² r ≤ e^{2r} for r ∈ R⁺, and using Hölder's inequality together with cosh² r ≤ e^{2r}, r ∈ R⁺, the remaining terms are bounded above. Similar estimates hold for the other components, and combining (20)-(24) we conclude (18).
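The mechanism behind this instability can be illustrated numerically: a perturbation in the j-th data mode is propagated with a factor of roughly cosh(λ_j(L − y)) = cosh(j²(L − y)), which explodes even for modest j. A minimal sketch (L = 1 and y = 0 are illustrative choices, not values fixed by the paper):

```python
import math

# Amplification factor cosh(j^2 * (L - y)) acting on the j-th noise mode.
# Its extremely rapid growth in j is the source of the Hadamard
# instability exhibited in Lemma 3.1.
L, y = 1.0, 0.0
for j in (1, 2, 3, 5):
    print(j, math.cosh(j ** 2 * (L - y)))
```

Already at j = 5 the factor exceeds 10^10, so noise of size 10^-10 in that mode produces an O(1) error in the recovered solution.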

4.1. Estimator of the solution. Since system (2) is ill-posed, regularization is required to stabilize the solution. In order to obtain a stable approximate solution for this problem, we first establish estimators for the functional data ϕ, ψ, F by using the nonparametric regression method (see [11]), where N_n ∈ Z⁺ is such that 0 < N_n < n. Let v_{N_n,n} be the solution of problem (25). It follows from formula (8) that v_{N_n,n} is given by (26), with the estimated data used instead of the functional data ϕ, ψ, F. Problem (2) with conditions (3)-(5) can have infinitely many solutions. Therefore, we discretize the problem and construct the finite system (25) to regularize the present problem. Here we call v_{N_n,n} a regularized solution and N_n a regularization parameter.
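A minimal numerical sketch of an estimator of this kind, in the spirit of the nonparametric regression method: the first N_n sine coefficients of ϕ are estimated from noisy observations at the design points, and all higher modes are discarded. The data ϕ, the noise level, and the values of n and N_n below are illustrative assumptions, and only the ϕ-part of the estimator is shown:

```python
import numpy as np

def truncated_estimate(obs, x, n, N):
    """Estimated coefficients hat_phi_j = (pi/n) * sum_k obs_k * e_j(x_k), j <= N."""
    j = np.arange(1, N + 1)[:, None]                     # frequencies 1..N
    ej = np.sqrt(2.0 / np.pi) * np.sin(j * x[None, :])   # e_j at design points
    return (np.pi / n) * (ej @ obs)

n, N = 200, 10
k = np.arange(1, n + 1)
x = np.pi * (2 * k - 1) / (2 * n)                        # design points
rng = np.random.default_rng(1)
phi = np.sin(2 * x)                                      # phi = sqrt(pi/2) * e_2
obs = phi + 0.01 * rng.standard_normal(n)                # noisy observations
coef = truncated_estimate(obs, x, n, N)
print(coef[1])  # close to the exact coefficient sqrt(pi/2) ≈ 1.2533
```

The truncation at N keeps only the modes whose estimates are stable; by the discrete orthogonality at these design points, the estimator reproduces low-frequency coefficients up to the noise contribution.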

4.2. Convergence result. In this subsection, we state and prove our main result, Theorem 4.1, which gives the convergence rate of the regularized solution.
Before proving Theorem 4.1, we give an auxiliary lemma. From the formulas of the mild solution u and the regularized solution v_{N_n,n}, we obtain the following result.

Lemma 4.1. Let N_n ∈ Z⁺ be such that 0 < N_n < n. Assume that problem (2) has a unique solution u ∈ C([0, L]; H^τ(Ω)). Then ‖v_{N_n,n}(y, ·) − u(y, ·)‖²_{L²(Ω)} satisfies the bound (28), where R^I_{n;j}, R^II_{n;j}, R^III_{n;j} are defined as in Lemma 2.2.
Proof. This result can be proved by using Theorem 2.1, formula (26) and the Parseval identity.
We now return to the proof of Theorem 4.1.
Proof of Theorem 4.1. Using an elementary inequality, we have from (28) that

‖v_{N_n,n}(y, ·) − u(y, ·)‖²_{L²(Ω)} ≤ 8 ∑_{j=1}^{N_n} (S^I_{n;j} + S^II_{n;j} + S^III_{n;j} + S^IV_{n;j}) + ∑_{j>N_n} u²_j(y),   (29)

where S^I_{n;j}, S^II_{n;j}, S^III_{n;j}, S^IV_{n;j} collect the contributions of the noise in ϕ, ψ, F and of the discretization errors. Next, we find upper bounds for E S^I_{n;j}, E S^II_{n;j}, E S^III_{n;j} and E S^IV_{n;j}. Since A_k, k = 1, ..., n, are i.i.d. N(0, 1) random variables, we have E(A_k A_l) = δ_{kl} for all k, l; this leads to the identity (30), which is needed to estimate E S^I_{n;j}. The assumption ϕ ∈ H^τ(Ω), τ > 1, implies the bound (31) on the high-frequency Fourier coefficients of ϕ. In addition, we have the inequality (32), obtained by using cosh² r ≤ e^{2r}, r ∈ R⁺. Combining (30)-(32) and putting µ(τ) := 2 ∑_{l=1}^∞ l^{−τ}, we obtain the bound (33) for E S^I_{n;j}. Similarly, we get the bound (34) for E S^II_{n;j}. Next, we estimate E S^III_{n;j}. We first apply Hölder's inequality to obtain (35), and then bound the first term on its right-hand side as in (36). On the other hand, since ω_k(ξ), k = 1, ..., n, are Brownian motions, we get (37). From (35)-(37), we obtain (38). We next find an upper bound for the remaining term S^{III,2}_{n;j}.

The assumption F ∈ L^∞(0, L; H^τ(Ω)) gives us (39), and it follows that (40) holds. We deduce from (38) and (40) a bound (41) for E S^III_{n;j}. Now, let us estimate E S^IV_{n;j} and the last term of (29). An upper bound (42) for E S^IV_{n;j} can be found in a manner similar to the above. Since u ∈ C([0, L]; H^τ(Ω)), the last term is bounded as in (43). Finally, we conclude the statement of Theorem 4.1 from (33)-(34) and (41)-(43). This completes the proof.
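The elementary hyperbolic bounds used repeatedly in the proofs above can be checked numerically:

```python
import math

# Check of the elementary inequalities used in the proofs:
# e^{2r}/4 <= cosh^2(r) <= e^{2r}  and  sinh^2(r) <= e^{2r}  for r >= 0.
for r in (0.0, 0.5, 1.0, 2.0, 3.0):
    c2 = math.cosh(r) ** 2
    s2 = math.sinh(r) ** 2
    e2 = math.exp(2.0 * r)
    assert e2 / 4.0 <= c2 <= e2
    assert s2 <= e2
print("all bounds hold")
```

All three inequalities follow from cosh r = (e^r + e^{-r})/2 and sinh r = (e^r − e^{-r})/2, since 0 ≤ e^{-r} ≤ e^r for r ≥ 0.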
From our main result, we give a remark showing the order of the mean square error E‖v_{N_n,n}(y, ·) − u(y, ·)‖²_{L²(Ω)}. For ς ∈ (0, 1/2), if we choose N_n = ⌊(1/(2L)) ln(n^ς)⌋, where ⌊r⌋ denotes the integer part of r ∈ R, then E‖v_{N_n,n}(y, ·) − u(y, ·)‖²_{L²(Ω)} is of order max{n^{2ς−1}, ln^{−2}(n^ς)}.
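For concreteness, this choice rule can be evaluated numerically (L = 0.5 and ς = 0.45 are illustrative values satisfying ς ∈ (0, 1/2)); note how slowly N_n grows with n:

```python
import math

def N_choice(n, L=0.5, varsigma=0.45):
    """Regularization parameter N_n = floor((1/(2L)) * ln(n^varsigma))."""
    return math.floor(math.log(n ** varsigma) / (2.0 * L))

for n in (10 ** 3, 10 ** 6, 10 ** 9):
    print(n, N_choice(n))  # N_n grows only logarithmically in n
```

The logarithmic growth of the truncation level is what balances the exponentially growing amplification factors against the decreasing random error.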
For the discretization, we set a uniform grid of mesh points (y_q, x_r) as a discretization of the space and time intervals, for q = 1, ..., N_y + 1 and r = 1, ..., N_x + 1, satisfying ∆x = π/N_x, ∆y = 1/N_y, x_r = (r − 1)∆x, y_q = (q − 1)∆y.
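This evaluation grid can be sketched as follows (the values of N_x and N_y are illustrative; ∆y = 1/N_y suggests L = 1 in the experiments):

```python
import numpy as np

# Uniform evaluation grid: x_r = (r-1)*dx, y_q = (q-1)*dy,
# with dx = pi/N_x and dy = 1/N_y.
N_x, N_y = 8, 4
dx, dy = np.pi / N_x, 1.0 / N_y
x = (np.arange(1, N_x + 2) - 1) * dx   # r = 1, ..., N_x + 1, spanning [0, pi]
y = (np.arange(1, N_y + 2) - 1) * dy   # q = 1, ..., N_y + 1, spanning [0, 1]
print(x[0], x[-1], y[-1])
```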
Choose N_n ∈ Z⁺ such that 0 < N_n < n; the solution of the stable approximate problem (25)-(26) is then evaluated on this grid, and the error is measured through the squared differences |v_{N_n,n}(y, x_r) − u(y, x_r)|².
We now illustrate the results of our regularized method. In Table 1, we show the estimated errors between the exact solution and the regularized solution for n ∈ {20, 50, 100} at y = 0.1, 0.2, 0.3, respectively. From Figures 2(a), 3(a) and 4(a), we see that the regularized solutions approximate the exact solution well. We also present the errors in Figures 2(b), 3(b) and 4(b) along the horizontal axis x ∈ [0, π]. Figure 5 shows a 3D graph of the exact and regularized solutions.