On the sensitivity of eigenelements of compact self-adjoint operators and applications

In this manuscript, we present optimal sensitivity results for the eigenvalues and eigenspaces of
self-adjoint compact operators. We show that while eigenvalues depend in a Lipschitz way on compact operators, the
eigenspaces are only locally Lipschitz. Our results generalize to eigenspaces of arbitrary dimension the results obtained in [19]
for one-dimensional eigenspace sensitivity, and thus simplify the celebrated results of Davis and Kahan [6] developed
for general Hermitian operator perturbations. Moreover, the sensitivity of Proper Orthogonal Decomposition bases
is analyzed in the case of time-interval perturbations, spatial perturbations (Gappy POD), and parameter perturbations.


1. Introduction. The main objective of proper orthogonal decomposition (POD) is to provide an optimal low-dimensional basis, in a sense which will be described below, to represent high-dimensional experimental or numerical data. This low-dimensional basis allows the formulation of reduced-order models of complex flows. POD decomposes a given space-time dependent flow field u(x, t) into an orthonormal system of spatial modes (Φ_n(x))_{n∈N} and their corresponding temporal amplitudes (a_n(t))_{n∈N} as follows: u(x, t) = Σ_{n≥0} a_n(t) Φ_n(x).
The classical mathematical framework for POD is the space L²([0, T]; H), where [0, T] and H denote respectively a time interval and a Hilbert space of spatial functions. The optimality of the basis (Φ_n(x))_{n∈N} means that any truncated series expansion Σ_{n=0}^{N} a_n(t) Φ_n(x) of u(x, t) in this basis has the smallest mean-square truncation error, in the sense of the L²([0, T]; H) norm, among all other orthonormal bases.
Using the spectral theory of compact self-adjoint operators, the POD provides a natural ordering of the spatial modes with respect to the mean-square of their amplitude.
In various problems in fluid mechanics or fluid-structure interaction, the POD basis is computed for a given Reynolds number or for other flow parameters. This basis is then used to build a reduced model to predict the flow for other values of these parameters. The question concerning the domain of validity of this reduced model then arises naturally. In Akkari et al. [3], for quasilinear parabolic partial differential equations, this domain of validity was specified according to the solution regularity and the number of modes considered in the reduced model. Similar results were obtained for the Navier-Stokes equations [1] and for the Burgers equations [2]. Numerical studies on parametric sensitivity for fluid-structure interaction problems were performed in [17]. These methods are also used in control theory [14,20,12] and in medical imaging [7,18], where one often seeks to use the POD basis to represent data even though time-interval variations, spatial domain variations or parameter variations may occur.
The aim of the present manuscript is to address the question of the sensitivity of POD bases with respect to such variations. In the seminal work of Davis and Kahan [6], the authors gave a general answer about the rotation of eigenspaces of Hermitian operators with respect to Hermitian perturbations. The theory developed in [6] was simplified in the case of compact self-adjoint operators by Rousselet and Chenais [19], but only the sensitivity of one-dimensional eigenspaces with respect to compact variations was studied. In the present work, we extend the results of [19] to multi-dimensional eigenspaces and place POD sensitivity with respect to time variation, space variation or parameter variation in the general framework of compact operator perturbations.

2. Preliminaries and notations.
Let H be a Hilbert space, endowed with its inner product ⟨·, ·⟩_H and its norm ‖·‖_H. Consider the space of self-adjoint compact operators K_s(H). Recall that the spectrum of any operator K ∈ K_s(H) consists of isolated real eigenvalues forming a sequence converging towards 0. Moreover, 0 can itself be an eigenvalue if K is not injective.
• For any u ∈ S, we set d(u, S(µ_n)) = min{ ‖u − v‖_H : v ∈ S(µ_n) }, the "distance" from the point u to the set S(µ_n).
• For any closed subspace E of H, the orthogonal projection onto E is denoted by Π_E.
The Courant-Fischer Theorem provides the following characterization of all eigenvalues:

µ⁺_n(K) = min_{V ∈ G_{n−1}(H)} max{ ⟨Kv, v⟩_H : v ∈ V^⊥, ‖v‖_H = 1 },   (1)

where G_{n−1}(H) is the Grassmann analytical manifold of all (n − 1)-dimensional subspaces of H, and V^⊥ denotes the orthogonal, in H, of the subspace V. This min-max value is achieved for V = E(µ⁺_1) + E(µ⁺_2) + ··· + E(µ⁺_{n−1}) and any vector v ∈ S(µ⁺_n). In the same way,

µ⁻_n(K) = max_{V ∈ G_{n−1}(H)} min{ ⟨Kv, v⟩_H : v ∈ V^⊥, ‖v‖_H = 1 }.   (2)

This max-min value is achieved for V = E(µ⁻_1) + E(µ⁻_2) + ··· + E(µ⁻_{n−1}) and any vector v ∈ S(µ⁻_n). Moreover, ‖K‖ = max( µ⁺_1(K), −µ⁻_1(K) ), where ‖·‖ denotes the norm in L(H), the space of linear continuous operators on H.
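In finite dimension, the min-max characterization (1) can be checked numerically. The sketch below (illustrative; a random symmetric matrix stands in for a self-adjoint compact operator) verifies that the minimum over (n − 1)-dimensional subspaces V is attained at the span of the top n − 1 eigenvectors, and that any other choice of V gives a value at least as large:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
K = (A + A.T) / 2                               # symmetric ("self-adjoint")
mu = np.sort(np.linalg.eigvalsh(K))[::-1]       # eigenvalues, decreasing

def max_rayleigh_on_complement(K, V):
    # Max of <Kv, v> over unit vectors v orthogonal to the columns of V.
    Q, _ = np.linalg.qr(V)
    W = null_space(Q.T)                         # orthonormal basis of span(V)-perp
    return np.linalg.eigvalsh(W.T @ K @ W).max()

n = 3
w, vecs = np.linalg.eigh(K)
Vstar = vecs[:, np.argsort(w)[::-1][:n - 1]]    # span of the top n-1 eigenvectors
assert abs(max_rayleigh_on_complement(K, Vstar) - mu[n - 1]) < 1e-8

for _ in range(20):                             # any other V does no better
    V = rng.standard_normal((6, n - 1))
    assert max_rayleigh_on_complement(K, V) >= mu[n - 1] - 1e-8
```

The restriction of K to V^⊥ is diagonalized on an orthonormal basis of that complement, so the inner maximum is just its largest eigenvalue.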
3. Sensitivity of eigenvalues and eigenspaces on K_s(H). Let us introduce, for every n ∈ N*, the functions K ∈ K_s(H) ↦ µ⁺_n(K) and K ∈ K_s(H) ↦ µ⁻_n(K).

Lemma 3.1. For every n ∈ N* and any K₁, K₂ ∈ K_s(H), |µ^±_n(K₁) − µ^±_n(K₂)| ≤ ‖K₁ − K₂‖.

Proof. For any K₁, K₂ in K_s(H) and u ∈ S, the inequality |⟨K₁u, u⟩_H − ⟨K₂u, u⟩_H| ≤ ‖K₁ − K₂‖ holds true. It follows that ⟨K₁u, u⟩_H ≤ ⟨K₂u, u⟩_H + ‖K₁ − K₂‖. Applying the min-max characterization (1), we obtain µ⁺_n(K₁) ≤ µ⁺_n(K₂) + ‖K₁ − K₂‖, and, exchanging the roles of K₁ and K₂, |µ⁺_n(K₁) − µ⁺_n(K₂)| ≤ ‖K₁ − K₂‖. The same arguments can be used for the negative eigenvalues µ⁻_n(K_i), i = 1, 2, with the max-min characterization (2), which completes the proof.
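Lemma 3.1 is the compact-operator version of Weyl's perturbation bound for symmetric matrices; a quick numerical check (sizes and matrices are illustrative):

```python
import numpy as np

# Check |mu_n(K1) - mu_n(K2)| <= ||K1 - K2|| (operator norm) on random
# symmetric matrices, eigenvalues sorted in decreasing order.
rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.standard_normal((8, 8))
    B = rng.standard_normal((8, 8))
    K1, K2 = (A + A.T) / 2, (B + B.T) / 2
    mu1 = np.sort(np.linalg.eigvalsh(K1))[::-1]
    mu2 = np.sort(np.linalg.eigvalsh(K2))[::-1]
    gap = np.abs(mu1 - mu2).max()               # worst eigenvalue variation
    opnorm = np.linalg.norm(K1 - K2, 2)         # spectral norm of K1 - K2
    assert gap <= opnorm + 1e-10                # Lipschitz bound of Lemma 3.1
```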
In what follows, we will study the regularity of eigenspaces with respect to self-adjoint compact operators. Before stating our main result, let us dwell a while on the notion of locally Lipschitz dependence for eigenspaces in K_s(H).
Let K ∈ K_s(H) and v ∈ S. First, remark that for every n ∈ N*, one has

Remark 1. The following equivalences hold true:

We now show a coerciveness result for the operator (K − µ_n(K) id_H) on E(µ_n(K))^⊥, for any K ∈ K_s(H) and n ∈ N*.

Lemma 3.2. Let K ∈ K_s(H) and n ∈ N*. Then there is C(K, n) > 0 such that

‖(K − µ_n(K) id_H) u‖_H ≥ C(K, n) ‖u‖_H   for every u ∈ E(µ_n(K))^⊥.   (5)

Proof. Consider the subspaces:
• E⁰_n(K) = Ker(K), the kernel of the operator K;
• E⁺_n(K), the direct sum of all eigenspaces of K whose eigenvalues are larger than µ_n(K);
• E⁻_n(K), the direct sum of all eigenspaces of K whose eigenvalues are lower than µ_n(K).
Since the operator K is compact and self-adjoint, the space E(µ_n(K))^⊥ can be decomposed as E(µ_n(K))^⊥ = E⁰_n(K) ⊕ E⁺_n(K) ⊕ E⁻_n(K). Therefore, the claim follows by setting

C(K, n) = min{ |µ − µ_n(K)| : µ ∈ σ(K) \ {µ_n(K)} }.   (6)

Remark 2. The constant C(K, n) defined by (6) satisfies C(K, n) ≤ 2‖K‖ for every operator K ∈ K_s(H).
At this stage, we can state and prove our main result. We mention here that our result is a generalization of a result obtained by Rousselet and Chenais [19], where only the special case of one-dimensional eigenspaces is considered. Our work provides a direct and simple proof of the celebrated result obtained by Davis and Kahan [6] on the rotation of eigenvectors by perturbations for more general Hermitian operators.
Theorem 3.3. Let K ∈ K_s(H) and n ∈ N*. Given δK ∈ K_s(H), the estimate holds true for any v ∈ S(µ_n(K + δK)), where the constant C(K, n) is given by (6).
Proof. Let K, δK ∈ K_s(H) and n ∈ N*, and consider an arbitrary v ∈ S(µ_n(K + δK)). From Remark 1, we can distinguish three situations. In the first situation, the result follows directly.
In the second situation, by classical geometry arguments, since v = u + (v − u), one has, on the other hand, the identity below. Substituting the last identity in (9), we get, using the facts that ‖v − u‖_H = d(v, S(µ_n(K))) ≤ √2, the corresponding bound. Moreover, using the coerciveness inequality (5), it holds that, from (8), it follows: to lighten the notation, we set δµ_n = µ_n(K + δK) − µ_n(K); then a direct computation gives, and therefore, by Lemma 3.1, we conclude that the estimate holds, where C(K, n) is given by (6), which provides the result.
In the third situation, the argument is similar to the second one with α = −1. The conclusion is then straightforward, which completes the proof.
To simplify the statement of the following corollary, we will assume that all operators are nonnegative, so that all their eigenvalues are nonnegative. Corollary 1. Let K₁ and K₂ be two nonnegative operators in K_s(H) and N ∈ N*. Let (Φ_n(K_j))_{1≤n≤N} ⊂ S be eigenvectors of K_j, j = 1, 2, corresponding to the eigenvalues µ₁(K_j) ≥ µ₂(K_j) ≥ ··· ≥ µ_N(K_j) > 0. Define V_N(K_j) as the subspace of H spanned by the eigenvectors (Φ_n(K_j))_{1≤n≤N}. Then
which ends the proof.
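A finite-dimensional illustration of this kind of subspace stability (a sketch of the Davis-Kahan-type behaviour discussed here, with an illustrative matrix and a crude gap-based bound, not the corollary's exact constant):

```python
import numpy as np

rng = np.random.default_rng(3)

def top_projector(K, N):
    # Orthogonal projector on the span of the top-N eigenvectors of K.
    w, V = np.linalg.eigh(K)
    VN = V[:, np.argsort(w)[::-1][:N]]
    return VN @ VN.T

K1 = np.diag([4.0, 3.0, 1.0, 0.5])      # spectral gap mu_2 - mu_3 = 2
E = rng.standard_normal((4, 4))
E = (E + E.T) / 2
E *= 1e-3 / np.linalg.norm(E, 2)        # small symmetric perturbation
K2 = K1 + E

diff = np.linalg.norm(top_projector(K1, 2) - top_projector(K2, 2), 2)
# The projectors move at most ~ ||dK|| / gap (crude Davis-Kahan-type bound).
assert diff <= 2 * np.linalg.norm(E, 2) / (3.0 - 1.0)
```

When the gap µ_N − µ_{N+1} shrinks, the bound degrades, which is exactly the clustering phenomenon discussed next.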
We give here a small example to illustrate this well-known phenomenon. Consider the special case where H = R², where ε is a small real parameter. It is clear that ‖K₂ − K₁‖ = |ε|√3 ≪ 1. The eigenvalues of K₁ and K₂ are respectively µ₁(K₁) = 1 + ε, µ₂(K₁) = 1 − ε, µ₁(K₂) = 1 + 2ε, and µ₂(K₂) = 1 − 2ε, whose variations satisfy |µ_i(K₂) − µ_i(K₁)| = |ε| ≪ 1, for i = 1, 2. However, the eigenvectors of K₁ and K₂ differ by a distance of order 1. This leads to the introduction of the clustering value of an operator K₁ with respect to an operator variation δK := K₂ − K₁.

The positive eigenvalues of the operator K_{λ,T,Ω} are sorted in decreasing order µ₁ ≥ µ₂ ≥ ··· ≥ µ_n ≥ ··· > 0. Of course, 0 can itself be an eigenvalue if K_{λ,T,Ω} is not injective. At this stage, we can consider spatial domain variations (Gappy POD), time-interval variations or λ-parameter variations, and the question is to what extent a fixed POD basis may represent the solution under such variations. For the reader's convenience, let us describe these three situations and the corresponding functional frameworks.
But as pointed out above, we have a bound which means that when we consider subspaces of POD eigenvectors corresponding to clustered eigenvalues, the approximation of solutions is sensitive to small parameter perturbations. In what follows, we will present other situations where clustering sensitivity can occur. More precisely, we will see that the Gappy POD method or the extension of time intervals leads to perturbations of compact operators, and the phenomenon of clustered eigenvalues may hold true.
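The clustering phenomenon can be reproduced numerically; the 2×2 matrices below are chosen for illustration (not necessarily those of the example above) with eigenvalues 1 ± ε and 1 ± 2ε:

```python
import numpy as np

# Clustered eigenvalues: a tiny operator perturbation rotates the
# eigenvectors by 45 degrees while the eigenvalues barely move.
eps = 1e-6
K1 = np.array([[1.0 + eps, 0.0], [0.0, 1.0 - eps]])   # eigenvectors: the axes
K2 = np.array([[1.0, 2 * eps], [2 * eps, 1.0]])       # eigenvectors: diagonals

w1 = np.sort(np.linalg.eigvalsh(K1))[::-1]            # 1 + eps, 1 - eps
w2 = np.sort(np.linalg.eigvalsh(K2))[::-1]            # 1 + 2 eps, 1 - 2 eps
assert np.abs(w1 - w2).max() <= eps + 1e-12           # eigenvalues barely move

# Leading eigenvectors: e1 for K1, (1, 1)/sqrt(2) for K2.
v1 = np.linalg.eigh(K1)[1][:, -1]
v2 = np.linalg.eigh(K2)[1][:, -1]
angle = np.degrees(np.arccos(abs(v1 @ v2)))
assert angle > 40.0                                   # O(1) rotation
```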
4.2. Spatial domain variation: Gappy POD. The Gappy POD method uses a POD basis to reconstruct missing or "gappy" data. This method was introduced by Everson and Sirovich [8] and can be described as follows. Let (Φ^{λ,T,Ω}_n)_{n≥1} be a POD basis corresponding to a solution u_{λ,T,Ω}. Let ω be a "small" subset of Ω and u_{λ,T,Ω\ω} be the restriction of u_{λ,T,Ω} to Ω \ ω. The aim of the Gappy POD method is to reconstruct u_{λ,T,Ω\ω} on the missing subset ω by the direct use of the basis (Φ^{λ,T,Ω}_n)_{n≥1}. To this end, let us write the "repaired" solution on all of Ω, built from the incomplete data u_{λ,T,Ω\ω}, as ũ_{λ,T,Ω} := Σ_{n≥1} ã_n(t) Φ^{λ,T,Ω}_n, where the unknown temporal coefficients (ã_n(t))_{n≥1} minimize the error between ũ_{λ,T,Ω} and u_{λ,T,Ω\ω} on Ω \ ω.
We will see that the Gappy POD procedure can be placed in the framework of spatial basis perturbation. Indeed, let K₁ := K_{λ,T,Ω} and K₂ := K_{λ,T,Ω\ω} be the compact, self-adjoint and nonnegative correlation operators associated to u_{λ,T,Ω} and u_{λ,T,Ω\ω} respectively. Then δK_ω := K₁ − K₂ is defined through u_{λ,T,ω}, the restriction of u_{λ,T,Ω} to ω. It is clear that δK_ω ∈ K_s(H(Ω)) and, by Lebesgue's Dominated Convergence Theorem, we easily get lim_{|ω|→0} δK_ω = 0 in L(H(Ω)), where |ω| denotes the measure of ω. Thus, the Gappy POD method can be seen as a spatial perturbation, and the corresponding POD bases as perturbed eigenvectors of compact self-adjoint operators. Hence, similar sensitivity results may occur in the presence of clustered eigenvalues for the operator K_{λ,T,Ω}.
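A discrete analogue of this limit can be sketched as follows (synthetic data, illustrative names; ω is modeled by zeroing the data on a few grid points):

```python
import numpy as np

m, nt = 200, 100
x = np.linspace(0.0, 1.0, m)
t = np.linspace(0.0, 1.0, nt)
# Synthetic space-time data on the full domain Omega (illustrative).
U = (np.outer(np.sin(np.pi * x), np.cos(np.pi * t))
     + 0.5 * np.outer(np.cos(3 * np.pi * x), np.sin(2 * np.pi * t)))

def correlation(U):
    # Discrete analogue of the spatial correlation operator.
    return U @ U.T / U.shape[1]

norms = []
for k in (20, 10, 5, 1):                # omega = first k grid points, shrinking
    U_gappy = U.copy()
    U_gappy[:k, :] = 0.0                # data considered missing on omega
    dK = correlation(U) - correlation(U_gappy)
    norms.append(np.linalg.norm(dK, 2))

# ||delta K_omega|| shrinks as the measure of omega goes to 0.
assert norms[-1] < norms[0]
```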

4.3. Time interval variation.
Assume that the solution u_{λ,T₁,Ω} ∈ L²(0, T₁; H(Ω)) is known on the time interval [0, T₁]. Let (Φ^{λ,T₁,Ω}_n)_{n≥1} be the POD basis associated to u_{λ,T₁,Ω}, via the compact, self-adjoint and nonnegative operator K_{λ,T₁,Ω}. Thus, a time-interval perturbation induces a compact operator perturbation, and hence a POD basis perturbation. Therefore, similar sensitivity results are expected in the presence of clustered eigenvalues for the operator K_{λ,T₁,Ω}.
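The continuity of the time-averaged correlation operator with respect to the interval length can be sketched numerically (synthetic data, illustrative names; the decaying field u(x, t) = sin(πx) e^{-t} keeps the dependence on T monotone):

```python
import numpy as np

m = 100
x = np.linspace(0.0, 1.0, m)

def correlation(T, nt=4000):
    # Discrete time-averaged spatial correlation of u(x, t) = sin(pi x) e^{-t}
    # over the interval [0, T].
    t = np.linspace(0.0, T, nt)
    U = np.outer(np.sin(np.pi * x), np.exp(-t))
    return U @ U.T / nt

K1 = correlation(1.0)                   # reference interval [0, T1], T1 = 1
norms = [np.linalg.norm(correlation(1.0 + dT) - K1, 2)
         for dT in (0.3, 0.1, 0.01)]

# The operator perturbation shrinks as T2 -> T1.
assert norms[0] > norms[1] > norms[2]
assert norms[2] < 5e-2 * np.linalg.norm(K1, 2)
```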