STABILITY ESTIMATES IN TENSOR TOMOGRAPHY

Abstract. We study the X-ray transform I of symmetric tensor fields on a smooth convex bounded domain Ω ⊂ R^n. The main result is the stability estimate ‖^s f‖_{L²} ≤ C ‖If‖_{H^{1/2}}, where ^s f is the solenoidal part of the tensor field f. The proof is based on a comparison of the Dirichlet integrals for the exterior and interior Dirichlet problems and on a generalization of the Korn inequality to symmetric tensor fields of arbitrary rank.

For a function φ ∈ S(TS^{n-1}), the Fourier transform is defined by
φ̂(y, ξ) = (2π)^{-(n-1)/2} ∫_{ξ^⊥} e^{-i⟨y,x⟩} φ(x, ξ) dx,
where dx is the (n−1)-dimensional Lebesgue measure on ξ^⊥ = {x ∈ R^n | ⟨x, ξ⟩ = 0}. This is the standard Fourier transform in the (n−1)-dimensional variable x, with ξ standing as a parameter. The space H^k(TS^{n-1}) is defined for k ∈ R as the completion of S(TS^{n-1}) with respect to the norm
‖φ‖²_{H^k(TS^{n-1})} = ∫_{S^{n-1}} ∫_{ξ^⊥} (1 + |y|²)^k |φ̂(y, ξ)|² dy dξ,
where dξ is the (n − 1)-dimensional volume form on S^{n-1} induced by the Euclidean metric of R^n.
Let Ω ⊂ R^n (n ≥ 2) be an open strictly convex bounded domain with a smooth boundary ∂Ω, and let Ω̄ be the closure of the domain. We use the term "smooth" as a synonym of "C^∞-smooth". Strict convexity means that the second quadratic form of the boundary is positive definite at every point p ∈ ∂Ω. The X-ray transform is the linear bounded operator
(1) I : C(Ω̄; S^m R^n) → C(TS^{n-1})
defined as follows: for f = (f_{i_1...i_m}) ∈ C(Ω̄; S^m R^n),
(2) If(x, ξ) = ∫_{-∞}^{∞} f_{i_1...i_m}(x + tξ) ξ^{i_1} ⋯ ξ^{i_m} dt, (x, ξ) ∈ TS^{n-1},
where f is assumed to be extended by zero outside Ω̄. The Einstein summation rule is used in (2) as well as in further formulas: the summation from 1 to n is assumed over every index repeated in lower and upper positions in a monomial. The integration is actually performed in finite limits because supp(f) ⊂ Ω̄.
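As a numerical illustration of definition (2) in the scalar case m = 0, one can integrate a compactly supported field along a line and compare with the closed-form value. This is a sketch of ours, not part of the paper: the field f, the helper xray_transform, and all parameters are illustrative assumptions.

```python
import numpy as np

def xray_transform(f, x, xi, R=2.0, n_steps=20001):
    # Numerically evaluate If(x, xi) = ∫ f(x + t·xi) dt (formula (2) with m = 0).
    t = np.linspace(-R, R, n_steps)
    vals = f(x[None, :] + t[:, None] * xi[None, :])
    return np.sum(0.5 * (vals[1:] + vals[:-1])) * (t[1] - t[0])  # trapezoid rule

# Compactly supported test field on the unit disk in R^2: f(x) = (1 - |x|^2)_+.
f = lambda p: np.maximum(0.0, 1.0 - np.sum(p**2, axis=-1))

# A point of TS^1: unit direction xi and offset x ∈ xi^⊥ with |x| = s.
s = 0.3
x = np.array([s, 0.0])
xi = np.array([0.0, 1.0])
exact = 4.0 / 3.0 * (1.0 - s**2) ** 1.5   # ∫ (1 - s² - t²)_+ dt in closed form
err = abs(xray_transform(f, x, xi) - exact)
print(err)  # tiny
```

Since the test field is radial, If depends only on the distance s from the line to the origin, which is what the closed-form value reflects.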
In the case of m = 0 (when f is a function), the X-ray transform is the main mathematical tool of computer tomography. In the case of m = 1 (when f is a vector field), the operator I is called the Doppler transform and serves as the main mathematical tool of Doppler tomography. In the cases m = 2 and m = 4, the operator I and some of its modifications are applied to various problems of tomography of anisotropic media, see [14, Chapters 6, 7] and [8, 17].
From the mathematical viewpoint, the main difference between scalar and tensor tomography is the following: the operator I has a big null-space in the case of m > 0. A tensor field f ∈ C(Ω̄; S^m R^n) is called a potential field if f = dv for some v ∈ C¹(Ω̄; S^{m-1} R^n) satisfying the boundary condition v|_{∂Ω} = 0. Any potential tensor field f satisfies If = 0, as is shown in the next section. Given If, we can therefore recover only the solenoidal part ^s f of a tensor field f.
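The vanishing of I on potential fields can be checked numerically. In this sketch (ours; all names are illustrative assumptions), v is a scalar potential vanishing on the boundary of the unit disk together with its gradient, and the potential field f = dv = ∇v (m = 1) is integrated along several lines:

```python
import numpy as np

# Potential v(x) = (1 - |x|^2)_+^2 vanishes on the unit circle together with
# its gradient; f = dv = ∇v is a potential vector field (m = 1).
def grad_v(p):
    r2 = np.sum(p**2, axis=-1, keepdims=True)
    return -4.0 * p * np.maximum(0.0, 1.0 - r2)

def doppler_transform(x, xi, R=2.0, n_steps=20001):
    # I(dv)(x, xi) = ∫ (dv)_i(x + t·xi) ξ^i dt, cf. (2) with m = 1.
    t = np.linspace(-R, R, n_steps)
    vals = grad_v(x[None, :] + t[:, None] * xi[None, :]) @ xi
    return np.sum(0.5 * (vals[1:] + vals[:-1])) * (t[1] - t[0])

worst = 0.0
for s in (0.0, 0.25, 0.6):
    for a in (0.0, 0.7, 2.1):
        xi = np.array([np.cos(a), np.sin(a)])
        x = s * np.array([-np.sin(a), np.cos(a)])   # offset x ⟂ xi
        worst = max(worst, abs(doppler_transform(x, xi)))
print(worst)  # ≈ 0: the transform kills potential fields
```

The integrand is d/dt v(x + tξ), so every line integral telescopes to zero up to quadrature error.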
The following statement is the main result of the present paper.
Main Theorem. For every integer m ≥ 0, the X-ray transform extends to a linear bounded operator
(3) I : L²(Ω; S^m R^n) → H^{1/2}(TS^{n-1}).
The solenoidal part ^s f of a tensor field f ∈ L²(Ω; S^m R^n) can be uniquely recovered from If, and the stability estimate
(4) ‖^s f‖_{L²(Ω; S^m R^n)} ≤ C ‖If‖_{H^{1/2}(TS^{n-1})}
is valid with a constant C depending on Ω and m but independent of f.
The first two statements of the theorem (the boundedness of the operator (3) and the implication (If = 0) ⇒ (^s f = 0)) are not new, but the estimate (4) is new. More precisely, the estimate was previously known only in the scalar case m = 0 [10, Chapter II, Theorem 5.1].
Our Main Theorem gives a partial answer to an important question that remains open in the general case. Let us briefly discuss the general setting.
A compact Riemannian manifold (M, g) with boundary is called a non-trapping manifold if (a) the boundary is strictly convex with respect to the metric g and (b) every geodesic γ_{x,ξ}(t) starting at a point x ∈ M in a direction 0 ≠ ξ ∈ T_xM reaches the boundary in a finite time τ(x, ξ). For such a manifold, the family of maximal geodesics is parameterized by points of the manifold
∂_−SM = {(x, ξ) ∈ SM | x ∈ ∂M, ⟨ξ, ν(x)⟩ ≤ 0},
where ν(x) is the unit outer normal vector to the boundary and SM is the unit sphere bundle. The space of smooth symmetric covariant tensor fields is denoted by C^∞(M; S^m τ_M) (the space of smooth sections of the mth symmetric power of the cotangent bundle). The geodesic X-ray transform is defined by
(5) If(x, ξ) = ∫_0^{τ(x,ξ)} f_{i_1...i_m}(γ_{x,ξ}(t)) γ̇^{i_1}_{x,ξ}(t) ⋯ γ̇^{i_m}_{x,ξ}(t) dt, (x, ξ) ∈ ∂_−SM.
This coincides with (2) in the case of the Euclidean metric. The integration limit τ(x, ξ) is a smooth function on ∂_−SM, as is proved in [14, Section 4], where the term "dissipative manifold" is used instead of "non-trapping manifold". Therefore If ∈ C^∞(∂_−SM) for every f ∈ C^∞(M; S^m τ_M). The following statement is proved in [12], see also [14, Theorem 4.3.3].
Theorem 1.1. Let (M, g) be a non-trapping Riemannian manifold of non-positive sectional curvature. For every integer m ≥ 0, the operator (5) uniquely extends to the linear bounded operator
I : H¹(M; S^m τ_M) → H¹(∂_−SM).
The solenoidal part ^s f of a tensor field f ∈ H¹(M; S^m τ_M) can be uniquely recovered from If, and the stability estimate
(8) ‖^s f‖²_{L²(M; S^m τ_M)} ≤ C (‖If‖²_{H¹(∂_−SM)} + m ‖f‖_{H¹(M; S^m τ_M)} ‖If‖_{H¹(∂_−SM)})
holds with a constant C independent of f.

Let us give two remarks on the theorem.
1. The second term on the right-hand side of (8) appeared due to the method of the proof used in [12]. The following question remains open: can the second term on the right-hand side of (8) be omitted? If not, then the problem of recovering ^s f from If is conditionally stable in the following sense: for stable recovery, we need an a priori estimate of ‖f‖_{H¹}. The factor m is written in front of the second term to emphasize that stable inversion is possible in the case of m = 0. Our Main Theorem states that the second term can be omitted in the case of the Euclidean metric, i.e., stable inversion is possible in the latter case. The question of the validity of the analogue of (4) remains open in the general case.
2. Comparing (4) and (8), we see that, most probably, the term ‖If‖²_{H¹} on the right-hand side of (8) can be replaced with ‖If‖²_{H^{1/2}}. This was not done because Sobolev spaces H^k with non-integer k were not used in [12]. Quite similarly, the factor ‖f‖_{H¹} can be replaced with ‖f‖_{H^{1/2}}.
A different stability estimate for the geodesic X-ray transform of second rank symmetric tensor fields is obtained in [18]. The paper is organized as follows. Section 2 contains preliminaries. After basic definitions, we present an analogue of the estimate (4) for tensor fields on the whole of R^n. The latter estimate was known before; we will use it in our proof of the Main Theorem. The section ends with Proposition 2.5, which reduces the Main Theorem to the case of a smooth solenoidal field. The reduction is not new, see [14, Lemma 4.3.4], and is presented for the sake of completeness.
In Section 3, we extend a solenoidal tensor field f ∈ C^∞(Ω̄; S^m R^n) by zero outside Ω. For the extension f^• ∈ L²(R^n; S^m R^n), we consider the decomposition into solenoidal and potential parts: f^• = ^s f^• + dw, δ ^s f^• = 0. Our proof of the Main Theorem depends in an essential way on the fact that the potential w is continuous on R^n.
In Section 4, we recall the Dirichlet principle for the operator δd and formulate Theorem 4.2 that reduces the Main Theorem to the problem of comparing the Dirichlet integrals for solutions to the interior and exterior Dirichlet problems with the same boundary condition.
Theorem 4.2 is proved in Section 5 with the help of Theorem 5.4 (the Korn inequality for symmetric tensor fields of arbitrary rank). Then the scheme of the proof of Theorem 5.4 is presented by an analogy with the proof of the classic Korn inequality.

2. Preliminaries.
For an integer m ≥ 0, let S^m R^n be the complex vector space of all symmetric R-multilinear maps
(9) f : R^n × ⋯ × R^n (m factors) → C.
The dimension of S^m R^n is the binomial coefficient \binom{n+m-1}{m}. Its elements are called rank m symmetric tensors on R^n. For such a tensor f ∈ S^m R^n, the complex-valued form f(ξ_1, …, ξ_m) is well defined for vectors ξ_i ∈ R^n (1 ≤ i ≤ m); the form depends linearly on each argument and is symmetric with respect to any permutation of the arguments. In particular, S^0 R^n = C. It is convenient to agree that S^m R^n = 0 for m < 0. Now, we briefly discuss the coordinate representation of symmetric tensors. We use Cartesian coordinates on R^n only. Let (e_1, …, e_n) be the coordinate basis of R^n. Every vector ξ ∈ R^n is uniquely written as ξ = ξ^i e_i. Then, for f ∈ S^m R^n,
f(ξ_1, …, ξ_m) = f_{i_1...i_m} ξ_1^{i_1} ⋯ ξ_m^{i_m}.
The coefficients f_{i_1...i_m} of the sum are called the coordinates (or components) of the tensor f with respect to the given coordinate system. The coefficients are symmetric in all indices (i_1, …, i_m). To adjust our notation to the Einstein summation rule, we use both lower and upper indices for denoting coordinates of vectors and tensors. There is no difference between co- and contravariant tensors because only Cartesian coordinates are used.
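The dimension formula can be double-checked by enumerating the independent components, i.e., the nondecreasing index tuples (a small verification of ours, not from the paper):

```python
import itertools, math

def sym_tensor_dim(n, m):
    # Independent components of a rank m symmetric tensor on R^n correspond
    # to nondecreasing index tuples i1 <= ... <= im.
    return sum(1 for _ in itertools.combinations_with_replacement(range(n), m))

checks = [(n, m, sym_tensor_dim(n, m) == math.comb(n + m - 1, m))
          for n in (1, 2, 3, 4) for m in (0, 1, 2, 3, 4)]
print(all(ok for _, _, ok in checks))  # True
```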
The dot product on S^m R^n is defined in coordinates by
(10) ⟨f, h⟩ = f_{i_1...i_m} h̄^{i_1...i_m}.
The corresponding norm is denoted by |f|. In particular, S¹ R^n is identified with C^n by f(ξ) = ⟨f, ξ⟩ for f ∈ C^n and ξ ∈ R^n. We will need some algebraic operations on symmetric tensors. Let ⊗^m R^n be the space of all rank m tensors on R^n, i.e., the space of R-multilinear maps f : R^n × ⋯ × R^n (m factors) → C.
Unlike (9), symmetry is not required here. There is the canonical projection (symmetrization) σ : ⊗^m R^n → S^m R^n defined by
σf(ξ_1, …, ξ_m) = (1/m!) Σ_{π ∈ Π_m} f(ξ_{π(1)}, …, ξ_{π(m)}),
where the summation is performed over the set Π_m of all permutations of the set {1, …, m}.
Let us recall that, for f ∈ ⊗^k R^n and h ∈ ⊗^m R^n, the tensor product f ⊗ h ∈ ⊗^{k+m} R^n is defined by
(f ⊗ h)(ξ_1, …, ξ_{k+m}) = f(ξ_1, …, ξ_k) h(ξ_{k+1}, …, ξ_{k+m}).
Now, for f ∈ S^k R^n and h ∈ S^m R^n, the symmetric tensor product fh ∈ S^{k+m} R^n is defined by fh = σ(f ⊗ h). Being furnished with this product, SR^n = ⊕_{m=0}^∞ S^m R^n becomes a commutative graded algebra, the algebra of symmetric tensors, which is actually isomorphic to the algebra of polynomials in n variables.
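Both operations are easy to realize on numpy arrays, with σ implemented as an average over axis permutations; the following sketch (ours; real tensors are used in place of complex ones) verifies that σ is a projection and that the symmetric product is commutative:

```python
import itertools, math
import numpy as np

def sigma(t):
    # Symmetrization σ: average over all permutations of the tensor's axes.
    return sum(np.transpose(t, p)
               for p in itertools.permutations(range(t.ndim))) / math.factorial(t.ndim)

def sym_prod(f, h):
    # Symmetric product fh = σ(f ⊗ h).
    return sigma(np.multiply.outer(f, h))

rng = np.random.default_rng(0)
n = 3
f = rng.standard_normal(n)              # a rank 1 tensor
h = sigma(rng.standard_normal((n, n)))  # a symmetric rank 2 tensor

fh = sym_prod(f, h)
assert np.allclose(sigma(fh), fh)        # σ is a projection: σ∘σ = σ
assert np.allclose(fh, sym_prod(h, f))   # commutativity: fh = hf
print("symmetrization and symmetric product verified")
```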
For a fixed tensor f ∈ S^k R^n, we denote by i_f : S^m R^n → S^{m+k} R^n the operator of symmetric multiplication by f, i.e., i_f(u) = σ(f ⊗ u). The adjoint of i_f is the operator j_f : S^{m+k} R^n → S^m R^n of contraction with f, which is written in coordinates as (j_f u)_{i_1...i_m} = f^{j_1...j_k} u_{j_1...j_k i_1...i_m}. In particular, the operator j_y : S^m R^n → S^{m-1} R^n (y ∈ R^n) will participate in many formulas below. Now, we briefly discuss symmetric tensor fields. Roughly speaking, a rank m symmetric tensor field f on a domain Ω ⊂ R^n is a function f : Ω → S^m R^n, Ω ∋ x ↦ f(x) = f_x ∈ S^m R^n. More precisely, let Ω ⊂ R^n be an open domain and let F(Ω) be a functional space of functions on the domain. By F(Ω; S^m R^n), we denote the space of all maps f : Ω → S^m R^n such that all coordinate functions f_{i_1...i_m}(x) belong to F(Ω). For most of the spaces F(Ω), the resulting space F(Ω; S^m R^n) is independent of the choice of Cartesian coordinates. In particular, the following spaces are defined by this scheme.
The space C^k(Ω; S^m R^n) (0 ≤ k ≤ ∞) of rank m symmetric tensor fields whose partial derivatives of order ≤ k are continuous in Ω.
For k ∈ R, the space H^k(Ω; S^m R^n) consists of the functions or distributions in Ω that can be extended to an element ũ ∈ H^k(R^n; S^m R^n), and the norm ‖u‖_{H^k(Ω)} is defined as inf{‖ũ‖_{H^k(R^n)} | ũ = u in Ω}; for the domains Ω that we consider, this definition agrees with other definitions. The same scheme defines the space D′(Ω; S^m R^n) of tensor field distributions on Ω, the space S(R^n; S^m R^n) of smooth fast decaying symmetric tensor fields on R^n, the space S′(R^n; S^m R^n) of tempered tensor field distributions on R^n, and the space E′(R^n; S^m R^n) of compactly supported tensor field distributions on R^n.
Each of these spaces is furnished with the corresponding topology. The L²-product on L²(Ω; S^m R^n) is defined by
⟨f, h⟩_{L²(Ω; S^m R^n)} = ∫_Ω ⟨f(x), h(x)⟩ dx.
Now, a couple of words on the coordinate representation of symmetric tensor fields. Let (x_1, …, x_n) = (x^1, …, x^n) be a Cartesian coordinate system on R^n with the coordinate basis (e_1, …, e_n). A tensor field f ∈ C^∞(Ω; S^m R^n) can be considered as a family of smooth functions f_{i_1...i_m}(x^1, …, x^n) on Ω which are symmetric in all indices. We write this as f = (f_{i_1...i_m}), assuming the choice of coordinates to be clear from the context. In particular, f ∈ C^∞(Ω; S^0 R^n) is just a smooth function on Ω. Since S¹ R^n is identified with C^n, a first rank tensor field f ∈ C^∞(Ω; S¹ R^n) can be identified with the vector field f = f^i e_i as well as with the one-form f = f_i dx^i. Now, we introduce two important first order differential operators. The inner derivative d : C^∞(Ω; S^m R^n) → C^∞(Ω; S^{m+1} R^n) is defined by
(11) (df)_{i_1...i_{m+1}} = σ(∂f_{i_1...i_m}/∂x^{i_{m+1}}),
where σ is the symmetrization in all indices. The divergence δ : C^∞(Ω; S^m R^n) → C^∞(Ω; S^{m-1} R^n) is defined by
(13) (δf)_{i_1...i_{m-1}} = ∂f_{i_1...i_{m-1} j}/∂x_j.
In the case of m = 0, df = (∂f/∂x^i) dx^i is the differential of the function f. Formulas (11) and (13) are actually valid in curvilinear coordinates too, but partial derivatives must be replaced with covariant derivatives in the latter case.
The operators d and −δ are formally adjoint to each other with respect to the above-introduced L 2 -product. Moreover, there is a natural Green's formula for these operators. Let us reproduce a special case of [14, Theorem 3.3.1].
Proposition 2.1. Let Ω ⊂ R^n be a C¹-smooth bounded domain. For tensor fields u ∈ C¹(Ω̄; S^m R^n) and v ∈ C¹(Ω̄; S^{m+1} R^n), the following Green's formula is valid:
∫_Ω (⟨du, v⟩ + ⟨u, δv⟩) dx = ∫_{∂Ω} ⟨u, j_ν v⟩ dS,
where ν is the unit outward normal vector to the boundary and dS is the (n − 1)-dimensional volume form on ∂Ω induced by the Euclidean metric.
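For fields vanishing near ∂Ω the boundary term in Green's formula drops, leaving ⟨du, v⟩ = −⟨u, δv⟩, i.e., the formal adjointness of d and −δ. On a uniform grid this identity holds essentially exactly, because central differences obey a discrete summation-by-parts formula for compactly supported data. A sketch of ours for m = 0 (all names and parameters are illustrative assumptions):

```python
import numpy as np

# Grid on [-1, 1]^2; u is a scalar field (m = 0), v = (v1, v2) a vector field,
# both supported well inside the square so that no boundary term arises.
N = 256
xs = np.linspace(-1.0, 1.0, N)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
r2 = X**2 + Y**2
bump = np.where(r2 < 0.25, np.exp(-1.0 / np.maximum(0.25 - r2, 1e-12)), 0.0)
u = bump
v1, v2 = bump * X, bump * Y

du1, du2 = np.gradient(u, h)                                       # du = grad u
delta_v = np.gradient(v1, h, axis=0) + np.gradient(v2, h, axis=1)  # δv = div v

lhs = np.sum(du1 * v1 + du2 * v2) * h**2   # ⟨du, v⟩_{L²}
rhs = -np.sum(u * delta_v) * h**2          # −⟨u, δv⟩_{L²}
print(abs(lhs - rhs))  # ~ machine precision
```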
The classical Helmholtz decomposition of a vector field can be generalized to arbitrary rank symmetric tensor fields as follows.
Proposition 2.2. Let Ω ⊂ R^n be a smooth bounded domain and 0 ≤ k ∈ R. Any tensor field f ∈ H^k(Ω; S^m R^n) (m ≥ 0) can be uniquely represented in the form
(14) f = ^s f + dv, δ ^s f = 0, v|_{∂Ω} = 0,
where ^s f ∈ H^k(Ω; S^m R^n) and v ∈ H^{k+1}(Ω; S^{m-1} R^n). The summands ^s f and dv of the decomposition are called the solenoidal and potential parts of the field f respectively.
Proposition 2.2 is proved in [14, Theorem 3.3.2] for an integer k ≥ 0 in the more general setting of symmetric tensor fields on a Riemannian manifold. The case of a general k ≥ 0 follows by interpolation. Now, we briefly discuss symmetric tensor fields on the whole of R^n. The Fourier transform on S(R^n; S^m R^n) and on S′(R^n; S^m R^n) is defined componentwise. We will need the following analogue of Proposition 2.2, which is a special case of [15, Theorem 3.5].
Proposition 2.3. Any tensor field f ∈ L²(R^n; S^m R^n) (m ≥ 0) can be uniquely represented in the form
(15) f = ^s f + dv, δ ^s f = 0,
with ^s f ∈ L²(R^n; S^m R^n) and v ∈ H¹_1(R^n; S^{m-1} R^n). The estimates
(16) ‖^s f‖_{L²(R^n; S^m R^n)} ≤ ‖f‖_{L²(R^n; S^m R^n)}, ‖dv‖_{L²(R^n; S^m R^n)} ≤ ‖f‖_{L²(R^n; S^m R^n)}
hold.
The X-ray transform is defined by the same formula (2). Being initially defined on the Schwartz space, the X-ray transform then extends to wider spaces of symmetric tensor fields. First of all, we observe that integral (2) converges in the classical sense if a field f ∈ C(R^n; S^m R^n) decays at infinity so that |f(x)| ≤ C(1 + |x|)^{-1-ε} for some ε > 0. The most important feature of the X-ray transform is the presence of a big null-space in the case of m > 0. If a tensor field v ∈ C¹(R^n; S^{m-1} R^n) decays at infinity so that v(x) → 0 as |x| → ∞, then I(dv) = 0. Indeed, by the definition of d,
(dv)_{i_1...i_m}(x + tξ) ξ^{i_1} ⋯ ξ^{i_m} = (d/dt)[v_{i_1...i_{m-1}}(x + tξ) ξ^{i_1} ⋯ ξ^{i_{m-1}}].
Using this identity, we derive
I(dv)(x, ξ) = ∫_{-∞}^{∞} (d/dt)[v_{i_1...i_{m-1}}(x + tξ) ξ^{i_1} ⋯ ξ^{i_{m-1}}] dt = 0.
In other words, the X-ray transform vanishes on potential tensor fields. Therefore, given If, we can hope to recover only the solenoidal part of the field f. Let
(17) I : S(R^n; S^m R^n) → S(TS^{n-1})
be the X-ray transform restricted to the Schwartz space. The following statement is a special case of [15, Corollary 4.3].
Proposition 2.4. The operator (17) uniquely extends to a bounded operator
I : L²(R^n; S^m R^n) → H^{1/2}_{1/2}(TS^{n-1}),
and the stability estimate
(18) ‖^s f‖_{L²(R^n; S^m R^n)} ≤ C ‖If‖_{H^{1/2}_{1/2}(TS^{n-1})}
holds with a constant C independent of f.
Under the hypotheses of Proposition 2.4, there exists a generalization of (18) to an estimate of ‖^s f‖_{H^k(R^n; S^m R^n)} for any k ∈ R and even of the more general norm ‖^s f‖_{H^k_t(R^n; S^m R^n)} (t > −n/2), see [15]. At first sight, our Main Theorem and Proposition 2.4 are very similar. However, the proof of Proposition 2.4 is much easier than that of the Main Theorem because the Fourier transform works more directly for tensor fields on the whole of R^n than for tensor fields on a bounded domain. Proposition 2.4 will be used in our proof of the Main Theorem. Now, we reduce the Main Theorem to the following special case.

Proposition 2.5.
Let Ω ⊂ R^n be a strictly convex bounded domain with smooth boundary. Let f ∈ C^∞(Ω̄; S^m R^n) (m ≥ 0) be a solenoidal tensor field, i.e.,
(19) δf = 0.
The stability estimate
(20) ‖f‖_{L²(Ω; S^m R^n)} ≤ C ‖If‖_{H^{1/2}(TS^{n-1})}
holds with a constant C independent of f.
Proof of the Main Theorem. We first prove the boundedness of the operator (3). Given f ∈ L²(Ω; S^m R^n), let f^• ∈ L²(R^n; S^m R^n) be the extension of f by zero outside Ω. Then If = If^•. By Proposition 2.4, the estimate
(21) ‖If‖_{H^{1/2}_{1/2}(TS^{n-1})} ≤ C ‖f‖_{L²(Ω; S^m R^n)}
holds with a constant C independent of f. If Ω ⊂ {x ∈ R^n | |x| ≤ A}, then If is supported in the compact set {(x, ξ) ∈ TS^{n-1} | |x| ≤ A}. By [15, Lemma 2.4], the norms ‖·‖_{H^{1/2}_{1/2}(TS^{n-1})} and ‖·‖_{H^{1/2}(TS^{n-1})} are equivalent on the space of functions supported in a fixed compact set. Therefore (21) is equivalent to the inequality
‖If‖_{H^{1/2}(TS^{n-1})} ≤ C′ ‖f‖_{L²(Ω; S^m R^n)},
which means that the operator (3) is bounded.
Next, we prove estimate (4). There are two differences between the hypotheses of Proposition 2.5 and those of the Main Theorem: (1) f is assumed to be a solenoidal field and (2) f is assumed to be smooth.
We first consider the case of a smooth tensor field f ∈ C^∞(Ω̄; S^m R^n). Let ^s f ∈ C^∞(Ω̄; S^m R^n) be the solenoidal part of f. Applying (20) to ^s f, we obtain estimate (4) for f. The Main Theorem is thus proved in the case of a smooth field f. Next, we approximate a tensor field f ∈ L²(Ω; S^m R^n) by smooth tensor fields, i.e., choose a sequence f_k ∈ C^∞(Ω̄; S^m R^n) (k = 1, 2, …) such that ‖f_k − f‖_{L²(Ω; S^m R^n)} → 0. Then, by Proposition 2.2, ‖^s f_k − ^s f‖_{L²(Ω; S^m R^n)} → 0 and, by the first statement of the Main Theorem, ‖If_k − If‖_{H^{1/2}(TS^{n-1})} → 0. Writing down inequality (4) for f_k and passing to the limit as k → ∞, we obtain (4) for f.

3. Extension of a solenoidal field by zero. Let f ∈ C^∞(Ω̄; S^m R^n) be a solenoidal tensor field,
(22) δf = 0,
and let f^• ∈ L²(R^n; S^m R^n) be its extension by zero outside Ω. By Proposition 2.3, we have the decomposition
(23) f^• = ^s f^• + dw, δ ^s f^• = 0,
where ^s f^• ∈ L²(R^n; S^m R^n), w ∈ H¹_1(R^n; S^{m-1} R^n) and dw ∈ L²(R^n; S^m R^n).

Theorem 3.1. In the setting (22)–(23), the potential w is continuous on R^n and satisfies |w(x)| → 0 as |x| → ∞.
Observe that δf^• is a tensor field distribution supported in ∂Ω. It can be easily found [16, Lemma 5.2]:
(24) δf^• = −(j_ν f) dS,
where the distribution dS ∈ E′(R^n) is defined by ⟨dS | φ⟩ = ∫_{∂Ω} φ(x) dS for a test function φ ∈ C^∞(R^n) (we use the brackets ⟨· | ·⟩ to denote the action of a distribution on a test function) and j_ν f ∈ C^∞(R^n; S^{m-1} R^n) is an arbitrary smooth extension of the tensor field j_ν f ∈ C^∞(∂Ω; S^{m-1} R^n) (ν being the unit outer normal vector to ∂Ω); the right-hand side of (24) is independent of the choice of such an extension. Observe also that
(25) ∫_{∂Ω} (j_ν f)_{i_1...i_{m-1}} dS = 0,
i.e., every component of the tensor field j_ν f integrates to zero over ∂Ω. This easily follows from (22) with the help of the divergence theorem.
Applying the operator δ in the distribution sense to (23), we obtain
(26) δdw = δf^•.
The next lemma lists a number of properties of so-called simple layer potentials that are well known from potential theory. We present the proof because we do not know a paper where all statements of the lemma are presented together.
Lemma 3.2. Let h ∈ E′(R^n) (n ≥ 2) be a continuous density on a smooth compact hypersurface S ⊂ R^n. Assume that h has mean zero, in other words, ⟨h | φ⟩ = 0 for all φ ∈ C^∞(R^n) that are constant in some neighborhood of S. Let E ∈ L¹_loc(R^n) and assume that its Fourier transform Ê(y) is smooth outside the origin and (positively) homogeneous of degree −2. Then the convolution E * h is continuous everywhere in R^n and E * h(x) tends to zero as |x| tends to infinity.
Proof. Note first that if n ≥ 3, then Ê(y) is locally integrable and defines a homogeneous distribution in S′(R^n); hence E(x) must be smooth outside the origin and homogeneous of degree 2 − n. If n = 2, then Ê(y) is not a homogeneous distribution in R² unless the mean ∫_{|y|=1} Ê(y) dS is equal to zero (see [5, Section 3.2]). But the assumption implies that E(x) must satisfy an estimate
|E(x)| ≤ C(1 + |log |x||).
One way to see this is to choose a constant c so that F(y) = Ê(y) − c|y|^{-2} satisfies ∫_{|y|=1} F(y) dS = 0, which implies that F defines a homogeneous distribution in R² and hence that the inverse Fourier transform of F must be homogeneous of degree zero and smooth outside the origin and therefore bounded. And the inverse Fourier transform of |y|^{-2} is of course known to be equal to a constant times log |x|, the fundamental solution of the Laplace operator in dimension 2. We next prove that E * h is continuous. Since E is smooth outside the origin, it is clear that f = E * h is smooth in the complement of the hypersurface S. Therefore it is enough to prove that f is continuous at an arbitrary point x_0 ∈ S. Write E = E_0 + E_1, where E_0 is compactly supported and E_1 is smooth everywhere. Then E_1 * h ∈ C^∞, so it is enough to prove that E_0 * h is continuous at x_0. Let V be a neighborhood of x_0 ∈ S. Decompose h, h = h_0 + h_1, such that h_0 is supported in V, h_1 vanishes in some smaller neighborhood of x_0, and both h_0 and h_1 are continuous densities on S. Then E_0 * h_1 is smooth in a neighborhood of x_0, so it is enough to prove that E_0 * h_0 is continuous at x_0. If V is sufficiently small, we can choose coordinates, writing x = (x′, x_n), so that S can be written as x_n = ψ(x′) near x_0. Then the distribution h_0 can be written
⟨h_0 | φ⟩ = ∫_{R^{n-1}} g(x′) φ(x′, ψ(x′)) dx′
for some compactly supported continuous function g(x′).
This gives
E_0 * h_0(x) = ∫_{R^{n-1}} E_0(x′ − y′, x_n − ψ(y′)) g(y′) dy′.
To see that the integral exists, observe that |E_0(x′ − y′, x_n − ψ(y′))| ≤ C|x′ − y′|^{2-n} if n ≥ 3 and |E_0(x′ − y′, x_n − ψ(y′))| ≤ C(1 + |log |x′ − y′||) if n = 2, which is integrable over bounded regions in R^{n-1}. Set f_0 = E_0 * h_0. To see that E_0 * h_0 is continuous, it is now enough to observe that the function x ↦ E_0(x′ − y′, x_n − ψ(y′)) is smooth for y′ ≠ x′ and that we can make
∫_{|y′ − x′| < δ} |E_0(x′ − y′, x_n − ψ(y′)) g(y′)| dy′
as small as we wish by choosing δ sufficiently small.
If n ≥ 3, the fact that E(x) tends to zero at infinity immediately implies that E * h(x) tends to zero at infinity. If n = 2, we have to use the assumption that the mean of h is zero and the fact that the gradient of E(x) tends to zero at infinity. To prove the latter fact, we note that the Fourier transform of ∂_j E, namely iy_j Ê(y), is a homogeneous distribution in R² of degree −1. Therefore ∂_j E(x) must also be homogeneous of degree −1, and since E(x) is smooth outside the origin, this proves the claim.
If we denote the total mass of the measure h by ‖h‖_M and the diameter of Ω by diam(Ω), we can now write, for large |x|,
|E * h(x)| = |∫ (E(x − y) − E(x)) dh(y)| ≤ ‖h‖_M diam(Ω) sup_{|z| ≥ |x| − diam(Ω)} |∇E(z)| ≤ C/|x|
for some C, which completes the proof.
Proof of Theorem 3.1. To study the solution w to equation (26), we shall use Lemma 3.2. Note that δd is a second order partial differential operator with constant coefficients that operates on C^∞(R^n; S^m R^n). Hence δd is a convolution operator, δdw = K * w, where K is a distribution supported at the origin with values in the space of linear operators on S^m R^n. The Fourier transform of δdw = K * w is −j_y i_y ŵ(y) = (2π)^{n/2} K̂(y) ŵ(y). In other words, the characteristic polynomial, or symbol, of δd is equal to the operator valued function (2π)^{n/2} K̂(y) = −j_y i_y. Note that the operator j_y i_y can be represented by a matrix of second order homogeneous polynomials with respect to some basis for S^m R^n, for instance the basis induced in S^m R^n by the standard basis of R^n. Actually we need only know that the operator j_y i_y on S^m R^n is invertible for every fixed y ≠ 0, because that implies that its inverse can be represented by a matrix whose every entry is a homogeneous function of y of degree −2 that is smooth outside the origin. More precisely, an explicit formula for the inverse of j_y i_y is given in [16, Lemma 7.6]. Since j_y i_y/|y|² is homogeneous of degree zero, this confirms that (j_y i_y)^{-1} is homogeneous of degree −2 and smooth outside the origin. It now follows that a solution w of the equation (26) can be given by a fundamental solution E(x) defined by (2π)^{n/2} Ê(y) = −(j_y i_y)^{-1}, where each entry of the matrix E(x) has a Fourier transform that is homogeneous of degree −2 and is smooth outside the origin. Applying Lemma 3.2 to every component of the tensor field w now finishes the proof of Theorem 3.1.
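For m = 1 the symbol is available in closed form: j_y i_y u = (|y|² u + ⟨y, u⟩ y)/2, with eigenvalue |y|²/2 on y^⊥ and |y|² on the span of y, which makes the invertibility for y ≠ 0 explicit. A quick numerical confirmation (our sketch; names and the choice of n are illustrative):

```python
import numpy as np

# For m = 1 the symbol j_y i_y acts on u ∈ R^n by (|y|² u + ⟨y,u⟩ y)/2,
# hence its eigenvalues are |y|²/2 (on y^⊥) and |y|² (on R·y).
rng = np.random.default_rng(1)
n = 4
y = rng.standard_normal(n)

def i_y(u):      # symmetric multiplication by y: σ(y ⊗ u)
    return 0.5 * (np.outer(y, u) + np.outer(u, y))

def j_y(w):      # contraction with y: (j_y w)_i = y^j w_{ji}
    return y @ w

M = np.column_stack([j_y(i_y(e)) for e in np.eye(n)])
expected = 0.5 * ((y @ y) * np.eye(n) + np.outer(y, y))
assert np.allclose(M, expected)
eigs = np.linalg.eigvalsh(M)
assert np.min(eigs) > 0  # invertible for y != 0
print("symbol j_y i_y verified for m = 1")
```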

4. Comparison of Dirichlet integrals. The second order differential operator
δd : C^∞(R^n; S^m R^n) → C^∞(R^n; S^m R^n)
can be called the Laplacian on symmetric tensor fields. It is very similar to the classical Laplacian ∆ (and coincides with ∆ in the case of m = 0). In particular, for a smooth bounded domain Ω ⊂ R^n and for a tensor field h ∈ C(∂Ω; S^m R^n), the interior Dirichlet problem
(29) δdv = 0 in Ω, v|_{∂Ω} = h
is well posed. Let Ω′ = R^n \ Ω̄. Under the same hypotheses on Ω and h, the exterior Dirichlet problem is posed, in the case of n ≥ 3, by
(30) δdu = 0 in Ω′, u|_{∂Ω} = h, u(x) → 0 as |x| → ∞.
In the case of n = 2, the last condition should be replaced with
(31) u(x) = O(1) as |x| → ∞.

Theorem 4.2.
Let Ω ⊂ R^n be a smooth convex bounded domain. Given a tensor field h ∈ C(∂Ω; S^m R^n), let v ∈ C^∞(Ω; S^m R^n) be the solution to the interior Dirichlet problem (29) and let u ∈ C^∞(Ω′; S^m R^n) be the solution to the exterior Dirichlet problem (30)–(31). The inequality
(32) ∫_Ω |dv(x)|² dx ≤ C ∫_{Ω′} |du(x)|² dx
holds with some constant C that depends on (Ω, m, n) but is independent of h.
This theorem will be proved at the end of the current section for m = 0, and in the next section for an arbitrary m. Now, we are going to demonstrate that Proposition 2.5 (and hence the Main Theorem) follows from Theorem 4.2.
Proof of Proposition 2.5. Let Ω ⊂ R n be a smooth strictly convex bounded domain. Given a tensor field f ∈ C ∞ (Ω; S m R n ) satisfying (19), we again extend it by zero outside Ω and let f • ∈ L 2 (R n ; S m R n ) be the extension. Let (23) be the decomposition of f • into solenoidal and potential parts according to Proposition 2.3. Recall that s f • ∈ L 2 (R n ; S m R n ), w ∈ H 1 1 (R n ; S m−1 R n ) and dw ∈ L 2 (R n ; S m R n ).

By Proposition 2.4,
‖^s f^•‖_{L²(R^n; S^m R^n)} ≤ C ‖If^•‖_{H^{1/2}_{1/2}(TS^{n-1})}.
Repeating our arguments after formula (21) (the support of If is a subset of a fixed compact set) and using If = If^•, we write this estimate in the form
(33) ‖^s f^•‖²_{L²(R^n; S^m R^n)} ≤ C ‖If‖²_{H^{1/2}(TS^{n-1})}.
Since f^•|_{Ω′} = 0, equation (23) implies that ^s f^•|_{Ω′} = −dw|_{Ω′}. Therefore the latter estimate can be written as
(34) ‖dw‖²_{L²(Ω′; S^m R^n)} ≤ C ‖If‖²_{H^{1/2}(TS^{n-1})}.
Applying the operator δ to equation (23), we obtain
(35) δdw = δf^•.
By Theorem 3.1, w is continuous in R^n. Set h = w|_{∂Ω} ∈ C(∂Ω; S^{m-1} R^n). Since δf^• = 0 in Ω and f^• ≡ 0 in Ω′, equation (35) means that w solves both the interior and exterior Dirichlet problems with the boundary condition w|_{∂Ω} = h. Assuming the validity of Theorem 4.2 for rank m − 1 tensor fields, we obtain the estimate
(36) ∫_Ω |dw(x)|² dx ≤ C₁ ∫_{Ω′} |dw(x)|² dx
with a constant C₁ independent of f. Together with (34), this gives
(37) ‖dw‖²_{L²(R^n; S^m R^n)} ≤ (C₁ + 1)C ‖If‖²_{H^{1/2}(TS^{n-1})}.
Finally, we derive from (23)
‖f‖²_{L²(Ω; S^m R^n)} = ‖f^•‖²_{L²(R^n; S^m R^n)} ≤ 2(‖^s f^•‖²_{L²(R^n; S^m R^n)} + ‖dw‖²_{L²(R^n; S^m R^n)})
and use estimates (33) and (37) to obtain
‖f‖²_{L²(Ω; S^m R^n)} ≤ 2(C₁ + 2)C ‖If‖²_{H^{1/2}(TS^{n-1})}.

We have thus proved (20).
We finish the section by proving Theorem 4.2 for m = 0 and making some progress for m > 0.
In the two-dimensional case, the Dirichlet integral of a scalar function is invariant under a conformal transformation of the domain; this is well known. The invariance fails in dimensions n > 2. Nevertheless, the rule for the transformation of the Dirichlet integral under a conformal transformation of the domain can be easily found and looks as follows.

Proposition 4.1. Let ϕ : Ω₁ → Ω₂ be a conformal diffeomorphism between domains in R^n and let ψ = ϕ^{-1}. For every function u ∈ C¹(Ω₂),
(38) ∫_{Ω₁} |∇(u ∘ ϕ)(x)|² dx = ∫_{Ω₂} |det ψ′(y)|^{(n-2)/n} |∇u(y)|² dy,
where ψ′(y) is the Jacobi matrix of the map ψ.
We omit the easy proof of this statement. Now, we are going to apply (38) to the case when ϕ is the inversion with respect to the unit sphere. Fix r ∈ (0, 1) and set B_ρ = {x ∈ R^n | |x| < ρ}. For the inversion
ϕ(x) = x/|x|², ψ = ϕ^{-1} = ϕ,
we have
ϕ′(x) = |x|^{-2} (Id − 2xx^T/|x|²).
One can easily see that x is the eigenvector of the matrix in parentheses on the right-hand side associated to the eigenvalue −1, while x^⊥ is the eigenspace of the matrix associated to the eigenvalue +1. Therefore det ϕ′(x) = −|x|^{-2n} and |det ψ′(y)| = |y|^{-2n} for ψ = ϕ^{-1}. Applying (38), we obtain
(39) ∫_{B_r \ {0}} |∇(u ∘ ϕ)(x)|² dx = ∫_{R^n \ B̄_{1/r}} |y|^{-2(n-2)} |∇u(y)|² dy
for a function u ∈ C¹(R^n \ B̄_{1/r}).

Proof of Theorem 4.2 in the case of m = 0. Let Ω ⊂ R^n be a smooth convex bounded domain. Without loss of generality, we can assume that 0 ∈ Ω. Choose 0 < r < 1 such that B_r ⊂ Ω and Ω̄ ⊂ B_{1/r}. There exists a diffeomorphism ϕ : Ω \ {0} → Ω′ such that ϕ|_{∂Ω} = Id and ϕ(x) = x/|x|² for x ∈ B_r \ {0}. We omit the easy proof of the latter statement. (More generally, such a diffeomorphism exists if Ω is diffeomorphic to a ball, although the proof is more complicated. Otherwise, if Ω is not homeomorphic to a ball, there can be a topological obstruction to the existence of such a ϕ.) Fix such a diffeomorphism. Now, let h be a continuous function on ∂Ω. Let u be the solution to the exterior Dirichlet problem (30) (or (31) in the case of n = 2) and let v be the solution to the interior Dirichlet problem (29). Set
(40) w = u ∘ ϕ.
Since v is a harmonic function in Ω and v|_{∂Ω} = w|_{∂Ω}, by the Dirichlet principle,
∫_Ω |∇v(x)|² dx ≤ ∫_{Ω \ {0}} |∇w(x)|² dx.
It remains to prove that
∫_{Ω \ {0}} |∇w(x)|² dx ≤ C ∫_{Ω′} |∇u(y)|² dy
with a constant C independent of u. Let us rewrite this inequality in the form
(41) ∫_{B_r \ {0}} |∇w|² dx + ∫_{Ω \ B_r} |∇w|² dx ≤ C (∫_{R^n \ B̄_{1/r}} |∇u|² dy + ∫_{Ω′ ∩ B_{1/r}} |∇u|² dy).
By (39), the first integral on the left-hand side of (41) is not more than the first integral on the right-hand side times r^{-2(n-2)}. It remains to prove that
∫_{Ω \ B_r} |∇w|² dx ≤ C ∫_{Ω′ ∩ B_{1/r}} |∇u|² dy.
Since Ω̄ \ B_r and Ω̄′ ∩ B̄_{1/r} are compact domains and ϕ maps the first of them onto the second diffeomorphically, the latter inequality must hold with some constant C independent of u. Finally, let us discuss the case of an arbitrary m in Theorem 4.2.
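Before that, the determinant computation for the inversion performed above can be sanity-checked by numerical differentiation (a sketch of ours with n = 3 and an arbitrarily chosen point x):

```python
import numpy as np

# Check det φ'(x) = -|x|^{-2n} for the inversion φ(x) = x/|x|², here with n = 3.
n = 3
x = np.array([0.3, -0.7, 0.5])
phi = lambda z: z / np.dot(z, z)

J = np.empty((n, n))
eps = 1e-6
for j in range(n):
    e = np.zeros(n); e[j] = eps
    J[:, j] = (phi(x + e) - phi(x - e)) / (2 * eps)   # central differences

r2 = np.dot(x, x)
print(np.linalg.det(J), -r2 ** (-n))  # the two values agree
```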
For a rank m symmetric tensor field v, let ∇v be the (not symmetric) tensor field of rank m + 1 defined in Cartesian coordinates by (∇v)_{j i_1...i_m} = ∂v_{i_1...i_m}/∂x^j. Thus, dv is obtained from ∇v by symmetrization. In particular, |dv|² ≤ c|∇v|² with a constant c depending on (m, n) only.
Let again v and u be solutions to the interior and exterior Dirichlet problems respectively with the same boundary value h ∈ C(∂Ω; S^m R^n). Unlike (32), the estimate
(42) ∫_Ω |dv(x)|² dx ≤ C ∫_{Ω′} |∇u(x)|² dx
can be easily proved by the same arguments as in the above-presented proof for m = 0. Indeed, replacing definition (40) with w_{i_1...i_m} = u_{i_1...i_m} ∘ ϕ and repeating the same arguments for every component of u and w, we obtain
(43) ∫_Ω |dv(x)|² dx ≤ c ∫_{Ω \ {0}} |∇w(x)|² dx.
By Proposition 4.1,
(44) ∫_{Ω \ {0}} |∇w(x)|² dx ≤ C ∫_{Ω′} |∇u(x)|² dx.

Inequalities (43) and (44) imply (42).
Comparing (32) with (42), we see that the proof of Theorem 4.2 is reduced to proving the estimate
(45) ∫_{Ω′} |∇u(x)|² dx ≤ C ∫_{Ω′} |du(x)|² dx.
This is the subject of the next section.

5. The Korn inequality. The (second) Korn inequality is of great importance in elasticity theory. Korn's original proof [7] is very complicated and hard to understand. Several other proofs are known under different assumptions on the regularity of a domain [2, 3, 4, 11, 13, 19]. All these papers consider vector fields (i.e., the case of m = 1 in our terminology) on a bounded domain. In [6], the Korn inequality is proved for some unbounded domains. In particular, [6, Theorem 3, Section 3] implies (45) (and hence proves Theorem 4.2) in the case of m = 1. At the same time, some examples of unbounded domains are presented in [6] for which the Korn inequality is not true (the simplest of such examples is the slab {x ∈ R^n | 0 < x_n < 1}).
We need the Korn inequality for an unbounded domain and for an arbitrary m. The following version of the Korn inequality is sufficient for us.
Proposition 5.1. Let Ω ⊂ R^n be a connected unbounded domain such that the boundary ∂Ω is a compact smooth hypersurface in R^n. Assume a tensor field u ∈ C¹(Ω; S^m R^n) to be such that the Dirichlet integral ∫_Ω |du(x)|² dx is finite and
(46) |u(x)| → 0 as |x| → ∞.
Then u ∈ H¹(Ω; S^m R^n) and the estimate
(47) ‖u‖²_{H¹(Ω; S^m R^n)} ≤ C ∫_Ω |du(x)|² dx
holds with a constant C independent of u.
Let us show that Proposition 5.1 implies Theorem 4.2. Given h ∈ C(∂Ω; S^m R^n), let u ∈ C^∞(Ω′; S^m R^n) be the solution to the exterior Dirichlet problem (30)–(31). We can assume without loss of generality that ∫_{Ω′} |du(x)|² dx < ∞, since otherwise there is nothing to prove. The unbounded domain Ω̃ = Ω′ and the field u satisfy the hypotheses of Proposition 5.1. Applying statement (47) to (Ω̃, u), we obtain estimate (45). As has been shown at the end of the previous section, (45) finishes the proof of Theorem 4.2.
Our proof of Proposition 5.1 follows the approach proposed by Duvaut and Lions [1, Chapter III, Section 3] in their proof of the classical Korn inequality. We present a short scheme of the proof referring to [1] for details.
The following statement is the main step in the proof of Proposition 5.1.

Proposition 5.2.
Let Ω ⊂ R^n be an open (either bounded or unbounded) domain such that the boundary ∂Ω is a compact smooth hypersurface in R^n. Fix nonnegative integers k and ℓ satisfying k ≤ ℓ. If a function u ∈ H^{-k}(Ω) is such that
(48) D^α u ∈ H^{-k}(Ω) for every multi-index α with |α| = ℓ,
then u ∈ H^{ℓ-k}(Ω).
In the case of k = ℓ = 1 and of a bounded domain Ω, this statement is proved in [1, Chapter III, Theorem 3.2]. Our proof is a slight modification of the latter proof and is thus omitted. By the way, there is a remark in [1, Chapter III, Section 8] on the validity of the proposition for arbitrary integers k and ℓ.
Corollary 5.3. Let a domain Ω ⊂ R n be as in Proposition 5.2. If a tensor field u ∈ L 2 (Ω; S m R n ) is such that du ∈ L 2 (Ω; S m+1 R n ), then u ∈ H 1 (Ω; S m R n ).
Proof. Set v = du ∈ L²(Ω; S^{m+1} R^n) and consider the equality
(49) du = v
as an equation with unknown u and a given v. Being written in Cartesian coordinates, (49) is a linear system of first order PDEs. The following property of the system is well known: after m-fold differentiation, the system can be solved with respect to all (m + 1)st order derivatives of u. Let us reproduce the explicit formula from [14, Section 2.4]. To this end we introduce the short notation for partial derivatives
u_{i_1...i_m; j_1...j_{m+1}} = ∂^{m+1} u_{i_1...i_m} / (∂x^{j_1} ⋯ ∂x^{j_{m+1}}).
Then (49) implies an explicit formula (50) expressing every derivative u_{i_1...i_m; j_1...j_{m+1}} as a linear combination, with constant coefficients, of mth order derivatives of the components of v, the combination being symmetrized by σ(i_1 … i_m), the symmetrization in the indices (i_1, …, i_m). Therefore the right-hand side of (50) belongs to H^{-m}(Ω). Now, (50) implies the statement: D^α u ∈ H^{-m}(Ω; S^m R^n) for every α satisfying |α| = m + 1. Since u ∈ L²(Ω; S^m R^n) ⊂ H^{-m}(Ω; S^m R^n), Proposition 5.2 (applied with k = m and ℓ = m + 1) gives u ∈ H¹(Ω; S^m R^n).
Now, we repeat arguments from the proof of [1, Chapter III, Theorem 3.1] and obtain the following statement.

Theorem 5.4. Let a domain Ω ⊂ R^n be as in Proposition 5.2. For every u ∈ H¹(Ω; S^m R^n), the estimate
(51) ‖u‖²_{H¹(Ω; S^m R^n)} ≤ C (‖du‖²_{L²(Ω; S^{m+1} R^n)} + ‖u‖²_{L²(Ω; S^m R^n)})
holds with some constant C independent of u.
The second term on the right-hand side of (51) cannot be omitted in the general case. Nevertheless, the term can be omitted under some additional assumptions on the tensor field u. For example, Theorem 3.3 of [1, Chapter III] states that the second term can be omitted if u vanishes on a part of the boundary ∂Ω. Here, we are interested in the case when Ω is an unbounded domain and u decays at infinity. Let us discuss this case in more detail.
A tensor field-distribution w ∈ D′(Ω; S^m R^n) is called a Killing tensor field if dw = 0. As is seen from (50), such a field satisfies in Cartesian coordinates D^α w_{i_1...i_m} = 0 for |α| > m.
Therefore every coordinate w_{i_1...i_m}(x) is a polynomial of degree ≤ m in x. In particular, w ∈ C^∞(Ω; S^m R^n). If Ω is a connected domain, every Killing tensor field on the domain is uniquely determined by the values D^α w(x_0) (|α| ≤ m) at an arbitrary point x_0 ∈ Ω. In particular, the space of rank m Killing tensor fields on a connected domain is finite-dimensional.
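For m = 1 this is the classical description: Killing vector fields of R^n are exactly u(x) = a + Bx with a constant vector a and a skew-symmetric matrix B, so each component is a polynomial of degree ≤ 1. A minimal check of ours:

```python
import numpy as np

# u(x) = a + Bx has constant Jacobian B; its symmetrization, i.e. du with
# (du)_{ij} = (∂u_i/∂x^j + ∂u_j/∂x^i)/2, equals (B + B^T)/2 and vanishes
# exactly when B is skew-symmetric.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
B = A - A.T                 # skew-symmetric: the Jacobian of a Killing field
du = 0.5 * (B + B.T)
print(np.max(np.abs(du)))  # 0.0
```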
Repeating arguments from the proof of [1, Chapter III, Theorem 3.4], we prove the following statement. Alternatively, we can recall the well-known fact that the inequality (51) holds without the last term on the right-hand side if and only if the operator d : H¹(Ω; S^m R^n) → L²(Ω; S^{m+1} R^n) is injective, and apply this fact to this operator considered modulo its null-space, the space of Killing tensor fields.
Theorem 5.5. Let Ω ⊂ R^n be an open (either bounded or unbounded) connected domain such that the boundary ∂Ω is a compact smooth hypersurface in R^n. For every integer m ≥ 0, there exists a constant C such that the following statement is valid. If a symmetric tensor field-distribution u ∈ D′(Ω; S^m R^n) is such that du ∈ L²(Ω; S^{m+1} R^n), then there exists a Killing tensor field w ∈ C^∞(Ω; S^m R^n) such that u − w ∈ H¹(Ω; S^m R^n) and the following inequality holds:
(52) ‖u − w‖²_{H¹(Ω; S^m R^n)} ≤ C ‖du‖²_{L²(Ω; S^{m+1} R^n)}.

Proof of Proposition 5.1. Let Ω and u satisfy the hypotheses of Proposition 5.1. Under assumption (46), the Killing tensor field w on the left-hand side of (52) must be identically equal to zero, since a non-vanishing polynomial cannot tend to zero at infinity. Therefore (52) coincides with (47).