Logarithmic Sobolev and Shannon's inequalities and an application to the uncertainty principle

The uncertainty principle of Heisenberg type can be generalized via the Boltzmann entropy functional. After reviewing the $L^p$ generalization of the logarithmic Sobolev inequality by Del Pino-Dolbeault [6], we introduce a generalized version of Shannon's inequality for the Boltzmann entropy functional, which may be regarded as a counterpart of the logarithmic Sobolev inequality. Obtaining the best possible constants of both inequalities, we connect the two inequalities to show a generalization of the uncertainty principle of the Heisenberg type.

1. Logarithmic Sobolev inequality. We consider the relation among the sharp logarithmic Sobolev inequality, the generalized Shannon inequality, and the uncertainty principle of Heisenberg type.
The logarithmic Sobolev inequality is a version of the Sobolev inequality in the Sobolev spaces. Let $W^{1,p}(\mathbb{R}^n)$ be the Sobolev space defined by $W^{1,p}(\mathbb{R}^n) = \{ f \in L^p(\mathbb{R}^n);\ \nabla f \in L^p(\mathbb{R}^n) \}$, where $1 \le p \le \infty$. When $p = 2$, we use the abbreviated notation $H^1 = H^1(\mathbb{R}^n) = W^{1,2}(\mathbb{R}^n)$. Stam [17] first obtained the logarithmic Sobolev inequality in $H^1(\mathbb{R}^n)$, and later on Gross [7] reconsidered the inequality with the Gaussian measure and showed a relation with the hypercontractivity of the semi-group in probability theory. In particular, it is considered as the infinite dimensional version of the Sobolev inequality for higher dimensions $n \ge 3$:
$$\|f\|_{L^{\frac{2n}{n-2}}(\mathbb{R}^n)} \le S_n \|\nabla f\|_{L^2(\mathbb{R}^n)}, \qquad (1.1)$$
where the best possible constant is
$$S_n = \frac{1}{\sqrt{\pi n(n-2)}} \left( \frac{\Gamma(n)}{\Gamma(n/2)} \right)^{1/n}. \qquad (1.2)$$
Here $\Gamma(\cdot)$ denotes the Gamma function (cf. Rosen [13], Talenti [18]). We first show the $L^2$ based form of the logarithmic Sobolev inequality due to Stam and Gross. The inequality is presented in the following form (see also Weissler [19], Lieb-Loss [10] and Ledoux [9]).

Proposition 1.1 (The logarithmic Sobolev inequality). Let $n \ge 2$. For any $a > 0$ and $f \in H^1(\mathbb{R}^n)$, the following inequality holds:
$$\int_{\mathbb{R}^n} |f(x)|^2 \log \frac{|f(x)|^2}{\|f\|_{L^2}^2}\, dx + n(1 + \log a) \|f\|_{L^2}^2 \le \frac{a^2}{\pi} \|\nabla f\|_{L^2}^2. \qquad (1.3)$$
The equality is attained by $f(x) = c\, e^{-\pi|x - x_0|^2/(2a^2)}$ for $c \ne 0$ and $x_0 \in \mathbb{R}^n$. One can choose $a = e^{-1}$, so that $n(1 + \log a) = 0$; then the inequality does not depend on the dimension and is also valid in the infinite dimensional case. The inequality (1.3) and its proof are deeply related to various important inequalities of Young and Hardy-Littlewood-Sobolev type, as well as to the hypercontractivity of the heat and other semi-groups with the sharp constants (cf. Weissler [19]).
One can optimize the parameter $a > 0$ appearing in Proposition 1.1, and then the following equivalent form of (1.3), obtained by Weissler [19], follows:

Corollary 1.2 (Sharp logarithmic Sobolev inequality). Let $n \ge 2$ and $f \in H^1(\mathbb{R}^n)$. Then the equivalent form of the inequality is
$$\int_{\mathbb{R}^n} |f(x)|^2 \log \frac{|f(x)|^2}{\|f\|_{L^2}^2}\, dx \le \frac{n}{2}\, \|f\|_{L^2}^2 \log \left( \frac{2}{\pi e n}\, \frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2} \right). \qquad (1.4)$$
The constant $\frac{2}{\pi e n}$ in the right hand side is the best possible and it is attained by
$$G_\mu(x) = (2\pi\mu)^{-n/4} e^{-|x - x_0|^2/(4\mu)}$$
for any $\mu > 0$ and $x_0 \in \mathbb{R}^n$.
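The equality case of the sharp logarithmic Sobolev inequality can be checked numerically in one dimension. The script below (an illustration added here, not part of the original argument; the variance value is arbitrary) evaluates both sides of the sharp inequality for the Gaussian extremizer by trapezoidal quadrature.

```python
import numpy as np

# 1-D check of the sharp logarithmic Sobolev inequality
#   \int |f|^2 log|f|^2 dx <= (n/2) log( (2/(pi e n)) ||grad f||_2^2 ),  n = 1,
# with the Gaussian extremizer f(x) = (2 pi mu)^(-1/4) exp(-x^2/(4 mu)).

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mu = 0.7                                     # arbitrary positive variance parameter
x = np.linspace(-30.0, 30.0, 200001)
f = (2.0 * np.pi * mu) ** -0.25 * np.exp(-x**2 / (4.0 * mu))

f2 = f**2                                    # |f|^2 integrates to 1
lhs = trapezoid(f2 * np.log(f2), x)          # entropy side
grad2 = trapezoid(np.gradient(f, x)**2, x)   # ||f'||_2^2 = 1/(4 mu)
rhs = 0.5 * np.log(2.0 / (np.pi * np.e) * grad2)

print(lhs, rhs)   # both equal -(1/2) log(2 pi e mu), up to quadrature error
```

Both sides coincide (with the common value $-\tfrac12\log(2\pi e\mu)$), as the equality statement of Corollary 1.2 predicts.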
Proof. Divide both sides of (1.3) by $\|f\|_{L^2}^2$ and optimize the right hand side with the parameter $a > 0$. Since
$$\frac{d}{da}\left( \frac{a^2}{\pi}\frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2} - n \log a \right) = \frac{2a}{\pi}\frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2} - \frac{n}{a},$$
the minimum of the right hand side is attained at $a^2 = \frac{n\pi}{2}\frac{\|f\|_{L^2}^2}{\|\nabla f\|_{L^2}^2}$, which yields (1.4). To see the optimality, substitute $f = G_\mu$: since $|G_\mu(x)|^2$ is the Gaussian probability density of variance $\mu$, the left hand side of (1.4) equals $-\frac{n}{2}\log(2\pi e\mu)$. On the other hand, $\|\nabla G_\mu\|_{L^2}^2 = \frac{n}{4\mu}$, so the right hand side of (1.4) is $\frac{n}{2}\log\left(\frac{2}{\pi e n}\cdot\frac{n}{4\mu}\right) = -\frac{n}{2}\log(2\pi e\mu)$.
Hence for any $\mu > 0$, $G_\mu(x)$ attains the best possible constant. $\square$
On the other hand, to derive the original inequality (1.3) from (1.4), we divide both sides of (1.4) by $\|f\|_{L^2}^2$ and reinstate a parameter $a > 0$ through the elementary bound $\log t \le bt - 1 - \log b$ $(t, b > 0)$. The above observation implies that the sharp form of the logarithmic Sobolev inequality can be extended into the dimension free version of the inequality (1.3).
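For completeness, the passage from the sharp form (1.4) back to the parametrized form (1.3) can be sketched as follows (a standard argument, using only the concavity of the logarithm):

```latex
% For all t, b > 0 the concavity of the logarithm gives
%   \log t \le b\,t - 1 - \log b ,
% with equality at t = 1/b.  Apply this with
%   t = \frac{2}{\pi e n}\,\frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2},
%   b = e\,a^2 \quad (a > 0),
% in the right hand side of (1.4):
\[
  \frac{n}{2}\log\Bigl(\frac{2}{\pi e n}
      \frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2}\Bigr)
  \le \frac{n}{2}\Bigl( e a^2\cdot\frac{2}{\pi e n}
      \frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2} - 1 - \log(e a^2)\Bigr)
  = \frac{a^2}{\pi}\frac{\|\nabla f\|_{L^2}^2}{\|f\|_{L^2}^2}
    - n(1+\log a)\,,
\]
% and multiplying through by \|f\|_{L^2}^2 recovers (1.3).
```

Optimizing over $a$ reverses the step, so the two forms are indeed equivalent.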
The logarithmic Sobolev inequality holds not only on the Euclidean space with the Lebesgue measure but also with the Gaussian measure, and the two expositions are equivalent (see for instance Lieb-Loss [10]). One generalization connecting to the version with the Gaussian measure is given by the $L^1$ form of the inequality, obtained by substituting $f^{1/2}$ in place of $f$ in (1.4):

Corollary 1.4. Let $n \ge 2$. For any non-negative $f \in L^1(\mathbb{R}^n)$ with $f^{1/2} \in H^1(\mathbb{R}^n)$,
$$\int_{\mathbb{R}^n} f \log \frac{f}{\|f\|_{L^1}}\, dx \le \frac{n}{2}\, \|f\|_{L^1} \log \left( \frac{2}{\pi e n}\, \frac{\|\nabla f^{1/2}\|_{L^2}^2}{\|f\|_{L^1}} \right).$$
The constant in the right hand side is the best possible and it is attained by $|G_\mu|^2$. The resulting inequality can be seen in the following normalized form: for any non-negative $f$ with $f^{1/2} \in H^1(\mathbb{R}^n)$ and $\|f\|_{L^1} = 1$,
$$\int_{\mathbb{R}^n} f \log f\, dx \le \frac{n}{2} \log \left( \frac{2}{\pi e n}\, \|\nabla f^{1/2}\|_{L^2}^2 \right).$$
2. Logarithmic Sobolev inequality in $L^p$. Weissler [19] modified the Stam-Gross logarithmic Sobolev inequality to the $L^p$ based spaces by using the heat evolution semi-group. Del Pino-Dolbeault [6] obtained the inequality with the best possible constant (Ledoux [8] for the case $p = 1$).
Proposition 2.1 (Del Pino-Dolbeault [6], Ledoux [8]). Let $1 \le p < n$ and $1/p + 1/p' = 1$. For any $f \in W^{1,p}(\mathbb{R}^n)$ with $\|f\|_{L^p} = 1$,
$$\int_{\mathbb{R}^n} |f|^p \log |f|^p\, dx \le \frac{n}{p} \log \left( b_{n,p}\, \|\nabla f\|_{L^p}^p \right),$$
where
$$b_{n,p} = \frac{p}{n} \left( \frac{p-1}{e} \right)^{p-1} \pi^{-p/2} \left( \frac{\Gamma(\frac{n}{2}+1)}{\Gamma(n\frac{p-1}{p}+1)} \right)^{p/n}$$
is the best possible. Besides, the best constant is attained by $e^{-\tilde{c}_{n,p}|x|^{p'}}$ up to translation and dilation.
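As a numerical sanity check (added here for illustration), the $L^p$ logarithmic Sobolev inequality $\int |u|^p \log|u|^p\,dx \le \frac{n}{p}\log(b_{n,p}\|\nabla u\|_{L^p}^p)$ for $\|u\|_{L^p}=1$, with the Del Pino-Dolbeault constant written as above, can be verified on the radial extremizer $u = C\,e^{-|x|^{p'}}$ for $n = 2$, $p = 3/2$; all integrals reduce to one-dimensional radial quadratures.

```python
import math
import numpy as np

# Radial check (n = 2, p = 3/2) of the L^p logarithmic Sobolev inequality
#   \int |u|^p log|u|^p dx <= (n/p) log( b_{n,p} ||grad u||_p^p ),  ||u||_p = 1,
# with b_{n,p} = (p/n) ((p-1)/e)^(p-1) pi^(-p/2)
#               * ( Gamma(n/2+1) / Gamma(n(p-1)/p+1) )^(p/n),
# tested on the extremizer u(x) = C exp(-|x|^{p'}).

def trapezoid(y, r):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(r)) / 2.0)

n, p = 2, 1.5
pc = p / (p - 1)                                   # Hoelder conjugate p' = 3
omega = 2 * math.pi**(n / 2) / math.gamma(n / 2)   # |S^{n-1}| = 2*pi for n = 2

r = np.linspace(1e-9, 6.0, 300001)
# normalize C so that ||u||_p = 1, where u(r) = C exp(-r^{p'})
I0 = trapezoid(np.exp(-p * r**pc) * r**(n - 1), r)
Cp = 1.0 / (omega * I0)                            # C^p
C = Cp**(1.0 / p)

# entropy integrand u^p log(u^p), expanded to avoid log-underflow in the tail
up = Cp * np.exp(-p * r**pc)
entropy = omega * trapezoid(up * (p * math.log(C) - p * r**pc) * r**(n - 1), r)

du = C * pc * r**(pc - 1) * np.exp(-r**pc)          # |u'(r)|
grad_p = omega * trapezoid(du**p * r**(n - 1), r)   # ||grad u||_p^p = (n/p) p'^(p-1)

b = (p / n) * ((p - 1) / math.e)**(p - 1) * math.pi**(-p / 2) \
    * (math.gamma(n / 2 + 1) / math.gamma(n * (p - 1) / p + 1))**(p / n)
rhs = (n / p) * math.log(b * grad_p)
print(entropy, rhs)   # equal within quadrature error
```

For $p = 2$ the constant reduces to $b_{n,2} = \frac{2}{\pi e n}$, recovering the sharp constant of Corollary 1.2.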
Let $\dot{W}^{1,p}(\mathbb{R}^n)$ be the homogeneous Sobolev space defined by $\dot{W}^{1,p}(\mathbb{R}^n) = \{ f \in L^1_{loc}(\mathbb{R}^n);\ \nabla f \in L^p(\mathbb{R}^n) \}$. Del Pino-Dolbeault [6] utilize the best constant for the Gagliardo-Nirenberg inequality
$$\|f\|_{L^q(\mathbb{R}^n)} \le C_{n,p,q}\, \|\nabla f\|_{L^p(\mathbb{R}^n)}^{\theta}\, \|f\|_{L^r(\mathbb{R}^n)}^{1-\theta},$$
where $r = \frac{p(q-1)}{p-1}$ and $\theta \in (0,1)$ is determined by the scaling relation $\frac{n}{q} = \theta\left(\frac{n}{p}-1\right) + (1-\theta)\frac{n}{r}$; the constant $C_{n,p,q}$ is the best possible, where $\Gamma(\cdot)$ denotes the Gamma function. The best constant is attained by $w(x) = \left(1 + |x|^{p'}\right)^{-\frac{p-1}{q-p}}$ up to dilation and translation.
For completeness, we give the main part of the proof of Proposition 2.1.

Proof of Proposition 2.1. We only show the case $1 < p < n$; the extremal case $p = 1$ can be treated in a different way (cf. [8]). Taking the logarithm of both sides of the inequality (2.4) and passing to the limit $q \to p$, we obtain (2.6). Indeed, setting $g(\lambda) = \log \|f\|_{\frac{p(\lambda-1)}{p-1}}$ and $h(\lambda) = \log \|f\|_{\lambda}$, the first term of the left hand side of (2.6) is a difference quotient of $g$. Letting $u(\lambda) = \frac{p(\lambda-1)}{p-1}$, so that $g(\lambda) = h(u(\lambda))$, and using
$$\frac{d}{d\lambda} \log \|f\|_{\lambda} = -\frac{1}{\lambda^2} \log \int_{\mathbb{R}^n} |f|^{\lambda}\, dx + \frac{1}{\lambda}\, \frac{\int_{\mathbb{R}^n} |f|^{\lambda} \log |f|\, dx}{\int_{\mathbb{R}^n} |f|^{\lambda}\, dx},$$
we obtain (2.7). Next we consider the second term of the left hand side of (2.6). Since the function $w_q$ in (2.5) attains the best possible constant of (2.4) (see [14]), the second term of the right hand side of (2.8) can be computed explicitly. Recall that $p'$ is the Hölder conjugate exponent of $p$. We split the second term of the right hand side of (2.9) into two parts, III and IV. Since $w_q(x) \to w(x)$ as $q \to p$ pointwise, and accordingly in $L^r$ and $L^q$ by a standard procedure, we see that III $=$ IV in the limit. Since $w$ is explicitly given, the exact values of its norms can be computed, and we conclude the desired inequality. $\square$

The non-normalized version of (2.1)-(2.2) is given by (2.10). The $L^1$ version of the inequality (2.10), obtained by substituting $f^{1/p}$ in place of $f$, reads equivalently with the best possible constant $b_{n,p}$ given in (2.13).
One can obtain a constant independent of the dimension as follows. Applying the Young inequality (2.14) in Lemma 2.5 with free parameters $a > 0$ and $b > 0$, we obtain (2.15). From (2.15), choosing $a$ suitably, one can observe that if $2 \le p$ then $p' \le 2$, and (2.16) follows.

3. Shannon's inequality. We introduce the weighted Lebesgue space $L^p_s(\mathbb{R}^n)$ as $L^p_s(\mathbb{R}^n) = \{ f \in L^p_{loc}(\mathbb{R}^n);\ |x|^s f \in L^p(\mathbb{R}^n) \}$. Shannon [15] (cf. [16]) obtained an inequality between the information entropy and the second moment of random variables. The continuous analogue is well known in the $L^2$ case (cf. Beckner-Pearson [2]). We show the $L^1_b$ generalization of the Shannon inequality obtained in Ogawa-Wakui [12] (cf. Bercher [3] and [4]).
Theorem 3.1 (Shannon). Let $n \ge 1$ and $b > 0$. For any non-negative function $f \in L^1_b(\mathbb{R}^n)$,
$$-\int_{\mathbb{R}^n} \frac{f}{\|f\|_{L^1}} \log \frac{f}{\|f\|_{L^1}}\, dx \le \frac{n}{b} \log \left( c_{n,b} \int_{\mathbb{R}^n} |x - \bar{x}|^b\, \frac{f(x)}{\|f\|_{L^1}}\, dx \right), \qquad (3.1)$$
where $\bar{x}$ is the $b$-th moment center of $f$, i.e., the point attaining $\inf_{y} \int_{\mathbb{R}^n} |x - y|^b f(x)\, dx$ (see (3.3)), and
$$c_{n,b} = \frac{eb}{n} \left( \frac{\pi^{n/2}\, \Gamma(\frac{n}{b}+1)}{\Gamma(\frac{n}{2}+1)} \right)^{b/n} \qquad (3.4)$$
is the best possible. Besides, the best constant is attained by $e^{-\tilde{c}_{n,b}|x|^b}$ up to translation and dilation, where $\tilde{c}_{n,b}$ is given by (2.3).
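The sharpness statement can be illustrated numerically (a check added here, not part of the original text). For $n = b = 1$ the constant above reduces to $c_{1,1} = 2e$, the extremizer is the two-sided exponential (Laplace) density, and the Gaussian gives a strict inequality:

```python
import math
import numpy as np

# 1-D check of Shannon's inequality with b = 1:
#   -\int f log f dx <= (n/b) log( c_{n,b} \int |x|^b f dx ),  ||f||_1 = 1,
# where for n = b = 1 the constant is c_{1,1} = 2e.
# Equality is expected for the Laplace density, strict inequality for the Gaussian.

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

c11 = 2.0 * math.e

# Laplace density with scale s: equality case
s = 1.3
x = np.linspace(-40.0, 40.0, 400001)
lap = np.exp(-np.abs(x) / s) / (2.0 * s)
H_lap = -trapezoid(lap * np.log(lap), x)       # entropy = log(2 e s)
m_lap = trapezoid(np.abs(x) * lap, x)          # first absolute moment = s
print(H_lap, math.log(c11 * m_lap))            # both ~ log(2 e s)

# standard Gaussian density: strict inequality for b = 1
y = np.linspace(-12.0, 12.0, 200001)
g = np.exp(-y**2 / 2.0) / math.sqrt(2.0 * math.pi)
H_g = -trapezoid(g * np.log(g), y)
gap = math.log(c11 * trapezoid(np.abs(y) * g, y)) - H_g
print(gap)                                     # strictly positive
```

The positive gap for the Gaussian reflects that for $b \ne 2$ the Gaussian is no longer the entropy maximizer under a $b$-th moment constraint.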
Proof of Theorem 3.1. A proof for the case $b = 2$ can be seen in [11]. To show (3.1), it suffices to show (3.5) for $f \in L^1_b(\mathbb{R}^n)$ with $\bar{x} = 0$ and $\|f\|_{L^1} = 1$, and setting $\lambda^n = \|f\|_{L^1}$ yields the above inequality (3.1). Let $B_b(x) = e^{-\tilde{c}_{n,b}|x|^b}$ with the constant $\tilde{c}_{n,b}$ chosen so that $\|B_b\|_{L^1} = 1$, and for $\mu > 0$ set $B_b^\mu(x) = \mu^n B_b(\mu x)$, which again has unit mass. By Jensen's inequality,
$$\int_{\mathbb{R}^n} f \log \frac{B_b^\mu}{f}\, dx \le \log \int_{\mathbb{R}^n} B_b^\mu\, dx = 0. \qquad (3.6)$$
Hence
$$-\int_{\mathbb{R}^n} f \log f\, dx \le -\int_{\mathbb{R}^n} f \log B_b^\mu\, dx = -n \log \mu + \tilde{c}_{n,b}\, \mu^b \int_{\mathbb{R}^n} |x|^b f\, dx. \qquad (3.7)$$
Optimizing (3.7) by setting $\mu^b = \frac{n}{b\, \tilde{c}_{n,b} \int |x|^b f\, dx}$, we obtain (3.5). Finally we notice that $B_b(x) = \exp(-\tilde{c}_{n,b}|x|^b)$ attains the best possible constant of the inequality (3.5). To see this, it suffices to show that $B_b$ gives the equality of (3.6); indeed, for $f = B_b$ the Jensen step is an equality exactly when $B_b^\mu = f$, and this yields $\mu_b = 1$. This proves that the inequality (3.5) is sharp. $\square$
Remark 3.2. The original version of the Shannon inequality is stated for discrete random variables. As a corollary of Theorem 3.1, we obtain the following.
Corollary 3.3. Let $n \ge 2$, $b > 0$ and $c_{n,b}$ be defined in (3.4). Then for any non-negative function $f \in L^1_b(\mathbb{R}^n)$, the non-normalized inequality (3.8) holds. The constant $c_{n,b}$ on the right hand side is the best possible and is given by (3.4).
Remark 3.4. The $b$-th moment center appearing in (3.8) is taken for $f$ without the normalization $\|f\|_{L^1} = 1$.

4.1. Heisenberg's uncertainty principle. Heisenberg's uncertainty principle is identified with the inequality
$$\|f\|_{L^2(\mathbb{R}^n)}^2 \le \frac{2}{n}\, \|\, |x - \bar{x}|\, f \|_{L^2(\mathbb{R}^n)}\, \|\nabla f\|_{L^2(\mathbb{R}^n)}. \qquad (4.1)$$
See Beckner [1] for more relevant generalizations. On the other hand, the sharp logarithmic Sobolev inequality combined with the generalized version of the Shannon inequality yields a generalization of the uncertainty principle involving the Boltzmann-Shannon entropy functional. We show the following generalization:

Theorem 4.1. Let $1 < p < n$ and $1/p + 1/p' = 1$. For any non-negative function $f \in L^1_{p'}(\mathbb{R}^n)$ with $f^{1/p} \in W^{1,p}(\mathbb{R}^n)$, it holds that
$$\|f\|_{L^1} \le \frac{p}{n}\, \|\nabla (f^{1/p})\|_{L^p} \left( \int_{\mathbb{R}^n} |x - \bar{x}|^{p'} f(x)\, dx \right)^{1/p'}, \qquad (4.2)$$
where $\bar{x}$ is the $p'$-th moment center of $f$ given by (3.3), and the constants $b_{n,p}$ and $c_{n,p'}$, given in (2.13) and (3.4) respectively, enter through the identity $b_{n,p}^{1/p}\, c_{n,p'}^{1/p'} = p/n$. In particular, $n^{-1}$ is the best possible constant in (4.2) and it is attained by $f(x) = e^{-\tilde{c}_{n,p}|x|^{p'}}$ up to translation and dilation, with $\tilde{c}_{n,p}$ given by (2.3). The inequality (4.1) is a direct consequence of (4.2), obtained by setting $p = p' = 2$, substituting $|f|^2$ for $f$, and noting that $\bar{x}$ gives the minimum for the moment of $f$ and that $\|\nabla |f|\|_{L^2} \le \|\nabla f\|_{L^2}$.
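The classical case (4.1) can be checked numerically in one dimension with the Gaussian extremizer (an illustration added here; the variance is arbitrary):

```python
import numpy as np

# 1-D Heisenberg uncertainty principle
#   ||f||_2^2 <= (2/n) || |x| f ||_2 ||grad f||_2   (n = 1, moment center 0),
# with equality for the Gaussian f(x) = (2 pi s)^(-1/4) exp(-x^2/(4 s)).

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

s = 0.9
x = np.linspace(-30.0, 30.0, 200001)
f = (2.0 * np.pi * s) ** -0.25 * np.exp(-x**2 / (4.0 * s))

norm2 = trapezoid(f**2, x)                          # = 1
moment = np.sqrt(trapezoid((x * f)**2, x))          # || x f ||_2 = sqrt(s)
grad = np.sqrt(trapezoid(np.gradient(f, x)**2, x))  # || f' ||_2 = 1/(2 sqrt(s))
print(norm2, 2.0 * moment * grad)                   # both ~ 1: equality case
```

Note that the product $\|xf\|_{L^2}\|f'\|_{L^2} = \sqrt{s}\cdot\frac{1}{2\sqrt{s}} = \frac12$ is independent of the variance $s$, as the scale invariance of (4.1) requires.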
Let $f \in C_0^\infty(\mathbb{R}^n)$, $f(x) \ge 0$. The inequality (4.2) is known to be obtained by a direct estimate for all $1 < p < \infty$: since $\mathrm{div}\,(x - \bar{x}) = n$, integration by parts and Hölder's inequality yield (4.2). We illustrate a different method using the logarithmic Sobolev inequality and the generalized version of the Shannon inequality in $L^p$.
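For the reader's convenience, the direct estimate via $\mathrm{div}\,(x - \bar{x}) = n$ can be written out as follows (a standard integration-by-parts argument, in the notation of Theorem 4.1):

```latex
% Using div(x - \bar x) = n, integration by parts, the chain rule
%   \nabla f = p\, f^{1/p'} \nabla (f^{1/p}),
% and Hoelder's inequality with exponents p' and p:
\begin{align*}
  n\,\|f\|_{L^1}
  &= \int_{\mathbb{R}^n} \operatorname{div}(x-\bar x)\, f(x)\,dx
   = -\int_{\mathbb{R}^n} (x-\bar x)\cdot \nabla f(x)\,dx \\
  &\le p \int_{\mathbb{R}^n} |x-\bar x|\, f^{1/p'}\,
        \bigl|\nabla (f^{1/p})\bigr|\,dx \\
  &\le p\,\Bigl( \int_{\mathbb{R}^n} |x-\bar x|^{p'} f\,dx \Bigr)^{1/p'}
        \bigl\|\nabla (f^{1/p})\bigr\|_{L^p},
\end{align*}
% which is exactly (4.2).
```

The entropy method below recovers the same bound, but in addition identifies it as the combination of two sharp inequalities.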

6.2. Connection from the Sobolev inequality to the logarithmic Sobolev inequality. The following argument was communicated by one of the referees. An analogous observation is given by Beckner-Pearson [2]. The sharp Sobolev inequality implies the sharp logarithmic Sobolev inequality (1.4) as follows. Let $f \in H^1$ and, for simplicity, assume that $\|f\|_{L^2} = 1$. Then by (1.1) and Jensen's inequality (applied to the probability measure $|f|^2 dx$),
$$\frac{2}{n-2} \int_{\mathbb{R}^n} |f|^2 \log |f|^2\, dx \le \log \int_{\mathbb{R}^n} |f|^{\frac{2n}{n-2}}\, dx \le \frac{n}{n-2} \log \left( S_n^2\, \|\nabla f\|_{L^2}^2 \right), \qquad (6.4)$$
where $S_n$ is the best possible constant for the Sobolev inequality given by (1.2). Plugging $g(\bar{x}) = f(x_1) f(x_2) \cdots f(x_k)$ for $\bar{x} = (x_1, x_2, \cdots, x_k) \in \mathbb{R}^{kn}$ with $x_i \in \mathbb{R}^n$ into (6.4), we see, by noticing $\|g\|_{L^2(\mathbb{R}^{kn})}^2 = \|f\|_{L^2(\mathbb{R}^n)}^{2k} = 1$ and $\|\nabla g\|_{L^2(\mathbb{R}^{kn})}^2 = k\, \|\nabla f\|_{L^2(\mathbb{R}^n)}^2$, that
$$k \int_{\mathbb{R}^n} |f|^2 \log |f|^2\, dx \le \frac{kn}{2} \log \left( S_{kn}^2\, k\, \|\nabla f\|_{L^2}^2 \right).$$
Namely, using the Stirling formula $n! \sim \sqrt{2\pi n}\, e^{-n} n^n$, we obtain $kn\, S_{kn}^2 \to \frac{2}{\pi e}$ as $k \to \infty$, and hence
$$\int_{\mathbb{R}^n} |f|^2 \log |f|^2\, dx \le \frac{n}{2} \log \left( \frac{2}{\pi e n}\, \|\nabla f\|_{L^2}^2 \right),$$
which is the sharp inequality (1.4). The above observation is based on the fact that the logarithmic Sobolev inequality is invariant under taking independent copies of the underlying probabilistic object.
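Assuming the Talenti form of the constant (1.2), the limit used above, $m\, S_m^2 \to \frac{2}{\pi e}$ as the dimension $m \to \infty$, can be checked numerically with log-Gamma evaluations (an illustration added here):

```python
import math

# With S_m^2 = (Gamma(m)/Gamma(m/2))^{2/m} / (pi m (m-2))  (Talenti's constant),
# Stirling's formula gives m * S_m^2 -> 2/(pi e) as m -> infinity.

def mS2(m):
    """m * S_m^2, computed stably via math.lgamma."""
    return math.exp((2.0 / m) * (math.lgamma(m) - math.lgamma(m / 2.0))) \
        / (math.pi * (m - 2.0))

target = 2.0 / (math.pi * math.e)
for m in (10, 1000, 10**6):
    print(m, mS2(m), target)   # decreases monotonically toward 2/(pi e)
```

This is exactly the consistency between the sharp Sobolev constant and the sharp logarithmic Sobolev constant $\frac{2}{\pi e n}$ exploited in the tensorization argument.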