EXTENSION OF THE STRONG LAW OF LARGE NUMBERS FOR CAPACITIES

Abstract. In this paper, using a new notion of exponential independence for random variables under an upper expectation, we establish a strong law of large numbers for capacities. Our limit theorems show that the cluster points of the empirical averages not only lie, with lower probability one, in the interval between the lower expectation and the upper expectation, but also that this interval is the unique smallest interval containing the cluster points of the empirical averages with lower probability one. Furthermore, we show that the cluster points of the empirical averages reach the upper expectation and the lower expectation with upper probability one.

1. Introduction. Capacities/non-additive (imprecise) probabilities and non-linear expectations have been studied by a growing number of researchers in different fields, such as mathematical economics, statistics, quantum mechanics and finance (see [6], [10], [13], [15], [16], [20], [23], [27], [29] and the references therein). In the last twenty years, many authors have studied extensions of the classical limit theorems of probability theory to the capacity/non-additive probability framework. Strong laws of large numbers play an important role in capacity/non-additive probability theory. In fact, this strand of research dates back to the early eighties (see [3] and [28]), but it has accelerated considerably in recent years due to its applications in finance.
However, many interesting examples in [24], [26] and the references therein show that the limit behavior of empirical averages becomes very complicated when the probability measure is no longer additive. Further references on laws of large numbers for capacities/non-additive probabilities can be found in Agahi et al. [1], Chen and Chen [4], Epstein and Schneider [14], Peng [19] and Rébillé [21]. Recently, De Cooman and Miranda [11] and Cozman [9] have also studied laws of large numbers based on lower and upper expectations under a forward-factorization assumption, which uses bounded functions as test functions.

ZENGJING CHEN, WEIHUAN HUANG AND PANYU WU
More specifically, given a non-empty set P of finitely additive probability measures on a measurable space (Ω, F), we can define a pair of capacities (V, v), called the upper probability and the lower probability respectively, by
$$V(A) := \sup_{P \in \mathcal{P}} P(A), \qquad v(A) := \inf_{P \in \mathcal{P}} P(A), \qquad A \in \mathcal{F}.$$
Then, for a given random variable X on (Ω, F), two kinds of nonlinear expectations can be defined via the capacities (V, v). One is the pair (C_V, C_v) of Choquet integrals corresponding to the capacities V and v, where the Choquet integral of X with respect to a capacity V is defined by
$$C_{V}[X] := \int_0^{\infty} V(X \ge t)\,dt + \int_{-\infty}^{0} \big(V(X \ge t) - 1\big)\,dt.$$
With a notion of independence relative to a capacity, Maccheroni and Marinacci [17], Marinacci [18], Terán [25] and some of the references therein investigate strong laws of large numbers via Choquet integrals under restrictive assumptions on the sample space Ω and the capacity V or v; for instance, Ω is a compact or Polish topological space, or the capacity v is completely monotone, or at least 2-monotone. They show that the cluster points of the empirical averages lie in the interval $[C_v[X_1], C_V[X_1]]$ with lower probability one.
Notice the following relation between Choquet integrals and upper-lower expectations: for any random variable X,
$$C_{v}[X] \le \underline{E}[X] \le \overline{E}[X] \le C_{V}[X].$$
It is well known that a 2-monotone capacity is a lower probability, while the converse is not true. When v is a 2-monotone capacity, then
$$C_{v}[X] = \underline{E}[X] \quad \text{and} \quad C_{V}[X] = \overline{E}[X].$$
But if v is not a 2-monotone capacity, then there exists a random variable X such that
$$C_{v}[X] < \underline{E}[X].$$
So what we want to investigate is whether the cluster points of the empirical averages lie in the interval $[\underline{E}[X], \overline{E}[X]]$ with lower probability one. Chen [5] investigates this question with Peng's notion of independence (see [20]) for random variables under a sub-linear expectation, without extra restrictive assumptions on Ω or the capacities (V, v). Furthermore, Chen et al. [8] and Zhang [30], [31] study this question by relaxing the assumption of Peng's independence to product independence, negative dependence and extended negative dependence.
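As an informal numerical illustration of the relation between Choquet integrals and upper-lower expectations (the three-point space, the three measures and the payoff below are our own toy choices, not part of the formal development), one can compute both Choquet integrals and both expectations directly:

```python
# Toy check of C_v[X] <= lower expectation <= upper expectation <= C_V[X]
# on a three-point space with three hypothetical probability measures.
omega = [0, 1, 2]
measures = [
    {0: 0.5, 1: 0.3, 2: 0.2},
    {0: 0.2, 1: 0.3, 2: 0.5},
    {0: 0.1, 1: 0.8, 2: 0.1},
]
X = {0: 0.0, 1: 1.0, 2: 4.0}  # a nonnegative payoff

def V(A):  # upper probability: sup_P P(A)
    return max(sum(P[w] for w in A) for P in measures)

def v(A):  # lower probability: inf_P P(A)
    return min(sum(P[w] for w in A) for P in measures)

def choquet(capacity, X):
    # Discrete Choquet integral of a nonnegative X:
    # integral over t of capacity({X >= t}).
    levels = sorted(set(X.values()))
    total, prev = 0.0, 0.0
    for t in levels:
        if t > 0:
            total += (t - prev) * capacity([w for w in omega if X[w] >= t])
            prev = t
    return total

upper_E = max(sum(P[w] * X[w] for w in omega) for P in measures)
lower_E = min(sum(P[w] * X[w] for w in omega) for P in measures)

print(choquet(v, X), lower_E, upper_E, choquet(V, X))
```

For this toy example the computation gives $C_v[X] = 0.8 < 1.1 = \underline{E}[X] \le \overline{E}[X] = 2.3 < 2.4 = C_V[X]$, so both outer inequalities can be strict.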
The aims of this paper are as follows. The first aim is to relax the notion of Peng's independence to that of exponential independence; it is easy to show that both Peng's independence and product independence imply exponential independence. Then, under the assumption of exponential independence, we show that the strong law of large numbers still holds. That is, writing $S_n := \sum_{i=1}^{n} X_i$,
$$v\Big(\underline{E}[X_1] \le \liminf_{n\to\infty}\frac{S_n}{n} \le \limsup_{n\to\infty}\frac{S_n}{n} \le \overline{E}[X_1]\Big) = 1.$$
Moreover,
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} = \overline{E}[X_1]\Big) = 1,$$
and $\overline{E}[X_1]$ is the maximal value such that the above equality holds. Similarly,
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n} = \underline{E}[X_1]\Big) = 1,$$
and $\underline{E}[X_1]$ is the minimal value such that the above equality holds.
The remainder of this paper is organized as follows. In Section 2, we recall some basic concepts and related lemmas which will be used in this paper. We then introduce the definition of exponential independence and give examples in which the upper and lower probabilities are continuous. In Section 3, we state and prove the main results of this paper.
2. Preliminaries.
Definition 2.1. A set function V from F to [0, 1] is called a capacity if it satisfies the following properties (i) and (ii). The set function V is continuous from below or from above if it satisfies property (iii) or (iv), respectively:
(i) V(∅) = 0 and V(Ω) = 1;
(ii) V(A) ≤ V(B) whenever A ⊆ B and A, B ∈ F;
(iii) V(A_n) ↑ V(A) whenever A_n ↑ A, where A_n, A ∈ F;
(iv) V(A_n) ↓ V(A) whenever A_n ↓ A, where A_n, A ∈ F.
If a capacity V is both continuous from below and from above, then V is called a continuous capacity.
Let ∆(Ω, F) denote the set of all finitely additive probabilities on F and ∆_σ(Ω, F) the set of all (σ-additive) probabilities on F. Every non-empty subset P ⊆ ∆(Ω, F) defines an upper probability V and a lower probability v by
$$V(A) := \sup_{P \in \mathcal{P}} P(A), \qquad v(A) := \inf_{P \in \mathcal{P}} P(A), \qquad A \in \mathcal{F}.$$
Definition 2.2 (Definition 3 in [12]). A set D is a polar set if V(D) = 0, and a property holds "quasi-surely" (q.s. for short) if it holds outside a polar set.
Now we define the upper expectation $\overline{E}[\cdot]$ and the lower expectation $\underline{E}[\cdot]$ on (Ω, F) generated by P: for each F-measurable real random variable X such that $E_P[X]$ exists for each P ∈ P,
$$\overline{E}[X] := \sup_{P \in \mathcal{P}} E_P[X], \qquad \underline{E}[X] := \inf_{P \in \mathcal{P}} E_P[X].$$
(Ω, F, P, $\overline{E}$) is called an upper expectation space and (Ω, F, P, $\underline{E}$) is called a lower expectation space.

It is easy to check that $\underline{E}[X] = -\overline{E}[-X]$ and that $\overline{E}[\cdot]$ is a sub-linear expectation (more details can be found in [19] and [20]). In other words, $\overline{E}[\cdot]$ satisfies the properties of monotonicity, constant preserving, positive homogeneity, sub-additivity and translation invariance. Conversely, a sub-linear expectation is an upper expectation (see Lemma 2.4 in [19]).
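These properties are easy to check numerically for the upper envelope $\overline{E}[X] = \sup_{P \in \mathcal{P}} E_P[X]$ over a finite set of measures. The following informal sketch (the two measures and the payoffs are our own toy choices) verifies all five properties on a three-point space:

```python
# Check the sub-linear-expectation properties of upper_E[X] = max_P E_P[X]
# over a finite set of hypothetical measures on {0, 1, 2}.
omega = [0, 1, 2]
measures = [
    {0: 0.5, 1: 0.3, 2: 0.2},
    {0: 0.1, 1: 0.2, 2: 0.7},
]

def upper_E(X):
    return max(sum(P[w] * X[w] for w in omega) for P in measures)

X = {0: -1.0, 1: 2.0, 2: 0.5}
Y = {0: 3.0, 1: -0.5, 2: 1.0}
XplusY = {w: X[w] + Y[w] for w in omega}
lam, c = 2.5, 4.0

# sub-additivity: upper_E[X + Y] <= upper_E[X] + upper_E[Y]
assert upper_E(XplusY) <= upper_E(X) + upper_E(Y) + 1e-12
# positive homogeneity: upper_E[lam * X] = lam * upper_E[X] for lam > 0
assert abs(upper_E({w: lam * X[w] for w in omega}) - lam * upper_E(X)) < 1e-12
# translation invariance: upper_E[X + c] = upper_E[X] + c
assert abs(upper_E({w: X[w] + c for w in omega}) - (upper_E(X) + c)) < 1e-12
# constant preserving: upper_E[c] = c
assert abs(upper_E({w: c for w in omega}) - c) < 1e-12
# monotonicity: Z >= X pointwise implies upper_E[Z] >= upper_E[X]
Z = {w: X[w] + 1.0 for w in omega}
assert upper_E(Z) >= upper_E(X)
print("all sub-linear expectation properties hold on this toy example")
```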
In [19], Peng introduced a concept of independence under a sub-linear expectation $\overline{E}[\cdot]$: $X_{n+1}$ is said to be independent of $(X_1, \cdots, X_n)$ under $\overline{E}[\cdot]$ if for each test function $\psi \in C_{l,Lip}(\mathbb{R}^{n+1})$,
$$\overline{E}\big[\psi(X_1,\dots,X_n,X_{n+1})\big] = \overline{E}\Big[\overline{E}\big[\psi(x_1,\dots,x_n,X_{n+1})\big]\Big|_{(x_1,\dots,x_n)=(X_1,\dots,X_n)}\Big].$$
Here and in the sequel, N denotes the set of all nonnegative integers and N* the set of all positive integers. In general, $C_{l,Lip}(\mathbb{R}^{n+1})$ can be replaced by the smaller space $C_{b,Lip}(\mathbb{R}^{n+1})$, which denotes the space of bounded Lipschitz continuous functions on $\mathbb{R}^{n+1}$. The test functions here are (n+1)-dimensional and may take negative values.
To study the strong law of large numbers, Chen et al. [8] introduced another definition of independence in which the test functions are one-dimensional nonnegative measurable functions. In [8], a random variable $X_{n+1}$ is said to be independent of $(X_1, \cdots, X_n)$ under $\overline{E}[\cdot]$ if for all nonnegative measurable functions $\varphi_i$, $i = 1, \cdots, n+1$,
$$\overline{E}\Big[\prod_{i=1}^{n+1}\varphi_i(X_i)\Big] = \overline{E}\Big[\prod_{i=1}^{n}\varphi_i(X_i)\Big]\,\overline{E}\big[\varphi_{n+1}(X_{n+1})\big].$$
Motivated by the relation between independence and characteristic functions or moment-generating functions in classical probability theory, we introduce a notion of exponential independence which weakens both the assumption of Peng's independence and that of the independence in Chen et al. [8].
Definition 2.3 (Exponential independence). Let $X_1, X_2, \cdots, X_{n+1}$ be random variables on (Ω, F). The random variable $X_{n+1}$ is said to be exponential independent of $(X_1, \cdots, X_n)$ under $\overline{E}[\cdot]$ if for all $\varphi_i(\cdot) \in C_{b,Lip}(\mathbb{R})$, $i = 1, \cdots, n+1$,
$$\overline{E}\Big[\exp\Big(\sum_{i=1}^{n+1}\varphi_i(X_i)\Big)\Big] = \overline{E}\Big[\exp\Big(\sum_{i=1}^{n}\varphi_i(X_i)\Big)\Big]\,\overline{E}\big[e^{\varphi_{n+1}(X_{n+1})}\big], \qquad (1)$$
where $C_{b,Lip}(\mathbb{R})$ denotes the space of all bounded Lipschitz functions on $\mathbb{R}$. $\{X_n\}_{n=1}^{\infty}$ is said to be a sequence of exponential independent random variables if $X_{n+1}$ is exponential independent of $(X_1, \cdots, X_n)$ for all n ∈ N*.
Some random variables are not exponential independent under $\overline{E}[\cdot]$, but they may be negatively exponential dependent in the following sense.
Definition 2.4 (Negatively exponential dependence). If "=" in (1) is replaced by "≤", then $X_{n+1}$ is said to be negatively exponential dependent of $(X_1, \cdots, X_n)$ under $\overline{E}[\cdot]$. $\{X_n\}_{n=1}^{\infty}$ is said to be a sequence of negatively exponential dependent random variables if $X_{n+1}$ is negatively exponential dependent of $(X_1, \cdots, X_n)$ for all n ∈ N*.
Remark 1. Obviously, Definition 2.3 is weaker than the independence in [8]. It is worth noticing that Definition 2.3 is also weaker than independence in Peng's sense and than the extended independence introduced by Zhang in [31], since for each $\varphi(\cdot) \in C_{b,Lip}(\mathbb{R})$ we have $e^{\varphi(\cdot)} \in C_{b,Lip}(\mathbb{R})$.
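One simple source of negatively exponential dependent sequences is a family P of product measures: under each single P the exponential factorization holds with equality, and taking suprema over P turns it into the inequality "≤" of Definition 2.4. The following informal sketch (the two product measures and the test functions are our own toy choices, not from the paper) exhibits a case where the inequality is strict, so the coordinates are negatively exponential dependent but not exponential independent:

```python
import math

# P consists of two product measures on {0,1} x {0,1}:
# X ~ Bernoulli(p), Y ~ Bernoulli(q), independent under each P.
params = [(0.3, 0.6), (0.7, 0.4)]

def E(p, q, f):
    # expectation of f(x, y) under the product measure with marginals p, q
    return sum(
        (p if x else 1 - p) * (q if y else 1 - q) * f(x, y)
        for x in (0, 1) for y in (0, 1)
    )

def upper_E(f):
    return max(E(p, q, f) for p, q in params)

# bounded Lipschitz test functions
def phi1(x):
    return min(x, 0.5)

def phi2(y):
    return 0.3 * y

lhs = upper_E(lambda x, y: math.exp(phi1(x) + phi2(y)))
rhs = upper_E(lambda x, y: math.exp(phi1(x))) * upper_E(lambda x, y: math.exp(phi2(y)))
print(lhs, rhs)  # lhs < rhs strictly for this choice
```

Here the two factors on the right-hand side are maximized by different measures in P, while the left-hand side must use a single measure, so the factorization holds only with "≤".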

Remark 2. If $(X_1, \cdots, X_{n+1})$ are bounded random variables, then $X_{n+1}$ being exponential independent of $(X_1, \cdots, X_n)$ under $\overline{E}[\cdot]$ is equivalent to (1) holding for all $\varphi_i(\cdot) \in C_{Lip}(\mathbb{R})$, $i = 1, \cdots, n+1$, where $C_{Lip}(\mathbb{R})$ denotes the space of all Lipschitz continuous functions on $\mathbb{R}$.
In the sequel, we only consider P ⊆ ∆_σ(Ω, F), since in this case the upper and lower probabilities generated by P have one-sided continuity, as the following lemma shows.
Lemma 2.5 (Lemma 2.1 in [8]). The upper probability V is continuous from above (resp. below) if and only if the lower probability v is continuous from below (resp. above). If P ⊆ ∆_σ(Ω, F), then V is continuous from below and v is continuous from above.
Lemma 2.6. If P ⊆ ∆_σ(Ω, F) is compact in a topology under which the map P ↦ P(A) is continuous for each A ∈ F, then V and v are continuous capacities.
Proof. For any $\{A_n\}_{n=1}^{\infty} \subseteq \mathcal{F}$ with $A_n \downarrow A$, we have that $P(A_n)$ converges to $P(A)$ for all P ∈ ∆_σ(Ω, F). By the compactness of P and Dini's theorem (see 2.66 Theorem in [2]), $P(A_n)$ converges to $P(A)$ uniformly in P ∈ P, that is,
$$\lim_{n\to\infty} V(A_n) = \lim_{n\to\infty}\sup_{P\in\mathcal{P}} P(A_n) = \sup_{P\in\mathcal{P}} P(A) = V(A), \qquad A \in \mathcal{F}.$$
Then V is continuous from above. Therefore V and v are continuous by Lemma 2.5.
The following example shows that some special g-probabilities are continuous capacities. Let (V, v) denote the upper-lower probabilities generated by P. From Lemma 2 in Chen and Kulperger [7], we know that V and v are g-probabilities generated by certain backward stochastic differential equations. Because P is w*-compact (see Theorem 2.1 (d) in [6]), these two g-probabilities V and v are continuous by Lemma 2.6 above.
3. The strong law of large numbers for capacities. In this section we investigate the strong law of large numbers on the upper expectation space (Ω, F, P, $\overline{E}$), where P ⊆ ∆_σ(Ω, F). The following theorem is the main result of this paper.
Theorem 3.1. Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of exponential independent random variables on (Ω, F, P, $\overline{E}$) with $\sup_{i\ge1}\overline{E}[|X_i|^{1+\alpha}] < \infty$ for some constant α > 0, and suppose $\overline{E}[X_i] = \overline{\mu}$ and $\underline{E}[X_i] = \underline{\mu}$ for all i ≥ 1. Set $S_n := \sum_{i=1}^{n} X_i$. Then
(i)
$$v\Big(\underline{\mu} \le \liminf_{n\to\infty}\frac{S_n}{n} \le \limsup_{n\to\infty}\frac{S_n}{n} \le \overline{\mu}\Big) = 1, \qquad (2)$$
that is,
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n} < \underline{\mu} \ \text{or} \ \limsup_{n\to\infty}\frac{S_n}{n} > \overline{\mu}\Big) = 0. \qquad (3)$$
If further v is continuous, then
(ii)
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} = \overline{\mu}\Big) = 1 \qquad (4)$$
and
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n} = \underline{\mu}\Big) = 1; \qquad (5)$$
(iii)
$$\max\Big\{a : V\Big(\omega : a \in C_{\frac{S_n(\omega)}{n}}\Big) > 0\Big\} = \overline{\mu} \quad \text{and} \quad \min\Big\{a : V\Big(\omega : a \in C_{\frac{S_n(\omega)}{n}}\Big) > 0\Big\} = \underline{\mu},$$
where $C_{\frac{S_n(\omega)}{n}}$ denotes the set of all limit points of the sequence $\{\frac{S_n(\omega)}{n}\}_{n=1}^{\infty}$;
(iv) for any a, b ∈ R with $a > \underline{\mu}$ or $b < \overline{\mu}$, we have
$$v\Big(a \le \liminf_{n\to\infty}\frac{S_n}{n} \le \limsup_{n\to\infty}\frac{S_n}{n} \le b\Big) = 0. \qquad (8)$$
The following corollary can easily be obtained from Theorem 3.1.
If v is continuous, then the interval $[\underline{\mu}, \overline{\mu}]$ is the unique smallest interval such that (2) holds. It is the smallest interval in the sense that for any interval $[a, b] \subset \mathbb{R}$ with $b - a < \overline{\mu} - \underline{\mu}$, equality (8) holds. It is the unique such interval in the sense that for any interval $[a, b] \subset \mathbb{R}$ with $b - a = \overline{\mu} - \underline{\mu}$, if $b \ne \overline{\mu}$ and $a \ne \underline{\mu}$, then equality (8) holds.
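The phenomenon behind Theorem 3.1 (ii) and this corollary can be illustrated informally by simulation (the scenario below is our own toy construction, not from the paper): if the data are drawn alternately from Bernoulli laws with means 0.2 and 0.8 in blocks whose lengths grow fast enough for each block to dominate the past, the running averages oscillate and cluster near both endpoints.

```python
import random

random.seed(12345)
mu_lo, mu_hi = 0.2, 0.8  # lower and upper means of the two Bernoulli laws
N = 200_000

# Alternate between the two laws in blocks; each new block is 10x the data
# drawn so far, so the latest block dominates the running average.
xs, use_hi, block = [], False, 100
while len(xs) < N:
    p = mu_hi if use_hi else mu_lo
    xs.extend(1.0 if random.random() < p else 0.0 for _ in range(block))
    use_hi, block = not use_hi, 10 * len(xs)
xs = xs[:N]

s, averages = 0.0, []
for n, x in enumerate(xs, start=1):
    s += x
    averages.append(s / n)

tail = averages[1000:]
print(min(tail), max(tail))  # the running average swings between the two regimes
```

With the factor-10 block growth used here the running average swings between roughly 0.25 and 0.75; faster block growth pushes the cluster values arbitrarily close to the endpoints 0.2 and 0.8.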
In order to prove our main theorem, we first prove the following lemma and a strong law of large numbers under the assumption of negatively exponential dependence.

Lemma 3.2. Given the upper expectation space (Ω, F, P, $\overline{E}$), let $\{X_i\}_{i=1}^{\infty}$ be a sequence of negatively exponential dependent random variables with $\sup_{i\ge1}\overline{E}[|X_i|^{1+\alpha}] < \infty$ for some constant α > 0. Suppose that there exists a constant c > 0 such that
$$|X_i - \overline{E}[X_i]| \le \frac{c\,i}{\ln(1+i)}, \qquad i \ge 1.$$
Then for any m > 1, we have
$$\sup_{n\ge1}\overline{E}\Big[\exp\Big(\frac{m\ln(1+n)}{n}\Big(S_n - \sum_{i=1}^{n}\overline{E}[X_i]\Big)\Big)\Big] < \infty.$$
Proof. Choosing $\varphi_i(x)$ in Definition 2.4 as $\varphi_i(x) = \frac{m\ln(1+n)}{n}\,x$ (which is admissible by Remark 2, since the random variables here are bounded), we can prove this lemma in the same way as Lemma 3.1 in [8]. We omit the details. □

Theorem 3.3. Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of negatively exponential dependent random variables on (Ω, F, P, $\overline{E}$) with $\sup_{i\ge1}\overline{E}[|X_i|^{1+\alpha}] < \infty$ for some constant α > 0, and suppose $\overline{E}[X_i] = \overline{\mu}$ and $\underline{E}[X_i] = \underline{\mu}$ for all i ≥ 1. Then (2) and (3) hold.

Although the idea of proving Theorem 3.3 is similar to that of Theorem 3.1 in [8], we still give the proof, since the weaker independence condition requires some different techniques.

Proof of Theorem 3.3. It is obvious that (3) holds if and only if
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} > \overline{\mu}\Big) = 0 \qquad (9)$$
and
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n} < \underline{\mu}\Big) = 0. \qquad (10)$$
Next we prove (9) in two steps.
Step 1. This step is under the additional assumption that $|X_i - \overline{\mu}| \le \frac{2i}{\ln(1+i)}$ for i ≥ 1, which makes $\{X_i\}_{i=1}^{\infty}$ satisfy the assumptions of Lemma 3.2.

To prove (9), we first show that for any ε > 0,
$$V\Big(\frac{S_n}{n} - \overline{\mu} \ge \varepsilon \ \text{for infinitely many } n\Big) = 0. \qquad (11)$$
Let us choose m > 1/ε. By the Chebyshev inequality on the upper expectation space (see Proposition 2.1 in [8]) together with Lemma 3.2,
$$V\Big(\frac{S_n}{n} - \overline{\mu} \ge \varepsilon\Big) \le e^{-m\varepsilon\ln(1+n)}\,\overline{E}\Big[\exp\Big(\frac{m\ln(1+n)}{n}\big(S_n - n\overline{\mu}\big)\Big)\Big] \le C\,(1+n)^{-m\varepsilon},$$
which is summable in n since mε > 1. By the first Borel-Cantelli lemma (see Lemma 2.2 in [8]), equality (11) holds. Due to the inclusion
$$\Big\{\limsup_{n\to\infty}\frac{S_n}{n} \ge \overline{\mu} + 2\varepsilon\Big\} \subseteq \Big\{\frac{S_n}{n} - \overline{\mu} \ge \varepsilon \ \text{for infinitely many } n\Big\},$$
it follows from the monotonicity of V and (11) that
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} \ge \overline{\mu} + 2\varepsilon\Big) = 0.$$
By Lemma 2.5, V is continuous from below; letting ε ↓ 0, equality (9) holds under the assumption of Step 1.
Step 2. Now let $\{X_i\}_{i=1}^{\infty}$ only satisfy the assumptions of Theorem 3.3. For all i ≥ 1, define the truncated random variables
$$Y_i := \Big(X_i \wedge \frac{i}{\ln(1+i)}\Big) \vee \Big(-\frac{i}{\ln(1+i)}\Big) \quad \text{and} \quad \overline{Y}_i := Y_i - \overline{E}[Y_i].$$
Meanwhile, for each i ≥ 1, it is easy to check that $\overline{E}[\overline{Y}_i] = 0$ and that, for all sufficiently large i, $|\overline{Y}_i| \le \frac{2i}{\ln(1+i)}$.

Then, by the Jensen inequality on the upper expectation space (see Proposition 2.1 in [8]), we have
$$\sup_{i\ge1}\overline{E}\big[|\overline{Y}_i|^{1+\alpha}\big] \le 2^{\alpha}\Big(\sup_{i\ge1}\overline{E}\big[|Y_i|^{1+\alpha}\big] + \sup_{i\ge1}\big|\overline{E}[Y_i]\big|^{1+\alpha}\Big) < \infty.$$

Next we show that, for each n ≥ 1, $\overline{Y}_{n+1}$ is negatively exponential dependent of $(\overline{Y}_1, \cdots, \overline{Y}_n)$ under $\overline{E}[\cdot]$. For each $\varphi_i(\cdot) \in C_{b,Lip}(\mathbb{R})$ and $i = 1, \cdots, n+1$, define
$$\tilde{\varphi}_i(x) := \varphi_i\Big(\Big(x \wedge \frac{i}{\ln(1+i)}\Big) \vee \Big(-\frac{i}{\ln(1+i)}\Big) - \overline{E}[Y_i]\Big).$$
Then $\tilde{\varphi}_i(\cdot) \in C_{b,Lip}(\mathbb{R})$ and $\tilde{\varphi}_i(X_i) = \varphi_i(\overline{Y}_i)$, so the defining inequality of Definition 2.4 for $\{\overline{Y}_i\}_{i=1}^{\infty}$ follows from the negatively exponential dependence of $\{X_i\}_{i=1}^{\infty}$. In other words, $\{\overline{Y}_i\}_{i=1}^{\infty}$ is a sequence of negatively exponential dependent random variables under $\overline{E}[\cdot]$. Consequently, $\{\overline{Y}_i\}_{i=1}^{\infty}$ satisfies the assumptions of Step 1 (with upper mean 0). Set $\overline{S}_n := \sum_{i=1}^{n} \overline{Y}_i$. Since $X_i = \overline{Y}_i + \overline{E}[Y_i] + (X_i - Y_i)$, from the sub-additivity and translation invariance of $\overline{E}[\cdot]$ we obtain
$$\frac{S_n}{n} \le \frac{\overline{S}_n}{n} + \frac{1}{n}\sum_{i=1}^{n}\overline{E}[Y_i] + \frac{1}{n}\sum_{i=1}^{n}|X_i - Y_i|. \qquad (15)$$
Notice that, by Step 1 applied to $\{\overline{Y}_i\}_{i=1}^{\infty}$, we have
$$V\Big(\limsup_{n\to\infty}\frac{\overline{S}_n}{n} > 0\Big) = 0.$$

We deduce from the Hölder and Chebyshev inequalities on the upper expectation space (see Proposition 2.1 in [8]) that
$$\sum_{i=1}^{\infty}\frac{\big|\overline{E}[Y_i] - \overline{\mu}\big|}{i} \le \sum_{i=1}^{\infty}\frac{\overline{E}\big[|X_i - Y_i|\big]}{i} < \infty.$$
Applying the Kronecker lemma (Lemma IV.3.2 in [22]), we have
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\overline{E}[Y_i] = \overline{\mu}. \qquad (16)$$
Now we want to show
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}|X_i - Y_i| = 0 \quad \text{q.s.} \qquad (17)$$
Similarly, by the Kronecker lemma, we only need to prove that $\sum_{i=1}^{\infty}\frac{|X_i - Y_i|}{i} < \infty$ quasi-surely. By the Chebyshev inequality, we have, for i ≥ 1,
$$V(X_i \ne Y_i) \le V\Big(|X_i| \ge \frac{i}{\ln(1+i)}\Big) \le \Big(\frac{\ln(1+i)}{i}\Big)^{1+\alpha}\sup_{j\ge1}\overline{E}\big[|X_j|^{1+\alpha}\big],$$
which is summable in i, so by the first Borel-Cantelli lemma, quasi-surely $X_i = Y_i$ for all but finitely many i. Consequently, (17) holds.
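The Kronecker lemma invoked above states that if $0 < b_n \uparrow \infty$ and $\sum_{n} x_n/b_n$ converges, then $b_n^{-1}\sum_{k=1}^{n} x_k \to 0$. An informal numerical check with $b_n = n$ and a toy sequence of our own choosing:

```python
import math

# Kronecker's lemma sanity check with b_n = n and x_n = (-1)^n * sqrt(n):
# sum x_n / b_n = sum (-1)^n / sqrt(n) converges by the alternating series
# test, although x_n itself is unbounded; the lemma then guarantees
# (1/n) * sum_{k<=n} x_k -> 0.
N = 200_000
partial_sum = 0.0
for n in range(1, N + 1):
    partial_sum += (-1) ** n * math.sqrt(n)
print(abs(partial_sum) / N)  # tends to 0 as N grows
```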

Taking lim sup as n → ∞ on both sides of (15) and using (16) and (17), we conclude that equality (9) holds, that is,
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} > \overline{\mu}\Big) = 0.$$
Applying the same argument to $\{-X_i\}_{i=1}^{\infty}$, equality (10) holds as well, that is,
$$V\Big(\liminf_{n\to\infty}\frac{S_n}{n} < \underline{\mu}\Big) = 0.$$
Therefore, the proof of Theorem 3.3 is completed. □
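The exponential Chebyshev inequality used in Step 1, $V(X \ge a) \le e^{-\lambda a}\,\overline{E}[e^{\lambda X}]$ for λ > 0, follows by applying the classical Markov inequality under each P ∈ P and taking suprema. An informal finite check (the measures and the payoff are our own toy choices):

```python
import math

# Check V(X >= a) <= exp(-lam * a) * upper_E[exp(lam * X)] on a toy
# three-point space with two hypothetical measures.
omega = [0, 1, 2]
measures = [
    {0: 0.6, 1: 0.3, 2: 0.1},
    {0: 0.2, 1: 0.5, 2: 0.3},
]
X = {0: -1.0, 1: 0.5, 2: 2.0}

def V(A):  # upper probability of a set of states
    return max(sum(P[w] for w in A) for P in measures)

def upper_E(f):  # upper expectation of f(X)
    return max(sum(P[w] * f(X[w]) for w in omega) for P in measures)

for a in (0.0, 0.5, 1.0, 2.0):
    for lam in (0.5, 1.0, 2.0):
        lhs = V([w for w in omega if X[w] >= a])
        rhs = math.exp(-lam * a) * upper_E(lambda x: math.exp(lam * x))
        assert lhs <= rhs + 1e-12  # exponential Chebyshev bound
print("exponential Chebyshev bound verified on the toy example")
```

For a = 2 and λ = 2 the bound is nearly tight here (upper probability 0.3 against a bound of about 0.33), which is the regime exploited in Step 1 with the n-dependent choice $\lambda = m\ln(1+n)/n$.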
Before we turn to the proof of Theorem 3.1, we first give a lemma which will be used in the proof.
Proof. By the sub-additivity of $\overline{E}[\cdot]$ and the Hölder inequality on the upper expectation space, the stated estimate follows directly. □
Proof of Theorem 3.1. Part (i) follows directly from Theorem 3.3, since exponential independent random variables are necessarily negatively exponential dependent. Now we prove (ii). If $\underline{\mu} = \overline{\mu}$, equalities (4) and (5) can be deduced from equality (2) directly, so we only consider $\underline{\mu} < \overline{\mu}$. By (i), we know
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} > \overline{\mu}\Big) = 0.$$
If the following equality (18) were to hold,
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} \ge \overline{\mu}\Big) = 1, \qquad (18)$$
then, combining (18) with the preceding equality and the sub-additivity of V, equality (4) holds. Applying the same argument to $\{-X_i\}_{i=1}^{\infty}$, equality (5) also holds. That is to say, in order to prove (ii) we only need to prove equality (18). The proof of equality (18) is divided into two steps.
Step 1. This step proves equality (18) under the additional assumption $|X_i - \overline{\mu}| \le \frac{2i}{\ln(1+i)}$ for i ≥ 1. For any $0 < \varepsilon < \overline{\mu} - \underline{\mu}$, by the exponential independence of $\{X_i\}_{i=1}^{\infty}$ and the Jensen inequality on the upper expectation space, we obtain a lower bound for $V\big(\frac{S_n}{n} - \overline{\mu} \ge -\varepsilon\big)$. If the following inequality (19) were to hold, then
$$\lim_{n\to\infty} V\Big(\frac{S_n}{n} - \overline{\mu} \ge -\varepsilon\Big) = 1.$$
Therefore, it follows from the continuity of V that
$$V\Big(\limsup_{n\to\infty}\frac{S_n}{n} \ge \overline{\mu} - \varepsilon\Big) = 1,$$
and, letting ε ↓ 0, equality (18) holds under the assumptions of Step 1. So to finish Step 1, it only remains to prove (19) under the assumptions of Step 1. Actually, for any δ > 0, by the sub-additivity and monotonicity of $\overline{E}[\cdot]$ and the Jensen inequality, the required lower bound can be verified; that is, (19) holds.
Step 2. This step proves equality (18) when $\{X_i\}_{i=1}^{\infty}$ only satisfies the assumptions of Theorem 3.1.
We use the same truncation method as in Step 2 of the proof of Theorem 3.3. By the sub-additivity and translation invariance of $\overline{E}[\cdot]$, and combining (13) and (14), we obtain a lower bound for $\frac{S_n}{n}$ in terms of the truncated sums. Taking lim sup as n → ∞ on both sides of this inequality and using equalities (16) and (17), equality (18) holds.
Now we turn to the proof of (iii). Firstly, from equality (4), we know
$$\overline{\mu} \in \Big\{a : V\Big(\omega : a \in C_{\frac{S_n(\omega)}{n}}\Big) > 0\Big\}.$$
To prove (iv), note that if $b < \overline{\mu}$, then
$$\Big\{a \le \liminf_{n\to\infty}\frac{S_n}{n} \le \limsup_{n\to\infty}\frac{S_n}{n} \le b\Big\} \subseteq \Big\{\limsup_{n\to\infty}\frac{S_n}{n} < \overline{\mu}\Big\}.$$
Since v is conjugate to V and, by (4), $V\big(\limsup_{n\to\infty}\frac{S_n}{n} \ge \overline{\mu}\big) = 1$, we obtain $v\big(\limsup_{n\to\infty}\frac{S_n}{n} < \overline{\mu}\big) = 0$; hence, by the monotonicity of v, equality (8) holds. The case $a > \underline{\mu}$ can be proved in the same way. The proof of Theorem 3.1 is completed. □