ATTRACTORS FOR A RANDOM EVOLUTION EQUATION WITH INFINITE MEMORY: THEORETICAL RESULTS

Abstract. The long-time behavior of solutions (more precisely, the existence of random pullback attractors) of an integro-differential parabolic equation of diffusion type with memory terms, in particular with terms containing both finite and infinite delays, as well as some kind of randomness, is analyzed in this paper. We impose general assumptions that do not ensure uniqueness of solutions, so the theory of multivalued dynamical systems has to be used. The emphasis is put on the existence of random pullback attractors, which we establish by exploiting techniques from the theory of multivalued non-autonomous/random dynamical systems.

1. Introduction. In our paper [5] we analyzed the long-time behavior of solutions (more precisely, the existence of pullback attractors) of an integro-differential parabolic equation of diffusion type with memory terms, expressed by convolution integrals involving infinite delays and by a forcing term with bounded delay, which represent the past history of one or more variables. In particular, we focused on a non-autonomous reaction-diffusion equation with memory and Dirichlet boundary condition, where x belongs to a bounded domain O ⊂ R^N with smooth boundary, t ∈ R, the functions h and g satisfy suitable assumptions, and γ is given in a standard way as γ(t) = −γ_0 e^{−d_0 t} with d_0, γ_0, r > 0. Instead of analyzing equation (1) directly, we developed in [5] an abstract theory for a more general class of non-autonomous evolution equations which included the previous one as a particular case. Equations of this type are important in many physical phenomena, since such phenomena are assumed to be better described when the model contains terms which take into account the past history of the system (see, e.g., [15] and the references therein). Although in some situations the contribution of the past history may not be relevant enough to significantly affect the long-time dynamics of the problem, in certain models, such as those describing high-viscosity liquids at low temperatures, or the thermomechanical behavior of polymers (see [11], [15] and the references therein), the past history may play a nontrivial role. However, it is also reasonable to think that certain models from the real world are more realistic if some random or stochastic terms are considered in the formulation as well. In this respect, in the paper [2] the asymptotic behavior of a stochastic version of equation (1) (with an additive noise), under conditions ensuring uniqueness for the Cauchy problem, was analyzed.
Namely, the problem is considered with W(t, ·) an appropriate Hilbert-valued Wiener process (see [2]) and with the functions h and g independent of time and delay. The technique used in [2] is based on the well-known Dafermos transformation (see [10]), and the existence of a random attractor is also proved there.
In this paper we aim to consider a more general case in which the terms in the equations do not necessarily ensure uniqueness of solutions (and hence we will have a multivalued dynamical system), they may contain some kind of delay, and randomness can be present as well. In fact, we are interested in studying a random problem which can come from a genuine stochastic model after a conjugation procedure has been performed. In the cases of additive or multiplicative linear noise, the so-called Ornstein-Uhlenbeck (O-U) process plays a key role in this conjugation (see, e.g., [3] for more details). However, we will not perform such a transformation explicitly in this paper. Instead, we will consider a rather general abstract evolution equation containing delay (both bounded and unbounded) and some random terms, which has the special structure obtained when one applies the aforementioned change of variable by means of the O-U process. Furthermore, our objective is to tackle the problem by exploiting techniques from the theory of multivalued non-autonomous/random dynamical systems without using the Dafermos transformation. We intend to apply the results of this work to a stochastic perturbation of equation (1) in a forthcoming paper.
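To make the conjugation idea concrete, the following sketch records the standard O-U transformation in a model situation; the one-dimensional Stratonovich noise and the nonlinearity F are our illustrative assumptions, not the equation treated in this paper.

```latex
% The stationary Ornstein--Uhlenbeck process associated with the Wiener
% shift (\theta_t\omega)(\cdot) = \omega(t+\cdot) - \omega(t) is
\[
  z^*(\omega) = -\int_{-\infty}^{0} e^{s}\,\omega(s)\,ds ,
  \qquad\text{so that } t \mapsto z^*(\theta_t\omega)
  \ \text{solves}\ dz + z\,dt = dW .
\]
% For the Stratonovich problem du = (\Delta u + F(u))\,dt + u \circ dW,
% the change of variable v(t) = e^{-z^*(\theta_t\omega)} u(t) yields
\[
  \frac{dv}{dt} = \Delta v + z^*(\theta_t\omega)\,v
    + e^{-z^*(\theta_t\omega)}\,F\!\bigl(e^{z^*(\theta_t\omega)}\,v\bigr),
\]
% a pathwise random equation with no stochastic integral, which can be
% treated \omega-wise, as is done for the abstract problem below.
```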
It is worth mentioning that the problem in which we are interested (see equation (5)) has a non-autonomous character, which means that for each fixed random parameter we can study a non-autonomous dynamical system and analyze the existence of a pullback attractor. However, when we are interested in the random problem, the relevant concept is that of a random pullback attractor. This requires more technicalities and the analysis of measurability properties, which marks a significant difference between the current paper and [5].
The content of the paper is the following. In Section 2 we recall some basic concepts and results from the theory of pullback and random attractors in the set-valued or multivalued set-up. The description of the problem analyzed in the paper is included in Section 3; in particular, we consider a random evolution equation with delays and very general coefficients. The existence of solutions to our random problem, as well as the existence of absorbing sets for the multivalued non-autonomous/random dynamical system generated by the problem, is proved in Section 4, while the compactness of the dynamical system is proved in Section 5, ensuring the existence of a non-autonomous pullback attractor. However, to prove that this pullback attractor and the multivalued non-autonomous dynamical system are random ones, it is necessary to establish additional measurability properties of both the dynamical system and the attractor. This is carried out in Section 6.
2. General theory of pullback and random attractors. We will recall the general theory of pullback attractors for set-valued non-autonomous and random dynamical systems.
A pair (Ω, θ), where θ = (θ_t)_{t∈R} is a flow on Ω, that is, θ_0 = id_Ω and θ_{t+s} = θ_t ∘ θ_s for all t, s ∈ R, is called a non-autonomous perturbation. Let P := (Ω, F, P) be a probability space. On P we consider a measurable non-autonomous flow θ : R × Ω → Ω, i.e. (t, ω) → θ_t ω is (B(R) ⊗ F, F)-measurable. In addition, P is supposed to be invariant and ergodic with respect to θ, which means that P(θ_t A) = P(A) for every A ∈ F, t ∈ R, and that every θ_t-invariant set has measure zero or one. The quadruple (Ω, F, P, θ), which is the model for a noise, is called a metric dynamical system.
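Written out, the canonical example behind this abstract framework is the Wiener shift, which we record for reference:

```latex
% Flow property of \theta = (\theta_t)_{t\in\mathbb{R}}:
\[
  \theta_0 = \mathrm{id}_\Omega, \qquad
  \theta_{t+s} = \theta_t \circ \theta_s
  \quad \text{for all } t, s \in \mathbb{R}.
\]
% Canonical example: \Omega = C_0(\mathbb{R};\mathbb{R}) (continuous paths
% vanishing at 0) with the Wiener measure \mathbb{P}, and the
% measure-preserving, ergodic Wiener shift
\[
  (\theta_t\omega)(s) = \omega(t+s) - \omega(t), \qquad s, t \in \mathbb{R}.
\]
```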
We also assume that Ω is a Polish space with the metric d Ω and that F is the Borel σ-algebra of Ω.
If we replace, in the definition of a metric dynamical system, the probability space P by its completion P_c := (Ω, F^P, P), where F^P denotes the P-completion of F, the above measurability property is not true in general (see Arnold [1, Appendix A]). But for fixed t ∈ R we have that the mapping θ_t : (Ω, F^P) → (Ω, F^P) is measurable.
Let X = (X, d_X) be a Polish space and let us denote by P(X) (respectively, P_f(X)) the set of all non-empty (respectively, non-empty closed) subsets of the space X. Let D : ω → D(ω) ∈ P(X) be a multivalued mapping describing a family of sets parameterized by ω. Such a family is said to be closed if D(ω) ∈ P_f(X) for any ω ∈ Ω.
Let D : ω → D(ω) ∈ P_f(X) be a multi-valued mapping in X over P. This mapping is called a random set if ω → dist_X(x, D(ω)) is a random variable for every x ∈ X. It is well known that a mapping is a random set if and only if for every open set O in X the inverse image {ω : D(ω) ∩ O ≠ ∅} is measurable, i.e., it belongs to F; see [14, Proposition 2.1.4].
Clearly, all this is also valid if we replace P by P c . It is also evident that if D is a random set with respect to P, then it is also random with respect to P c .
We now introduce non-autonomous and random dynamical systems. A multi-valued map G : R_+ × Ω × X → P(X) is called a multi-valued non-autonomous dynamical system (MNDS) if G(0, ω, ·) = id_X and G(t + s, ω, x) ⊂ G(t, θ_s ω, G(s, ω, x)) for all t, s ∈ R_+, x ∈ X, ω ∈ Ω. It is called a strict MNDS if the last inclusion is in fact an equality.
An MNDS is called a multi-valued random dynamical system (MRDS) if the multi-valued mapping (t, ω, x) → G(t, ω, x) is B(R_+) ⊗ F ⊗ B(X)-measurable. Our first aim is to formulate a general sufficient condition ensuring that an MNDS defines an MRDS. To do that, we introduce an adequate concept of continuity for multivalued mappings: we say that G(t, ω, ·) is upper-semicontinuous at x_0 if for every neighborhood U of the set G(t, ω, x_0) there exists δ > 0 such that if d_X(x_0, y) < δ then G(t, ω, y) ⊂ U. In general, G(t, ω, ·) is called upper-semicontinuous if it is upper-semicontinuous at every x_0 ∈ X. It is not difficult to extend this definition to obtain upper-semicontinuity with respect to all variables.
In order to define the concept of attractor (both pullback and random) we need to recall some definitions.
Let D be a family of multivalued mappings D : ω → D(ω) ∈ P_f(X). We say that a multivalued mapping K is pullback D-attracting if for every D ∈ D, lim_{t→+∞} dist_X(G(t, θ_{−t}ω, D(θ_{−t}ω)), K(ω)) = 0, for all ω ∈ Ω, where by dist_X(A, B) we denote the Hausdorff semi-distance of two non-empty sets A, B, namely dist_X(A, B) = sup_{a∈A} inf_{b∈B} d_X(a, b). The family B : ω → B(ω) is said to be pullback D-absorbing if for every D ∈ D and ω ∈ Ω there exists t̄ = t̄(ω, D) > 0 such that G(t, θ_{−t}ω, D(θ_{−t}ω)) ⊂ B(ω) for all t ≥ t̄. Throughout this work we always consider a particular system of sets as in [17]. Namely, let D be a set of multivalued mappings D : ω → D(ω) ∈ P_f(X) satisfying the inclusion-closed property: if D ∈ D and D′ : ω → D′(ω) ∈ P_f(X) is a multivalued mapping such that D′(ω) ⊂ D(ω) for ω ∈ Ω, then D′ ∈ D. The reason to consider such a system of sets is that then we will have a unique attractor in D.
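The pullback limit above (fix the target time, push the initial time backwards) can be computed explicitly in a scalar toy model. The sketch below is purely illustrative: the equation x' = -x + sin t and all names are our choices, not objects from this paper. It evaluates the solution at time 0 started at time -T and shows convergence, as T grows, to the value of the complete bounded trajectory x*(t) = (sin t - cos t)/2, independently of the initial value.

```python
import numpy as np

def pullback_solution(x0, T, n=20001):
    """Value at time 0 of the solution of x' = -x + sin(t) with x(-T) = x0,
    computed from the variation-of-constants formula
        x(0) = x0 * exp(-T) + integral_{-T}^{0} e^{s} sin(s) ds,
    the integral being evaluated by the trapezoidal rule."""
    s = np.linspace(-T, 0.0, n)
    f = np.exp(s) * np.sin(s)
    integral = np.sum((f[:-1] + f[1:]) * 0.5 * (s[1] - s[0]))
    return x0 * np.exp(-T) + integral

# Pulling the initial time back: the value at t = 0 approaches
# x*(0) = -1/2 regardless of x0, which is pullback attraction.
for T in (1.0, 5.0, 20.0):
    print(T, pullback_solution(10.0, T))
```

Note that the convergence is in the pullback sense: the observation time stays fixed at 0 while the starting time -T recedes.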
An example of a family satisfying this property is the so-called family of tempered sets. Let us consider the system D given by the multi-valued mappings D : ω → D(ω) ∈ P_f(X) with D(ω) ⊂ B_X(0, ρ(ω)), the closed ball with center zero and radius ρ(ω), where ρ is supposed to have subexponential growth (ρ is tempered), i.e. lim_{t→±∞} log⁺ρ(θ_t ω)/t = 0 for ω ∈ Ω.
D is called the family of subexponentially growing (or tempered) multi-functions. It is obvious that this family is inclusion closed.
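The subexponential-growth condition can be sanity-checked numerically. The sample radii below are our illustrative choices: a polynomially growing radius is tempered, while a genuinely exponential one is not.

```python
import numpy as np

def log_plus(x):
    """log+ = max(log, 0), implemented by clipping the argument at 1."""
    return np.log(np.maximum(x, 1.0))

# Polynomially growing radius rho(theta_t omega) ~ (1 + t)^2: tempered,
# since log+ rho(theta_t omega) / t -> 0 as t -> infinity.
t = np.array([1e2, 1e3, 1e4, 1e5])
ratio_poly = log_plus((1.0 + t) ** 2) / t

# Exponentially growing radius rho ~ e^{0.1 t}: not tempered, since the
# ratio stabilizes at 0.1 instead of tending to 0.
t_exp = np.array([100.0, 300.0, 600.0])
ratio_exp = log_plus(np.exp(0.1 * t_exp)) / t_exp
```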
Let us give the definition of global random pullback D-attractor.
Definition 2.3. The family A ∈ D is said to be a global pullback D-attractor for the MNDS G if it satisfies: (i) A(ω) is compact for every ω ∈ Ω; (ii) A is pullback D-attracting; (iii) A is negatively invariant, that is, A(θ_t ω) ⊂ G(t, ω, A(ω)) for all t ≥ 0, ω ∈ Ω. A is said to be a strict global pullback D-attractor if the invariance property in the third item is strict, i.e., A(θ_t ω) = G(t, ω, A(ω)).
If, moreover, G is an MRDS and A is a random set with respect to P_c, then A is called a random global pullback D-attractor. Now we can formulate the following theorem, proved in [4].
Theorem 2.4. Suppose that the MNDS G(t, ω, ·) is upper-semicontinuous for t ≥ 0 and ω ∈ Ω. Let B ∈ D be a multi-valued mapping such that the MNDS is D-asymptotically compact with respect to B, i.e., for every sequence t_n → +∞ and ω ∈ Ω, every sequence y_n ∈ G(t_n, θ_{−t_n}ω, B(θ_{−t_n}ω)) is relatively compact in X. In addition, suppose that B is pullback D-absorbing. Then the set A given by A(ω) = ⋂_{s≥0} cl_X ⋃_{t≥s} G(t, θ_{−t}ω, B(θ_{−t}ω)) (3) is a pullback D-attractor. Furthermore, A is the unique element of D with these properties. In addition, if G is a strict MNDS, then A is strictly invariant. Moreover, assume that G is an MRDS, that ω → G(t, ω, B(ω)) is a random set with respect to F^P (respectively F) for t ≥ 0, and that G(t, ω, B(ω)) is closed for all t ≥ 0 and ω ∈ Ω. Then the set A defined by (3) is a random set with respect to P_c (respectively P).

3. Setting of the problem. Let O ⊂ R^N be an open bounded set with C^∞-smooth boundary. Denote, as usual, H = L²(O) and V = H¹_0(O). The norms of these spaces are denoted by ‖·‖ and ‖·‖_V, respectively, and the scalar products by (·, ·) and ((·, ·)). We shall use ⟨·, ·⟩ for the pairing between the space V' (the dual space of V) and V, and ⟨·, ·⟩_{q,p} for the pairing between L^q(O) and L^p(O), where 1/p + 1/q = 1 with p ≥ 2.
Let A be a positive symmetric operator on H with compact inverse. Then the eigenfunctions of A, denoted by {w_i}_{i∈N}, form a complete orthonormal system in H. By a bootstrap argument these elements are sufficiently smooth, in particular they belong to L^p(O). The associated eigenvalues, denoted by 0 < λ_1 ≤ λ_2 ≤ ··· → ∞, have finite multiplicity. We consider the space defined in terms of the Fourier coefficients û_i = (u, w_i). In the sequel, A will be −∆, where ∆ is the Laplacian operator endowed with homogeneous Dirichlet boundary conditions. Hence D(A^{1/2}) = V and A : V → V' is linear and continuous. Hence, for u ∈ V it is well known that ⟨Au, u⟩ = ‖u‖²_V ≥ λ_1 ‖u‖².
Next we define the different phase spaces that will play an important role in our investigations. We consider the space L²(−∞, r; V), r ∈ R, of square integrable functions with values in V with respect to the measure e^{λ_1 s} Leb, where Leb is the standard Lebesgue measure. The space L²(−∞, r; V) is equipped with the norm ‖u‖²_{L²(−∞,r;V)} = ∫_{−∞}^r e^{λ_1 s} ‖u(s)‖²_V ds. This Banach space is separable, see e.g. [13]. From now on, we denote L²_V := L²(−∞, 0; V).
In this paper we are interested in studying the existence of solutions, as well as the long-time behavior, of the random delay system (5), where T > 0 and the initial condition is ψ ∈ H. That u agrees with the initial condition ψ in the system (5) is made precise in (6). By u_t(·) we denote as usual the segment function on H defined by u_t(s) = u(t + s), for t ∈ R_+ and s ∈ [−h, 0].
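The spectral facts used here (a positive discrete spectrum of finite multiplicity tending to infinity, with λ_1 governing the Poincaré inequality) can be observed numerically. A minimal sketch, assuming the one-dimensional model case O = (0, 1), where the Dirichlet Laplacian has exact eigenvalues (kπ)²; the grid size is our choice.

```python
import numpy as np

n = 400                        # interior grid points on (0, 1)
h = 1.0 / (n + 1)
# Tridiagonal finite-difference approximation of A = -d^2/dx^2 with
# homogeneous Dirichlet boundary conditions.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
eig = np.linalg.eigvalsh(A)    # eigenvalues in ascending order
# Exact values: lambda_k = (k*pi)^2, k = 1, 2, ...; all positive,
# simple in 1D, and tending to infinity.
print(eig[:3], np.pi**2, (2 * np.pi)**2)
```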
It is worth mentioning that equation (5) includes, as a particular case, the random evolution equation obtained from equation (2) if we perform a suitable change of variable involving an (O-U) process (see [2] for more details on this Hilbert valued (O-U)).
The assumptions on the different functions appearing in (5) are the following.
are continuous in their respective spaces (ω is fixed for g and f), and are jointly measurable with respect to their arguments; see Castaing and Valadier [8], Chapter 3. We observe also that, since Ω is separable, for the function h condition (7) implies (8).
Moreover, assume the following inequalities, where η, ν, K > 0 and c_i : Ω → R_+ are measurable with respect to F. The functions t → c_1(θ_t ω), t → c_3²(θ_t ω) are assumed to be integrable on any finite interval and subexponentially growing (that is, tempered), whereas t → c_2(θ_t ω), t → c_5(θ_t ω) are integrable on any finite interval. For c_4 we suppose that E c_4² < ∞ (and then t → c_4²(θ_t ω) is integrable on any finite interval by the ergodic theorem). On the other hand, as a consequence of the Young inequality, we have (11), where µ > 0 can be chosen arbitrarily and C_µ denotes a positive constant. In particular, we take µ such that (12) holds. Note that from (12) it is straightforward that (13) follows. For f and t > 0 we also assume the following inequalities for u, v ∈ L²(−∞, t; V), ω, ω′ ∈ Ω, where d, b < 1 and c_6 : Ω → R_+, c_7 : Ω × Ω → R_+ are measurable with respect to F and F ⊗ F, respectively, and the functions t → c_6(θ_t ω), t → c_7(θ_t ω, θ_t ω′) are integrable on any finite interval. Also, t → c_6(θ_t ω) is subexponentially growing. Moreover, for any t ∈ R, ε > 0 we assume condition (16). Additionally, we assume that the maps appearing in (19) are continuous for any fixed t ∈ R, and that for every ω_0 ∈ Ω, t ∈ R there exist a neighbourhood U of ω_0 and a constant C(t, ω_0) such that (20) holds, where j = 3, 4 and i = 1, 2, 5, 6. It is obvious that for i = 1, 5 the property (19) implies condition (20).
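For completeness, the parametrized form of the Young inequality referred to above reads as follows; the constant C_µ written here is one admissible choice.

```latex
\[
  ab \;\le\; \mu\,a^{p} + C_{\mu}\,b^{q},
  \qquad a, b \ge 0,\quad \mu > 0,\quad \frac1p + \frac1q = 1,
\]
% with, for instance,
\[
  C_{\mu} \;=\; \frac{1}{q}\,(\mu p)^{-q/p},
\]
% obtained from the classical Young inequality ab \le a^p/p + b^q/q
% applied to the pair \bigl((\mu p)^{1/p} a,\ (\mu p)^{-1/p} b\bigr).
```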
together with a lower bound of lim inf type (conditions (21)-(23)). We would like to point out that, compared with the non-autonomous case [5], we have introduced the new conditions (18)-(23).
4. Existence of solutions and absorbing sets.
Definition 4.1. A function u is called a weak solution of (5) if it satisfies the variational identity (24) and u agrees with the initial condition ψ according to (6).
From this definition and the above conditions, it follows that du/dt ∈ L²(0, T; V') + L^q(0, T; L^q(O)). Then, see [18], the function t → ‖u(t)‖² is absolutely continuous on [0, T] and d/dt ‖u(t)‖² = 2⟨du/dt, u⟩ for a.a. t, where ⟨·, ·⟩ denotes the pairing between V ∩ L^p(O) and its dual space V' + L^q(O). Therefore, (24) is equivalent to its integrated energy form. In what follows we shall obtain some a priori estimates in the spaces C_h and L²_V that will be very useful when proving the existence of a weak solution by using Galerkin approximations, as well as the existence of an absorbing set.
Lemma 4.2. Every weak solution of (5) satisfies, for t ∈ [h, T], the estimate (26), where c(ω) is given by (27).
Remark 1. If we replace ‖ψ(0)‖² by ‖ψ‖²_{C_h} in the right-hand side of (26), then the inequality is true for all t ∈ [0, T].
Proof. Using (4), (9), (11) and (13) we derive the energy equality (28). Hence, by using Gronwall's lemma we obtain a first estimate of ‖u(t)‖². By (16) it follows immediately that the memory terms are controlled as well, and thus we obtain (26), where c(ω) is given by (27). From this expression, for t ≥ h, we obtain a bound for ‖u_t‖_{C_h}, and then, introducing a suitable auxiliary function of v ∈ H, the estimate can be written equivalently as a differential inequality. Then, applying again Gronwall's lemma, we get the conclusion. □
Lemma 4.3. Let B be a bounded set of H and let T > 0, ω_0 ∈ Ω be fixed. Then there exists a neighborhood U of ω_0 such that every weak solution u(·) = u(·, ω, ψ) of (5) with ψ ∈ B and ω ∈ U satisfies the following inequality, where d(ω) = Σ_{i∈{1,3,4,5}} c_i(ω) and C = C(B, T, ω_0) > 0 is a constant.
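For reference, the form of Gronwall's lemma invoked twice in this proof is the standard differential one, for an absolutely continuous y and locally integrable coefficients a, b:

```latex
\[
  y'(t) \le a(t)\,y(t) + b(t) \quad \text{for a.a. } t \in [0, T]
  \;\Longrightarrow\;
  y(t) \le y(0)\,e^{\int_0^t a(s)\,ds}
        + \int_0^t e^{\int_s^t a(r)\,dr}\,b(s)\,ds .
\]
```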
Proof. First of all, note that thanks to (26) and assumption (20) there exists a neighborhood U of ω_0 such that ‖u_t‖_{C_h} ≤ C_1 = C_1(B, T, ω_0) for t ∈ [0, T], ω ∈ U. Hence, arguing as in (28), we obtain a differential inequality, and once again, by Gronwall's lemma, the desired bound. On the other hand, by Lemma 4.2 and (20) we have a uniform bound of u_t for ω ∈ U. Therefore, in virtue of (11) and (15), we obtain the stated inequality. □
For our further purposes we will also need the following technical lemma (Lemma 4.4), whose proof can be found in [5].
The existence of weak solutions is proved by the Galerkin method: we look for approximate solutions of the form u^n(t) = Σ_{j=1}^n γ^n_j(t) w_j, where the coefficients γ^n_j are required to satisfy the following system:
d/dt (u^n(t), w_j) + (Au^n(t), w_j) = (f(θ_t ω, u^n_t) + h(θ_t ω, u^n_t) − g(θ_t ω, u^n(t)), w_j),
u^n(t) = P_n ψ(t), for t ≤ 0,
where γ(t) = (γ_1(t), ..., γ_n(t)) ∈ X_n. In what follows we fix ω. It is easy to see that ψ ∈ H and γ^m → γ in X_n imply ψ^m → ψ in H and ψ^m(0) → ψ(0) in L^p(O). Hence, the above continuity conditions (7) for the maps f, g, h imply that for a.a. t the map γ → F_n(t, γ, ω) is continuous. Also, the measurability condition (8), together with the properties of θ_t, gives the measurability of t → F_n(t, γ, ω) for any γ. On the other hand, by (10)-(12), for every (t_0, γ_0) there exist a neighborhood U_{t_0,γ_0,ω} in R × X_n and a Lebesgue integrable function m(t) such that F_n is dominated by m there. Hence, Theorem 1.1 in [13, p. 36] implies the existence of at least one local solution of (31). For the remainder of the proof, in particular the compactness and convergence arguments, we refer the reader to [5].
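The Galerkin scheme can be illustrated numerically. The sketch below is a toy instance only: a one-dimensional equation u_t = u_xx - u^3 with Dirichlet conditions and no delay, memory or random terms (our simplifications, not the paper's equation (5)), discretized in the eigenfunction basis w_j(x) = sqrt(2) sin(jπx) and integrated by explicit Euler; all parameter choices are ours.

```python
import numpy as np

# Toy spectral Galerkin scheme for u_t = u_xx - u^3 on (0, 1), Dirichlet BC.
# Ansatz: u^n(t) = sum_j gamma_j(t) w_j with w_j(x) = sqrt(2) sin(j pi x).
m = 8
x = np.linspace(0.0, 1.0, 513)
dx = x[1] - x[0]
W = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, m + 1), x))  # (m, nx)
lam = (np.pi * np.arange(1, m + 1)) ** 2     # eigenvalues of A = -d^2/dx^2

def rhs(gamma):
    """Right-hand side of the Galerkin ODE system for the coefficients."""
    u = gamma @ W                            # grid values of u^n
    f = -u ** 3                              # dissipative nonlinearity
    # L^2 projection of f onto each basis function (trapezoidal quadrature)
    proj = np.array([0.5 * dx * np.sum((f * w)[:-1] + (f * w)[1:]) for w in W])
    return -lam * gamma + proj

gamma = np.ones(m)                           # some initial coefficients
e0 = np.sum(gamma ** 2)                      # squared L^2 norm at t = 0
dt, steps = 1e-4, 5000                       # explicit Euler; dt * max(lam) < 2
for _ in range(steps):
    gamma = gamma + dt * rhs(gamma)
eT = np.sum(gamma ** 2)                      # squared L^2 norm at t = 0.5
```

Since the basis is orthonormal in L²(0, 1), the squared norm of the Galerkin solution is just the squared Euclidean norm of the coefficient vector, and the dissipativity of the equation makes it decay.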
We observe here that every weak solution of (5) can be extended to a globally defined one (i.e., for all t ∈ R) simply by concatenating solutions. Let then S(ψ, ω) be the set of all globally defined solutions of (5) corresponding to ψ ∈ H and ω ∈ Ω. We define the multivalued map G : R_+ × Ω × H → P(H) by G(t, ω, ψ) = {u_t : u ∈ S(ψ, ω)}. The next lemma can be proved in a similar way to [7, Proposition 4] or [4, Lemma 5.1].
Let us prove the existence of an absorbing set in the space H. In what follows, we denote by B_H(0, R) the closed ball in H centered at 0 with radius R. We can assume, without loss of generality, that c(ω) > 0 for all ω ∈ Ω.
In what follows we will frequently use (14).
Here R(ω) denotes the radius of the absorbing family B(ω) = B_H(0, R(ω)) given by (32) in Lemma 4.7, and the random variable c(ω) has been defined in (27).

5. Asymptotic compactness and the pullback attractor. Now, after an auxiliary lemma, we prove that the cocycle is pullback asymptotically compact.
Lemma 5.1. Let ψ_n ∈ B, where B is bounded in H, and assume that ψ_n → ψ weakly in L²_V and ψ_n(0) → ψ(0) weakly in H. Consider any sequence of weak solutions u^n(·) = u^n(·, ω_n, ψ_n) with initial data ψ_n corresponding to ω_n, where ω_n → ω. Then there exist a subsequence u^{n_k} and a function u such that u^{n_k} converges to u in C([τ, T]; H) for all 0 < τ < T. Moreover, u^{n_k} → u weakly in L²(0, T; V) for all T > 0. Also, we have the property (33). Hence, if ψ_n → ψ in L²_V, then u^{n_k}_T → u_T in L²_V for any T > 0. If, moreover, ψ_n → ψ in C_h, then u^{n_k} → u in C([−h, T]; H) for all T > 0, and u(·) = u(·, ω, ψ) is a solution of (5) corresponding to the initial data ψ and the parameter ω.
Further, since u : [0, T] → H is continuous, we will obtain that u^n → u in C([τ, T], H) for any 0 < τ < T if we check that u^n(t_n) → u(t_0) strongly in H whenever t_n → t_0 with t_n, t_0 ∈ [τ, T]. As ‖u(t_0)‖ ≤ lim inf ‖u^n(t_n)‖, for this aim it is enough to obtain that lim sup ‖u^n(t_n)‖ ≤ ‖u(t_0)‖. (35) In view of Lemma 4.3 there exists a neighborhood U of ω such that for any ω_n ∈ U the corresponding continuous energy functions are non-increasing in [0, T]. Passing to the limit, we obtain that u(·) is a solution of the limit problem. We note that h(θ_s ω_n, u^n_s) → ζ_h weakly in L²(0, T; H), and assumption (19) implies that ζ_h satisfies the analogous bound, where we have used that ‖u^n_t‖_{C_h} ≤ C_1 = C_1(B, T, ω) for all n and t ∈ [0, T]. Then, repeating the same steps as in Lemma 4.3, we obtain that the continuous function J is also non-increasing in [0, T]. Moreover, u^n → u strongly in L²(0, T; H), and (19) implies, passing to a subsequence, that J_n(t) → J(t) for a.a. t ∈ (0, T). Therefore, using Lemma 4.4, we get (35).
Finally, we apply a diagonal argument to prove that the result is valid on an arbitrary interval [τ, T] with 0 < τ < T.
If, moreover, ψ_n → ψ in C_h, then arguing as before one can check that u^n → u in C([−h, T]; H). Hence, it follows from (7) that h(θ_· ω, u_·) = ζ_h, and then u is a solution of (5) corresponding to the initial data ψ.
Finally, we shall obtain (33), which implies that if ψ_n → ψ in L²_V, then u^n_T → u_T in L²_V for any T > 0. Indeed, the difference v^n = u^n − u satisfies an energy inequality. Applying Gronwall's lemma and taking into account assumption (17), it remains to handle the term ∫_0^T e^{−λ_1(T−s)} ⟨g(θ_s ω_n, u^n(s)) − g(θ_s ω, u(s)), v^n(s)⟩_{q,p} ds.
Using ψ_n(0) → ψ(0) in H, condition (18) and (34), we finally arrive at the desired property (33). □
As a consequence of Lemmas 4.6 and 5.1 we obtain the following result.
Corollary 1. The map G has compact values. The map ψ → G (t, ω, ψ) has closed graph in H and is upper semicontinuous for fixed ω ∈ Ω, t ≥ 0.
The next step is to prove the asymptotic compactness.
Lemma 5.2. The MNDS G is pullback D-asymptotically compact.
Proof. Let y_n ∈ G(t_n, θ_{−t_n}ω, D(θ_{−t_n}ω)), where D ∈ D and t_n → +∞. We have to prove that the sequence y_n is precompact in H. Observe that, for fixed T > 0 and n large enough, the cocycle property together with the absorption gives y_n ∈ G(T, θ_{−T}ω, B(θ_{−T}ω)), where B is the absorbing ball. Then y_n ∈ G(T, θ_{−T}ω, ξ^T_n), where ξ^T_n ∈ B(θ_{−T}ω). Let u^n be a sequence of solutions with initial condition ξ^T_n such that u^n_T = y_n. Observe that y_n(s − T) = ξ^T_n(s) for a.a. s ≤ 0. It is also clear that the u^n depend on T, but we omit this for simplicity of notation.
Since B(θ_{−T}ω) is bounded in H, we can assume (up to a subsequence) that ξ^T_n → ξ^T weakly in L²(−∞, T; V). Also, since y_n ∈ B(ω) for n large enough, we have that y_n → y weakly in L²(−∞, T; V), and y(s − T) = ξ^T(s) for a.a. s ≤ 0. Also, as the sequence ξ^T_n(0) is bounded in H, we can suppose without loss of generality that ξ^T_n(0) → ξ^T(0) weakly in H. From the proof of Lemma 5.1 it follows that u^n converges to some function u in the sense of (34). It is also clear from the above convergences that u(s) = y(s − T) for a.a. s ≤ T, and then u_T = y in L²(−∞, 0; H). Lemma 5.1 implies, moreover, that u^n → u in C([τ, T], H) for all 0 < τ < T. (37)
Hence, if we take T > h, then we obtain that y_n = u^n_T converges to y = u_T in C_h, so u_T = y in H.
Finally, we need to prove that, up to a subsequence, y_n → y strongly in L²_V. Using (33) and the fact that ξ^T_n, ξ^T ∈ B(θ_{−T}ω), we obtain an estimate of lim sup_n ‖y_n − y‖_{L²_V} in terms of R(θ_{−T}ω), where R(ω) is the radius of B(ω) given by (32).
Using a diagonal argument one can prove that this inequality is true (passing to a subsequence) for every T > 0. Since B ∈ D, for any ε > 0 there exists T_ε such that the corresponding term is smaller than ε. Thus, lim_{n→+∞} ‖y_n − y‖_{L²_V} = 0. □
As a consequence of Lemmas 4.6, 4.7, 5.2, Corollary 1 and Theorem 2.4 we can ensure the existence of a pullback attractor for our problem (Theorem 5.3). We remark that for the existence of the (non-random) pullback attractor it would be enough to assume conditions (7), (17), (21), (22) and (23) just for ω fixed (see the non-autonomous case in [5]); in such a case Lemma 5.1 would be correct for ω fixed. These conditions, in their stated form, are needed in the next section in order to prove that the pullback attractor is measurable.
6. The random attractor. We need to prove now that the MNDS G is in fact an MRDS, and that the pullback D-attractor is a random attractor.
Lemma 6.1. The map (t, ω, ψ) → G(t, ω, ψ) is B(R_+) ⊗ F ⊗ B(H)-measurable; hence G is an MRDS.
Proof. It is enough to check that (t, ω, ψ) → G(t, ω, ψ) is upper semicontinuous (see [4, Lemma 2.5]), and this is true if we prove that for any sequence y_n ∈ G(t_n, ω_n, ψ_n) such that t_n → t, ω_n → ω, ψ_n → ψ, there exists a subsequence y_{n_k} converging in H to some y ∈ G(t, ω, ψ).
Take a sequence of solutions u^n(·) = u^n(·, ω_n, ψ_n) satisfying y_n = u^n_{t_n}. Let t < T. In view of Lemma 5.1 there exists a subsequence u^{n_k}(·) such that (38) holds, where u(·) = u(·, ω, ψ) is a solution with the initial data ψ. Therefore, y_n = u^n_{t_n} → u_t = y ∈ G(t, ω, ψ) in C_h. It remains to prove that y_{n_k} → y in L²_V. Let t_n = t + τ_n < T, where τ_n → 0. By (38) it is straightforward to check the corresponding convergence. On the other hand, if v ∈ L²(−∞, T; V), then v_τ(·) = v(· + τ) → v(·) in L²(−∞, T − ε; V) as τ → 0 for any ε > 0, see [12]. Hence, choosing ε > 0 such that t < T − ε, we obtain the desired convergence, which concludes the proof. □
We note that G(2h, θ_{−2h}ω, B(θ_{−2h}ω)) ⊂ K(ω) for all ω ∈ Ω, and then for any ω ∈ Ω, D ∈ D, there exists T(ω, D) such that G(t, θ_{−t}ω, D(θ_{−t}ω)) ⊂ K(ω) if t ≥ T(ω, D). Therefore, the family K(ω) is pullback D-absorbing.
Lemma 6.2. Assume that the radius R(ω) of the absorbing family B(ω) defined in Lemma 4.7 is locally bounded, that is, for any ω_0 ∈ Ω there exist a neighborhood U of ω_0 and a constant C(ω_0) such that R(ω) ≤ C(ω_0) for all ω ∈ U. (39) Then for any fixed t ≥ 0 the map ω → G(t, ω, K(ω)) has closed values and is a random set with respect to F^P.
Proof. We shall prove that ω → G(t, ω, K(ω)) has closed graph. Consider a sequence y_n ∈ G(t, ω_n, K(ω_n)), where ω_n → ω and y_n → y in H. Then there are solutions u^n(·, ω_n, x_n) such that y_n = u^n_t and x_n ∈ K(ω_n). Since x_n(s) = y_n(s − t) for a.a. s ≤ 0 and y_n → y in L²_V, we deduce that x_n → x in L²_V, where x(s) = y(s − t) for a.a. s ≤ 0. We will check further that, up to a subsequence, x_n → x in C_h. In view of the definition of the set K(ω), one can choose ω̃_n and x̃_n ∈ G(2h, θ_{−2h}ω̃_n, B(θ_{−2h}ω̃_n)) approximating ω_n, x_n as in (40). There exist ψ_n ∈ B(θ_{−2h}ω̃_n) and solutions v^n(·) = v^n(·, ω̃_n, ψ_n) such that x̃_n = v^n_{2h}. In view of condition (39), ψ_n is bounded in H. Thus, passing to a subsequence, we have that ψ_n → ψ weakly in L²_V and ψ_n(0) → ψ(0) weakly in H. Lemma 5.1 then implies that v^n → v in C([τ, 2h]; H) for any 0 < τ < 2h. Hence, x̃_n = v^n_{2h} → v_{2h} in C_h. This, together with (40), gives x_n → x in C_h, and therefore x_n → x in H. Moreover, from (40) we have x̃_n → x in H and ω̃_n → ω as well, so by the definition of K(ω) it is obvious that x ∈ K(ω). Now, using again Lemma 5.1, we obtain the existence of a solution u(·) = u(·, ω, x) such that y = u_t; thus y ∈ G(t, ω, K(ω)), proving that the graph is closed. □
Lemma 6.4. Assume that the uniform temperedness conditions hold for all ω ∈ U, |t| ≥ t̄, where U is a suitable neighborhood of each ω_0 ∈ Ω and t̄ = t̄(ω_0). Also, assume that for every ω_0 ∈ Ω, t_0 ∈ R there exist a neighbourhood U of ω_0 and a constant D(t_0, ω_0) such that ∫_t^{t+t_0} c_4²(θ_s ω) ds ≤ D, for any ω ∈ U, t ∈ R.
Then K ∈ D.
Proof. First, we establish that the absorbing family B(ω) is locally uniformly tempered: we shall prove that for any ω_0 there exists a neighborhood U_0 of ω_0 such that for any δ > 0, lim_{t→+∞} sup_{ω∈U_0} e^{−δt} R²(θ_{−t}ω) = 0.
The case t → −∞ is treated similarly. Hence, K ∈ D. □
We are ready to prove that the pullback D-attractor A defined in Theorem 5.3 is a random pullback D-attractor.
Theorem 6.5. Assume that conditions (43)-(46) are satisfied. Then A is a random D-attractor.
Proof. From Lemmas 6.4 and 6.1 we know that the family of absorbing sets K belongs to D and that the map (t, ω, ψ) → G(t, ω, ψ) is B(R_+) ⊗ F ⊗ B(H)-measurable. Also, from Lemmas 6.2 and 6.3 we have that for any fixed t ≥ 0 the map ω → G(t, ω, K(ω)) has closed values and is a random set with respect to F^P.
Hence, Theorem 2.4 implies that the attractor A is given by formula (3) with the absorbing family K, and that this set is random with respect to P_c. □