Self-similar solutions of fragmentation equations revisited

We study the large time behaviour of the mass (size) of particles described by the fragmentation equation with homogeneous breakup kernel. We give necessary and sufficient conditions for the convergence of solutions to the unique self-similar solution.


Introduction
Fragmentation is a phenomenon of breaking up particles into a range of smaller sized particles, characteristic of many natural processes, ranging from e.g. polymer degradation [33] in chemistry to breakage of aggregates [1] in biology. A stochastic model of fragmentation was given in [16] for the first time and since then it has been studied extensively with probabilistic methods, see [8,19,30,31] and the references therein. There is also a deterministic approach through transport equations and purely functional-analytic methods [23,22,6,2,15,5,25], which we follow here. We denote by c(t, x) the number density of particles of mass (size) x > 0 at time t > 0. The equation describing the evolution of the density is

∂c(t, x)/∂t = ∫_x^∞ b(x, y)a(y)c(t, y) dy − a(x)c(t, x),  t, x > 0,  (1)

with initial condition

c(0, x) = c_0(x),  x > 0,  (2)

where a(x) is the breakage rate for particles of mass x and b(x, y) is the production rate of particles of size x from those of size y. Both a and b are nonnegative Borel measurable functions. To ensure the conservation of the total mass the function b has to satisfy

∫_0^y x b(x, y) dx = y  and  b(x, y) = 0 for x ≥ y.
If we let γ(y, x) = b(x, y)a(y) then (1) has the same form as in [23], where the reader can find a brief physical interpretation and the derivation of the equation. For a discussion of the model we also refer to [5,Chapter 8].
In this paper we provide necessary and sufficient conditions for the existence of self-similar solutions to the initial value problem (1)-(2) when a(x) = x^α with α > 0 and the kernel b is homogeneous, i.e., there exists a Borel measurable function h : (0, 1) → R_+ such that

b(x, y) = (1/y) h(x/y) for 0 < x < y,  and  ∫_0^1 r h(r) dr = 1.  (3)
This means that the size x of fragments is proportional to the size y of the fragmenting particle, so that it is determined through the distribution of the ratio x/y and does not depend on y. We let E denote the open interval (0, ∞), E = B(0, ∞) be the σ-algebra of Borel subsets of (0, ∞), and L^1 = L^1(E, E, m) be the space of functions integrable with respect to the measure m(dx) = x dx, with the norm ‖f‖ = ∫_0^∞ |f(x)| x dx. If a(x) = x^α with α ≥ 0 then the total mass is conserved [22,3], so that if c_0 ∈ D(m) then c(t, ·) ∈ D(m) for all t > 0, while if α < 0 the solutions lose mass due to the so-called "shattering" phenomenon [21,18,4,7]. For α > 0 we can represent [24,15] the solution c of the fragmentation equation (1) through a rescaled function u, where u is the solution of a partial integro-differential equation (4) involving a function ϕ and a linear operator P on L^1 defined in (5). In particular, if u* is a stationary solution of (4) then

c*(t, x) = (1 + t)^{2/α} u*((1 + t)^{1/α} x),  t ≥ 0, x > 0,  (6)

is said to be a self-similar solution of (1). It should be noted that c behaves as a delta-like function; see Remark 2. Our main result is the following.
Theorem 1. Let a(x) = x^α with α > 0 and let b be as in (3). There exists a self-similar solution c* as in (6) with c*(t, ·) ∈ D(m), t ≥ 0, and every solution c of (1) with initial condition c_0 ∈ D(m) satisfies lim_{t→∞} ‖c(t, ·) − c*(t, ·)‖ = 0, if and only if

∫_0^1 z |log z| h(z) dz < ∞.  (7)

To our knowledge this result is new in the given generality. Particular self-similar solutions were obtained in [26,32] using various methods. For a probabilistic approach to self-similar fragmentation we refer the reader to [8] and the references therein. In the mathematical literature, self-similar solutions and the asymptotic behaviour of the fragmentation equation have been studied with deterministic analytic methods in [15], where the existence of a self-similar solution is proved under the assumption that ∫_0^1 z^k h(z) dz < ∞ for some k < 1, and the convergence is proved when k ≤ α with additional regularity constraints. The proof of our result is based on the representation of solutions of (1) as densities of a Markov process {Y(t)}_{t≥0}; see (12) and (14). We define another Markov process {X(t)}_{t≥0}, corresponding to the growth-fragmentation equation (4), such that X(t) and e^t Y(e^{αt} − 1) have the same distribution for every t ≥ 0.  (8)

This is a piecewise deterministic Markov process with good asymptotic behaviour, as will be shown in Section 3 using results from [11] based on properties of stochastic semigroups. In Lemma 4 we give a relation between condition (7) and the first jump time of the process {X(t)}_{t≥0}. At the end of Section 3 we also define such processes in the case when α < 0. Asymptotic properties of growth-fragmentation type equations have been studied and improved in many works, see e.g. [25] and [10] for recent approaches; however, none of these results allow us to obtain the necessary and sufficient condition (7). The case where α = 0 is treated in [14] and [10].
If α > 0 then it is known [9] that the process {1/Y(t)}_{t≥0} is an example of a so-called self-similar Markov process with increasing sample paths, and it follows from [9, Theorem 1] that t^{1/α} Y(t) converges in distribution to a random variable Y_∞, which is non-degenerate under condition (7). Our approach provides convergence of the densities of (1 + t)^{1/α} Y(t), which implies convergence in distribution, and identifies the distribution of Y_∞ as the stationary distribution of the process {X(t)}_{t≥0}, absolutely continuous with respect to m and with density u*. If (7) does not hold then Y_∞ is equal to 0. At the end of Section 4 we describe how to obtain this type of limit using our approach. Further study of the asymptotic behaviour in this case is provided in [12]. Self-similar Markov processes were also used in [19] to obtain the large time behaviour of solutions of the fragmentation equation when α < 0.

Preliminaries
Let (E, E, m) be a σ-finite measure space and L^1 = L^1(E, E, m) be the space of integrable functions. We denote by D(m) ⊂ L^1 the set of all densities on E, i.e.,

D(m) = {f ∈ L^1 : f ≥ 0, ‖f‖ = 1},

where ‖·‖ is the norm in L^1. A linear operator P : L^1 → L^1 such that P(D(m)) ⊆ D(m) is called stochastic or Markov [20]. It is called substochastic if P is a positive contraction, i.e., Pf ≥ 0 and ‖Pf‖ ≤ ‖f‖ for all f ∈ L^1_+. Let P : E × E → [0, 1] be a stochastic transition kernel, i.e., P(x, ·) is a probability measure for each x ∈ E and the function x → P(x, B) is measurable for each B ∈ E, and let P be a stochastic operator on L^1. If

∫_B Pf(x) m(dx) = ∫_E P(x, B) f(x) m(dx)  (9)

for all B ∈ E, f ∈ D(m), then P is called the transition operator corresponding to P. Suppose that there exists a measurable function p : E × E → [0, ∞] such that

P(x, B) ≥ ∫_B p(x, y) m(dy)

for m-a.e. x ∈ E and every B ∈ E. If p can be chosen in such a way that

∫_E ∫_E p(x, y) m(dy) m(dx) > 0,

then P is called partially integral, and if p is such that ∫_E p(x, y) m(dy) > 0 for m-a.e. x ∈ E then P is called pre-Harris. We can extend a stochastic operator P beyond the space L^1 in the following way. If 0 ≤ f_n ≤ f_{n+1}, f_n ∈ L^1, n ∈ N, then the pointwise almost everywhere limit of f_n exists and will be denoted by sup_n f_n. For f ≥ 0 we define

Pf = sup_n Pf_n for f = sup_n f_n, f_n ∈ L^1_+.

(Note that Pf is independent of the particular approximating sequence f_n and that Pf may be infinite.) Moreover, if P is the transition operator corresponding to P then (9) holds for all measurable nonnegative f. A nonnegative measurable f* is said to be subinvariant for P if Pf* ≤ f*, and invariant if Pf* = f*. Given a family E* of measurable subsets of E, a nonnegative measurable f* is said to be locally integrable with respect to E* if ∫_B f*(x) m(dx) < ∞ for every B ∈ E*, and the operator P is said to be sweeping with respect to E* if lim_{n→∞} ∫_B P^n f(x) m(dx) = 0 for every f ∈ D(m) and B ∈ E*.

Proposition 1 ([27, Corollary 3]).
Suppose that a stochastic operator P is pre-Harris and has no invariant density. If P has a subinvariant f * with f * > 0 a.e. and f * is locally integrable with respect to E * , then the operator P is sweeping with respect to E * .
We conclude this section with the notion of stochastic semigroups and the asymptotic behaviour of such semigroups. A family of stochastic operators {P(t)}_{t≥0} on L^1 is called a stochastic semigroup if P(0) is the identity operator, P(t + s) = P(t)P(s) for all t, s ≥ 0, and for each f ∈ L^1 the mapping t → P(t)f is continuous. A density f* is called invariant for {P(t)}_{t≥0} if P(t)f* = f* for all t ≥ 0. The semigroup {P(t)}_{t≥0} is called asymptotically stable if there exists an invariant density f* such that lim_{t→∞} ‖P(t)f − f*‖ = 0 for every f ∈ D(m), and it is called sweeping with respect to a family E* of measurable sets if lim_{t→∞} ∫_B P(t)f(x) m(dx) = 0 for every f ∈ D(m) and B ∈ E*.

Construction of Markov processes
In the first part of this section we construct two Markov processes such that (8) holds and their distributions are related to equations (1) and (4). The second part contains the proof of Theorem 1. At the end of this section we discuss the case of α < 0.
Let ε_n, θ_n, n ∈ N, be sequences of independent random variables, where the ε_n are exponentially distributed with mean 1 and the θ_n are identically distributed with distribution function H on (0, 1) of the form

H(θ) = ∫_0^θ r h(r) dr,  θ ∈ (0, 1).  (10)

If Y_0 is a positive random variable independent of θ_n, n ∈ N, then the sequence Y_n = θ_n Y_{n−1}, n ≥ 1, defines a discrete-time Markov process with stochastic transition kernel

P(x, B) = ∫_0^1 1_B(θx) dH(θ),  x ∈ E, B ∈ E.  (11)

The transition operator P on L^1 corresponding to P is as in (5).
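As a quick numerical illustration (not part of the paper), the multiplicative chain Y_n = θ_n Y_{n−1} can be simulated once a concrete kernel is fixed. Here we assume, purely for illustration, the kernel h(r) = β r^{β−2}, for which the distribution H has density r h(r) = β r^{β−1}, i.e., θ is Beta(β, 1)-distributed and can be sampled by inverse transform as U^{1/β}; the function names are ours.

```python
import random

def sample_theta(beta, rng):
    """Sample theta with density r*h(r) = beta * r**(beta-1) on (0, 1).

    Assumes the illustrative kernel h(r) = beta * r**(beta-2); then
    theta ~ Beta(beta, 1), sampled by inverse transform U**(1/beta)."""
    return rng.random() ** (1.0 / beta)

def chain(y0, n, beta, rng):
    """Return [Y_0, Y_1, ..., Y_n] for the chain Y_n = theta_n * Y_{n-1}."""
    ys = [y0]
    for _ in range(n):
        ys.append(sample_theta(beta, rng) * ys[-1])
    return ys

rng = random.Random(0)
ys = chain(1.0, 50, beta=2.0, rng=rng)
# Each theta_n lies in (0, 1), so the chain is strictly decreasing
# and converges to 0 almost surely.
assert all(ys[i + 1] < ys[i] for i in range(50))
```

Since E(log θ_1) = −1/β < 0 here, log Y_n drifts to −∞ linearly, so after 50 steps Y_50 is extremely small.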
The process {Y(t)}_{t≥0} is a pure jump Markov process [28, Section 6.1] with jump rate function a(x) = x^α and jump distribution P, so that the process stays at x for a random time, which is called a holding time and has an exponential distribution with mean 1/a(x), and then it jumps according to the probability distribution P(x, ·), independently of how long it stayed at x. Therefore we define the sample path of {Y(t)}_{t≥0} starting at Y(0) = Y_0 by

Y(t) = Y_n for t ∈ [τ_n, τ_{n+1}), n ≥ 0,  (12)

where the τ_n are the jump times, τ_0 = 0, τ_n := σ_n + τ_{n−1}, n ≥ 1, and the σ_n are the holding times defined by

σ_n = ε_n / a(Y_{n−1}), n ≥ 1.  (13)

Note that (see e.g. [28, Section 6.1]) if the probability density function of Y_0 with respect to m is c_0, then the probability density function of Y(t) with respect to m is

c(t, ·), t ≥ 0,  (14)

where c is the solution of equation (1) with initial condition c_0. The sample path of {X(t)}_{t≥0} starting from X(0) = X_0 = x is defined as

X(t) = e^{t−t_n} X_n for t ∈ [t_n, t_{n+1}), n ≥ 0,  (15)

where the t_n are the jump times and X_n = X(t_n) are the post-jump positions,

t_0 = 0, ∫_0^{t_n − t_{n−1}} ϕ(e^r X_{n−1}) dr = ε_n, X_n = θ_n e^{t_n − t_{n−1}} X_{n−1}, n ≥ 1.  (16)

The process {X(t)}_{t≥0}, representing fragmentation with growth, is the minimal piecewise deterministic Markov process [28, Section 6.2] with characteristics (π, ϕ, P), where π_t(x) = e^t x and ϕ(x) = αx^α. This is a particular example of a semiflow with jumps as studied in [11], where the jumps are defined by the mappings T_θ(x) = θx, and the densities of the process are u(t, ·) = P(t)c_0, where {P(t)}_{t≥0} is a stochastic semigroup on L^1 and u(t, x) = P(t)c_0(x) is the solution of (4) with initial condition u(0, x) = c_0(x); see [28, Section 6.2].
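The pure jump construction above can be sketched numerically (our illustration, not the paper's code): the process sits at its current value for an exponential holding time with mean 1/a(x) and then jumps multiplicatively, as in (12). We again assume the illustrative kernel h(r) = β r^{β−2}, so that θ ~ Beta(β, 1).

```python
import random

def simulate_Y(t, y0, alpha, beta, rng):
    """Value at time t of the pure jump process with rate a(x) = x**alpha
    and multiplicative jumps Y_n = theta_n * Y_{n-1} (illustrative kernel:
    theta ~ Beta(beta, 1), sampled as U**(1/beta))."""
    y, clock = y0, 0.0
    while True:
        # holding time sigma_n = eps_n / a(Y_{n-1})
        clock += rng.expovariate(1.0) / y ** alpha
        if clock > t:
            return y
        # jump according to P(y, .): multiply by theta
        y *= rng.random() ** (1.0 / beta)

rng = random.Random(1)
samples = [simulate_Y(5.0, 1.0, alpha=1.0, beta=2.0, rng=rng) for _ in range(2000)]
# Paths started at 1 only decrease, so all samples lie in (0, 1].
assert 0 < min(samples) and max(samples) <= 1.0
```

For α > 0 the jump rate decreases as the particle shrinks, so the holding times grow and the loop always terminates; this is the non-explosive regime, in contrast to the case α < 0 discussed at the end of the section.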
This and (8) imply that the solution c of equation (1) with initial condition c_0 can be represented as

c(t, x) = (1 + t)^{2/α} u(α^{−1} log(1 + t), (1 + t)^{1/α} x),  t ≥ 0, x > 0.  (17)

Consequently, this reduces the proof of Theorem 1 to the study of the asymptotic stability of the semigroup {P(t)}_{t≥0}.
For the proof of Theorem 1 we need the following result [11,Corollary 3.16], which is a refinement of [11, Theorem 1.1].
Theorem 2. Assume that the semigroup {P(t)}_{t≥0} is partially integral and that the chain (X(t_n))_{n≥0} defined in (16) has only one invariant probability measure μ*, absolutely continuous with respect to m. If the density f* = dμ*/dm is strictly positive a.e., then {P(t)}_{t≥0} is asymptotically stable if and only if

∫_E E_x(t_1) μ*(dx) < ∞,  (18)

where t_1 is the first jump time in (15) and E_x denotes the expectation operator with respect to the distribution P_x of the process starting at X(0) = x.
We first show that all assumptions of Theorem 2 are satisfied. We next prove in Lemma 4 that conditions (18) and (7) are equivalent.

Lemma 1. For each t > 0 the operator P(t) is pre-Harris. In particular, the semigroup {P(t)}_{t≥0} is partially integral.
Proof. Since the semigroup {P(t)}_{t≥0} is stochastic, we have m(y : P_y(t_∞ < ∞) > 0) = 0 by [28, Corollary 5.3], where t_∞ = lim_{n→∞} t_n. Observe that if y is such that P_y(t_∞ < ∞) = 0, then the distribution of X(t) starting from y is determined by the jump times, and X(t) = e^{t−t_n} X(t_n) for t ∈ [t_n, t_{n+1}), n ≥ 0. For n = 1 we obtain the contribution of the paths with exactly one jump before time t, where ψ_t(y) = exp(−∫_0^t ϕ(e^r y) dr). The change of variables x = e^t θy leads to a lower bound of the form ∫_B P(t)f(x) m(dx) ≥ ∫_E ∫_B p(x, y) m(dx) f(y) m(dy) for all f ∈ D(m) and all Borel measurable sets B, which implies that P(t) dominates a kernel operator with kernel p. Observe that ∫_0^∞ p(x, y) m(dy) > 0 for m-a.e. x ∈ (0, ∞), which completes the proof.
We will use the following lemma. Its proof is straightforward.
Lemma 2. Assume that ξ and θ are independent random variables, where ξ has a probability density function f_ξ on (0, ∞), while θ has a density f_θ on (0, 1). Then the density f_{ξθ} of the random variable ξθ is given by

f_{ξθ}(x) = ∫_0^1 f_ξ(x/θ) f_θ(θ) (1/θ) dθ,  x > 0,

and it is positive a.e. if f_ξ is positive a.e.
Equality in distribution will be denoted by d =.
Lemma 3. Let ε_n, θ_n, n ∈ N, be sequences of independent random variables, where the ε_n are exponentially distributed with mean 1 and the θ_n are identically distributed with distribution function H as in (10). Then the random variable

X_∞ = (Σ_{k=1}^∞ ε_k ∏_{j=1}^k θ_j^α)^{1/α}  (19)

is finite a.e. and it satisfies

X_∞ d= θ_1 (X_∞^α + ε_1)^{1/α},  (20)

the distribution μ* of X_∞ is absolutely continuous with respect to m with strictly positive density f*, and it is the unique stationary distribution of the Markov chain (X_n)_{n≥0} defined in (16).
Proof. Since θ_j^α ∈ [0, 1], j ≥ 1, we see that the sequence ∏_{j=1}^n θ_j^α, being monotone, converges almost surely. In fact, it converges to zero, by the strong law of large numbers. It is easily seen that

X_n^α = θ_n^α (X_{n−1}^α + ε_n), n ≥ 1,  (21)

and the sequence ξ_n = Σ_{k=1}^n ε_k ∏_{j=1}^k θ_j^α, which has the same distribution as X_n^α when X_0 = 0, converges almost surely to X_∞^α, where X_∞ is as in (19). Therefore, if the Markov chain (X_n)_{n≥0} has a stationary distribution then it has to be the distribution of X_∞.
Let the random variable Z_∞ be defined by

Z_∞ = Σ_{k=1}^∞ ε_k ∏_{j=1}^{k−1} θ_j^α,  (22)

where on the right-hand side of (22) we take the product to be equal to 1 for k = 1. Since the random variables θ_j, ε_j are nonnegative, Z_∞ is a well defined random variable with values in [0, ∞]; in fact, it is finite almost surely [17, Theorem 2.1] for our choice of θ_j, ε_j. Since −∞ ≤ E(log θ_1) < 0 and E(log(max{ε_1, 1})) < ∞, the series in (22) converges almost surely, by [29, Theorem 1.6], and its sum satisfies Z_∞ d= θ_1^α Z_∞ + ε_1 with independent θ_1, Z_∞, ε_1 on the right-hand side. The Markov chain (Z_n)_{n≥0} defined by Z_n = θ_n^α Z_{n−1} + ε_n, n ≥ 1, has a unique stationary distribution, by [29, Theorem 1.5], which is the distribution of Z_∞. Now, observe that, by (21), X_n^α + ε_{n+1} = θ_n^α (X_{n−1}^α + ε_n) + ε_{n+1}, so that the sequence (X_{n−1}^α + ε_n)_{n≥1} is a Markov chain with the same transition law as (Z_n)_{n≥0}. Moreover, the random variable X_∞^α + ε_0, with ε_0 exponentially distributed with mean 1 and independent of X_∞, has the same distribution as the random variable Z_∞. Consequently, the distribution of X_∞ is the unique stationary distribution of the Markov chain (X_n)_{n≥0}.
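The perpetuity (22) and the chain Z_n = θ_n^α Z_{n−1} + ε_n can be checked numerically. The sketch below (our illustration, with the assumed kernel h(r) = β r^{β−2}, so θ ~ Beta(β, 1)) iterates the chain long enough to approximate the stationary law; for this kernel E(θ^α) = β/(β + α), so taking expectations in the fixed-point equation gives E(Z_∞) = 1/(1 − β/(β + α)) = (β + α)/α.

```python
import random

def z_sample(alpha, beta, rng, n=200):
    """One approximate draw from the stationary law of Z_n = theta_n**alpha * Z_{n-1} + eps_n,
    with theta ~ Beta(beta, 1) (illustrative kernel) and eps ~ Exp(1)."""
    z = 0.0
    for _ in range(n):
        theta = rng.random() ** (1.0 / beta)
        z = theta ** alpha * z + rng.expovariate(1.0)
    return z

rng = random.Random(2)
alpha, beta = 1.0, 2.0
mean = sum(z_sample(alpha, beta, rng) for _ in range(5000)) / 5000
# E(Z_inf) = (beta + alpha)/alpha = 3 for these parameters
assert abs(mean - (beta + alpha) / alpha) < 0.15
```

Since E(θ^α) < 1 always holds, the contraction is geometric on average and 200 iterations are amply sufficient for the burn-in.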
Next observe that the distribution of Z_∞, being a convolution of two distributions one of which is absolutely continuous with respect to the Lebesgue measure, is absolutely continuous. Hence, it has a probability density function f_{Z_∞}. Since X_∞ d= θ_0 Z_∞^{1/α} and the probability density function of Z_∞^{1/α} is given by αx^{α−1} f_{Z_∞}(x^α), the random variable X_∞ also has a probability density function f_{X_∞}, which implies that f*(x)x = f_{X_∞}(x) for x > 0. To show that f* is positive a.e. it is enough to show, by Lemma 2, that f_{Z_∞} is positive a.e. Since Z_∞ is the sum of the independent random variables ε_1 and θ_1^α Z̃_∞, where Z̃_∞ d= Z_∞ is independent of (θ_1, ε_1), its density is the convolution of the exponential density with some distribution, which shows that f_{Z_∞} is positive on an interval (x_0, ∞), where x_0 ≥ 0. To complete the proof, it remains to show that x_0 = 0. From (22) it follows that f_{Z_∞}(x) satisfies the following equation

f_{Z_∞}(x) = ∫_0^x e^{−(x−s)} ∫_0^1 g_α(t) f_{Z_∞}(s/t) (1/t) dt ds,

where g_α is the probability density function of the random variable θ_0^α. By changing the order of integration, we obtain

f_{Z_∞}(x) = ∫_0^1 g_α(t) ∫_0^{x/t} e^{−(x−ty)} f_{Z_∞}(y) dy dt.

Suppose that x_0 > 0. Then for every x < x_0 and every y > x_0 the inner integrand is positive whenever t < x/y, so f_{Z_∞}(x) = 0 forces ∫_0^{x/y} g_α(t) dt = 0. This implies that for every r < 1 we have ∫_0^r g_α(t) dt = 0, which contradicts the fact that ∫_0^1 g_α(t) dt = 1 and shows that x_0 = 0. Consequently, the density f* is positive a.e.
Lemmas 1 and 3 imply that all assumptions of Theorem 2 hold. Theorem 1 now follows by combining Theorem 2 and the next lemma.

Lemma 4. Condition (18) holds if and only if condition (7) holds.

Proof. We have t_1 = (1/α) log(1 + ε_1 X_0^{−α}), where ε_1 and X_0 are independent. This leads to

α t_1 = log(X_0^α + ε_1) − α log X_0.

To calculate the first moment of t_1 we take X_0 with the distribution μ* of X_∞. Using (21) and E log ε_1 = −γ, where γ is the Euler-Mascheroni constant, we obtain that t_1 has a finite first moment if and only if |E log X_∞| < ∞. We have

α E(t_1) = E log(X_∞^α + ε_1) − α E log X_∞.

Moreover, from (20) it follows that

E log X_∞ = E log θ_1 + (1/α) E log(X_∞^α + ε_1).

On the other hand, E(−log θ_1) < ∞ is exactly condition (7). Combining the above shows that E(t_1) = E(−log θ_1), which is finite if and only if |E log X_∞| < ∞, and completes the proof.
We conclude this section with a construction of the jump Markov process {Y(t)}_{t≥0} corresponding to the fragmentation equation (1) with a(x) = x^α and α < 0. The sample path of Y(t) starting at Y(0) = Y_0 is defined as in (12) as long as t ∈ [τ_n, τ_{n+1}) for some n. Since

τ_n = Y_0^{−α} Σ_{k=1}^n ε_k ∏_{j=1}^{k−1} θ_j^{−α}

and −α > 0, we see, as in the proof of Lemma 3, that the limit

I_∞ = Σ_{k=1}^∞ ε_k ∏_{j=1}^{k−1} θ_j^{−α}

exists and is finite a.s. Thus the explosion time of the process, being defined by τ_∞ = lim_{n→∞} τ_n, is finite a.s. The sequence Y_n, n ≥ 0, is non-increasing and converges to 0 a.s. Consequently, we can set Y(t) = 0 for t ≥ τ_∞ and say that Y_0^{−α} I_∞ is the first time when Y(t) reaches 0. Observe that a representation analogous to (8) remains valid, where the corresponding piecewise deterministic Markov process {X(t)}_{t≥0} has characteristics (π, ϕ, P) with a suitable flow π and jump rate ϕ, and the evolution equation as in [28, Section 6.3] or [1, Section 4]. Another representation of {Y(t)}_{t≥0} is as follows. Let {Z(t)}_{t≥0} be a compound Poisson process of the form

Z(t) = −Σ_{j=1}^{N(t)} log θ_j,  t ≥ 0,

where {N(t)}_{t≥0} is a Poisson process with jump times τ̂_n = Σ_{j=1}^n ε_j, n ≥ 1. Then {Y(t)}_{t≥0} can be represented in the form [19]

Y(t) = Y_0 e^{−Z(ρ(t))},  t ≥ 0,

where ρ is the time-change given by

Y_0^{−α} ∫_0^{ρ(t)} e^{αZ(r)} dr = t

if and only if t < Y_0^{−α} I_∞, and ρ(t) = +∞ otherwise. The random variable I_∞ is an example of the so-called exponential functional (see e.g. [13])

I_∞ = ∫_0^∞ e^{αZ(s)} ds.

This can be easily seen by noting that N(s) = k for s ∈ [τ̂_k, τ̂_{k+1}) with τ̂_0 := 0, ε_{k+1} = τ̂_{k+1} − τ̂_k, k ≥ 0, and

∫_0^∞ e^{αZ(s)} ds = Σ_{k=0}^∞ ε_{k+1} ∏_{j=1}^k θ_j^{−α} = I_∞.

To get finiteness of I_∞ in terms of pathwise properties of the process {Z(t)}_{t≥0}, one can simply assume that E(Z(1)) ∈ (0, ∞), which is equivalent to (7).
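The finiteness of the exponential functional I_∞ for α < 0 is easy to see numerically. The sketch below (our illustration, again under the assumed kernel h(r) = β r^{β−2}, i.e., θ ~ Beta(β, 1)) truncates the series for I_∞; since E(θ^{−α}) = β/(β − α) < 1, taking expectations term by term gives E(I_∞) = (β − α)/(−α).

```python
import random

def I_inf(alpha, beta, rng, n=200):
    """Truncated series I_inf = sum_k eps_k * prod_{j<k} theta_j**(-alpha),
    with alpha < 0, eps ~ Exp(1), theta ~ Beta(beta, 1) (illustrative kernel).
    Terms decay geometrically in expectation, so n=200 is ample."""
    total, prod = 0.0, 1.0
    for _ in range(n):
        total += rng.expovariate(1.0) * prod
        prod *= (rng.random() ** (1.0 / beta)) ** (-alpha)
    return total

rng = random.Random(3)
alpha, beta = -1.0, 2.0
mean = sum(I_inf(alpha, beta, rng) for _ in range(5000)) / 5000
# E(I_inf) = (beta - alpha)/(-alpha) = 3 for these parameters
assert abs(mean - (beta - alpha) / (-alpha)) < 0.2
```

A finite Monte Carlo mean close to the theoretical value illustrates that the explosion time τ_∞ = Y_0^{−α} I_∞ is indeed finite a.s., which is the shattering phenomenon.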

Examples and final remarks
We have proved that the homogeneous fragmentation equation has a self-similar solution if and only if E(log θ_1) > −∞, where θ_1 is a random variable with distribution function H as in (10). In that case the growth-fragmentation equation has an integrable stationary solution. Here we give the formula for the stationary solution in terms of the probability density function of the random variable Z_∞ as defined in (22).
Using the notation of [11], note that the stochastic transition kernel K of the Markov chain (X_n)_{n≥0} defined in (16) is of the form

K(x, B) = ∫_0^∞ ϕ(e^t x) e^{−∫_0^t ϕ(e^r x) dr} P(e^t x, B) dt,

see e.g. [11, Theorem 3.14], where ϕ(x) = αx^α and P is the stochastic transition kernel defined in (11). We have

∫_B Kf(x) m(dx) = ∫_0^∞ K(x, B) f(x) m(dx)  (23)

for all B ∈ B(0, ∞) and f ∈ D(m), where K is the stochastic operator on L^1 corresponding to K. Since P in (5) is the transition operator corresponding to P, we obtain, by (23) and (9), that K acts as P applied to the density of the pre-jump position. Hence the stochastic operator K, being the transition operator on L^1 corresponding to K, is given by Kf = P K_0 f, where K_0 f denotes the density with respect to m of the random variable (ξ^α + ε_1)^{1/α} when ξ has density f.
Note that f* in Lemma 3 is the unique invariant density of the stochastic operator K; thus, by [11, Theorem 3.15], the semigroup {P(t)}_{t≥0} has an invariant function f̃*, unique up to a multiplicative constant. Note that f̃* might not be integrable, but taking B = (0, ∞) and f = f* in (23) shows that ‖f̃*‖ = ∫_E E_x(t_1) μ*(dx). From the proof of [11, Theorem 3.15] it follows that this equals E(t_1) when X_0 has distribution μ*. Thus ‖f̃*‖ = E(−log θ_1), by Lemma 4. On the other hand, f*(x)x is the probability density function of the random variable X_∞ d= θ_0 Z_∞^{1/α}, and the operator P corresponds to a multiplication by θ_1. Thus the probability density function of Z_∞^{1/α} is αx^{α−1} f_{Z_∞}(x^α). Consequently, if E(−log θ_1) < ∞ then the semigroup {P(t)}_{t≥0} has a unique invariant density u* and it is given by

u*(x) = f_{Z_∞}(x^α) / (E(−log θ_1) x^2),  x > 0.  (24)

We have proved the following.

Proposition 2. If (7) holds, equivalently E(−log θ_1) < ∞, then the self-similar solution of equation (1) in Theorem 1 is of the form

c*(t, x) = f_{Z_∞}((1 + t) x^α) / (E(−log θ_1) x^2),  t ≥ 0, x > 0.

This form of the self-similar solution should be compared with the scaling assumption in [32, equation (15)], where the function Φ corresponds to f_{Z_∞}.
We now give an exactly solvable example, known since [26].

Example 1. Let h(z) = βz^{β−2}, z ∈ (0, 1), with β > 0, so that θ_1 has the probability density function βz^{β−1} and E(−log θ_1) = 1/β < ∞. Then Z_∞ has a gamma distribution with shape parameter 1 + β/α, i.e.,

f_{Z_∞}(z) = z^{β/α} e^{−z} / Γ(1 + β/α),  z > 0,

where Γ is the gamma function. This implies that

u*(x) = α x^{β−2} e^{−x^α} / Γ(β/α),  x > 0.

Note that u*(x)x is the probability density function of the generalized gamma distribution with parameters (α, β, 1).
The random variable Z_∞ can be represented as the exponential functional

Z_∞ = ∫_0^∞ e^{−αZ(s)} ds

of the compound Poisson process {Z(t)}_{t≥0} defined above. The Laplace exponent φ(q) of {Z(t)}_{t≥0}, which is defined by E(e^{−qZ(t)}) = e^{−tφ(q)}, t > 0, is of the form

φ(q) = 1 − E(θ_1^q) = 1 − ∫_0^1 z^{q+1} h(z) dz.

Hence, [13, Proposition 3.3] implies that the random variable Z_∞ is determined by its moments

E(Z_∞^n) = n! / (φ(α) φ(2α) ··· φ(nα)),  n = 1, 2, . . . .
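The moment recursion E(Z_∞^n) = n E(Z_∞^{n−1})/φ(αn) can be verified exactly against Example 1, where Z_∞ has a gamma distribution with shape 1 + β/α. The short deterministic check below is ours; for the kernel h(z) = βz^{β−2} one has φ(αk) = 1 − β/(β + αk).

```python
import math

def phi_alpha(k, alpha, beta):
    """phi(alpha*k) = 1 - E(theta**(alpha*k)) for theta ~ Beta(beta, 1),
    i.e., for the kernel h(z) = beta * z**(beta-2) of Example 1."""
    return 1.0 - beta / (beta + alpha * k)

def moment(n, alpha, beta):
    """E(Z_inf^n) computed by the recursion m_k = k * m_{k-1} / phi(alpha*k)."""
    m = 1.0
    for k in range(1, n + 1):
        m = k * m / phi_alpha(k, alpha, beta)
    return m

alpha, beta = 1.0, 2.0
shape = 1.0 + beta / alpha  # gamma shape parameter of Z_inf in Example 1
for n in range(1, 6):
    gamma_moment = math.gamma(shape + n) / math.gamma(shape)
    assert abs(moment(n, alpha, beta) - gamma_moment) < 1e-9 * gamma_moment
```

For instance, with α = 1 and β = 2 the first two moments are 3 and 12, exactly those of a Gamma(3) random variable.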
We next give two examples where the random variable Z ∞ can be identified through its moments.
Example 2. Recall that a random variable θ has a beta distribution with parameters (a, b), a, b > 0, if its probability density function is

f_θ(x) = x^{a−1}(1 − x)^{b−1} / B(a, b),  x ∈ (0, 1),

where B(a, b) is the beta function. If θ_1 is a product of two independent random variables with beta distributions with parameters (β_1, 1) and (β_2, 1), then E(−log θ_1) = 1/β_1 + 1/β_2, and Z_∞ is a product of two independent random variables, one beta distributed with parameters (1 + a_1, a_2) and the other with a gamma distribution with shape parameter 1 + a_2, where

a_1 = β_1/α and a_2 = β_2/α.  (25)

Remark 1. It is easily seen that if Z_∞ = θξ, where θ and ξ are independent random variables, θ has a beta distribution with parameters (1 + a_1, a) and ξ has a gamma distribution with shape parameter 1 + a_2, then the probability density function of Z_∞ can be computed with the help of Lemma 2.

Example 3. As in [32] consider now the function h(z) = pβ_1 z^{β_1−2} + (1 − p)β_2 z^{β_2−2}, z ∈ (0, 1), where β_1, β_2 > 0, p ∈ [0, 1]. We can assume that β_2 > β_1 and p > 0. Then p(a_2 − a_1) > 0, where a_1, a_2 are as in (25), E(−log θ_1) = p/β_1 + (1 − p)/β_2, and Z_∞ is the product of two independent random variables, one beta distributed with parameters (1 + a_1, p(a_2 − a_1)) and the other with a gamma distribution with shape parameter 1 + a_2. Remark 1 and Proposition 2 allow us to recover the scaling solutions from [32].
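The beta-gamma product identification in Example 3 can be cross-checked deterministically: the moments n!/(φ(α)···φ(nα)) of Z_∞ must coincide with the moments of the product of independent Beta(1 + a_1, p(a_2 − a_1)) and Gamma(1 + a_2) random variables. The sketch below is our illustration, assuming a_i = β_i/α as in (25).

```python
import math

def beta_moment(n, a, b):
    """E(B**n) for B ~ Beta(a, b), via gamma functions."""
    return math.gamma(a + n) * math.gamma(a + b) / (math.gamma(a) * math.gamma(a + b + n))

alpha, b1, b2, p = 1.0, 1.0, 3.0, 0.5
a1, a2 = b1 / alpha, b2 / alpha     # assumed form of (25)

m = 1.0
for n in range(1, 6):
    # phi(alpha*n) = 1 - E(theta_1**(alpha*n)) for the mixture kernel of Example 3
    phi_n = 1.0 - (p * a1 / (a1 + n) + (1 - p) * a2 / (a2 + n))
    m = n * m / phi_n               # E(Z_inf**n) via the moment recursion
    # moments of Beta(1 + a1, p*(a2 - a1)) * Gamma(1 + a2)
    prod = beta_moment(n, 1 + a1, p * (a2 - a1)) * math.gamma(1 + a2 + n) / math.gamma(1 + a2)
    assert abs(m - prod) < 1e-9 * prod
```

With these parameters the first moment is 8/3 and the second is 10 on both sides, confirming the identification for this choice of β_1, β_2, p.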
We conclude the paper by commenting on the behaviour of solutions of the fragmentation equation when E(log θ_1) = −∞.

Proposition 3. If E(−log θ_1) = ∞, then the semigroup {P(t)}_{t≥0} has no invariant density and it is sweeping from every set B satisfying ∫_B f*(x) m(dx) < ∞.

Proof. If E(−log θ_1) = ∞ then the invariant function of the semigroup {P(t)}_{t≥0} is not integrable and the semigroup has no invariant density, since if there were one, then it would be a scalar multiple of that function, by [11, Corollaries 3.11, 3.12]. Hence, for every s > 0 the operator P(s) does not have an invariant density, by [20, Proposition 7.12.1]. From Lemma 1 and Proposition 1 it follows that the operator P(s) is sweeping, which together with [20, Theorem 7.11.1] implies that the semigroup {P(t)}_{t≥0} is sweeping from every set B satisfying ∫_B f*(x) m(dx) < ∞.