On the lifting of deterministic convergence rates for inverse problems with stochastic noise

Both for the theoretical and the practical treatment of Inverse Problems, the modeling of the noise is a crucial part. One either models the measurement error via a deterministic worst-case assumption or assumes a certain stochastic behavior of the noise. Although some connections between the two models are known, the respective communities have developed largely independently. In this paper we seek to bridge the gap between the deterministic and the stochastic approach and show convergence and convergence rates for Inverse Problems with stochastic noise by lifting the theory established in the deterministic setting into the stochastic one. This opens the wide field of deterministic regularization methods for stochastic problems without the need for an individual stochastic analysis of each problem.

In Inverse Problems, the model of the inevitable data noise is of utmost importance. In most cases an additive noise model

y^δ = y + ε (1)

is assumed, where y ∈ Y is the true data of the unknown x ∈ X under the action of the (in general) nonlinear operator F : X → Y, i.e.,

F(x) = y, (2)

and ε in (1) corresponds to the noise. The spaces X, Y are assumed to be Banach or Hilbert spaces. When speaking of Inverse Problems, we assume that (2) is ill-posed. In particular this means that solving (2) for x with noisy data (1) is unstable in the sense that "small" errors in the data may lead to arbitrarily large errors in the solution. Hence, (1) alone is not a sufficient description of the noise; more information is needed in order to compute solutions from the data in a stable way. In the deterministic setting, one assumes

d_Y(y, y^δ) ≤ δ (3)

for some δ > 0, where d_Y(·, ·) is an appropriate distance functional. Typically, d_Y is induced by a norm such that (3) reads ||y − y^δ|| ≤ δ. Here and further on we use the superscript ·^δ to indicate the deterministic setting. Solutions of (2) under the assumptions (1), (3) are often computed via a Tikhonov-type variational approach,

x^δ_α ∈ argmin_x d_Y(F(x), y^δ) + α Φ(x), (4)

where again d_Y is a distance function and Φ(·) is the penalty term used to stabilize the problem and to incorporate a-priori knowledge into the solution.
The regularization parameter α is used to balance between data misfit and penalty and has to be chosen appropriately. The literature in the deterministic setting is rich; at this point we only refer to the monographs [30,12,23] for an overview. The deterministic worst-case error stands in contrast to stochastic noise models, where a certain distribution of the noise ε in (1) is assumed. We shall indicate the stochastic setting by the superscript ·^η. In this paper, η will be the parameter controlling the variance of the noise. Depending on the actual distribution of ε, d_Y(y, y^η) may be arbitrarily large, but only with low probability. A very popular approach to find a solution of (2) is the Bayesian method; for more detailed information, we refer to [25,33,31,34,7]. In the Bayesian setting, the solution of the Inverse Problem is given as a distribution of the random variable of interest, the posterior distribution π_post, determined by Bayes' formula

π_post(x|y^η) = π(y^η|x) π_pr(x) / π_{y^η}(y^η). (5)
That is, roughly speaking, every value x is assigned a probability of being a solution to (2) given the noisy data y^η. In (5), the likelihood function π(y^η|x) represents the model for the measurement noise, whereas the prior distribution π_pr(x) represents a-priori information about the unknown. The data distribution π_{y^η}(y^η) as well as the normalization constants are usually neglected, since they only influence the normalization of the posterior distribution. In practice one is often more interested in finding a single representative as solution instead of the distribution itself. Popular point estimates are the conditional expectation (conditional mean, CM)

E(x|y^η) = ∫ x π_post(x|y^η) dx (6)

and the maximum a-posteriori (MAP) solution x_MAP = argmax_x π_post(x|y^η), i.e., the most probable value for x given the data y^η and the prior distribution. Both point estimators are widely used. The computation of the CM solution is often slow, since it requires repeated sampling of stochastic quantities and the evaluation of high-dimensional integrals. The MAP solution, however, essentially leads to a Tikhonov-type problem: assuming π(y^η|x) ∝ exp(−d_Y(F(x), y^η)) and π_pr(x) ∝ exp(−αΦ(x)), one has, analogously to (4),

x_MAP = argmin_x d_Y(F(x), y^η) + αΦ(x).

Also non-Bayesian approaches for Inverse Problems often seek to minimize a functional of type (4), see e.g. [24,2], or use techniques known from deterministic theory such as filter methods [5,3]. Finally, Inverse Problems appear in the context of statistics; hence, the statistics community has developed methods to solve (2), partly again based on the minimization of (4). We refer to [13] for an overview.
In summary, Tikhonov-type functionals (4) and other deterministic methods frequently appear also in the stochastic setting. From a practical point of view, one would therefore expect to be able to use deterministic regularization methods for (2) even when the noise is stochastic. Indeed, the main question for the actual computation of the solution, given a particular sample of noisy data y^η, is the choice of the regularization parameter. A second question, mostly coming from the deterministic point of view, is that of convergence of the solutions as the noise approaches zero. In the stochastic setting these questions are often answered by a full stochastic analysis of the problem. In this paper we present a framework that allows one to find appropriate regularization parameters, prove convergence of regularization methods and obtain convergence rates for Inverse Problems with a stochastic noise model by directly using existing results from the deterministic theory.
The paper takes several ideas from the dissertation [20], which is only publicly available as book [21] and not published elsewhere. It is organized as follows. In Section 1 we discuss an issue occurring in the transition from deterministic to stochastic noise for infinite dimensional problems. The Ky Fan metric, which will be the main ingredient of our analysis, and its relation to the expectation will be introduced in Section 2. We present our framework to lift convergence results from the deterministic setting into the stochastic setting in Section 3. Examples for the lifting strategy are given in Section 4.

On the noise model
Before addressing the convergence theory, we would like to discuss stochastic noise modeling and its intrinsic conflict with the deterministic model. Here and throughout the rest of the paper, let (Ω, F, P) be a complete probability space with a set Ω of outcomes of the stochastic event, F the corresponding σ-algebra and P a probability measure, P : (Ω, F) → [0, 1]. We restrict ourselves to probability measures for the sake of simplicity; extensions to more general measures are straightforward. In the Hilbert space setting, the noise is typically modeled as follows, see for example [29,30,3,27]. Let ξ : Ω → Y be a stochastic process. Then for y ∈ Y, ⟨y, ξ⟩ defines a real-valued random variable. Assuming that E(⟨ỹ, ξ⟩²) < ∞ for all ỹ ∈ Y and that this expectation is continuous in ỹ,

(y, ỹ) ↦ E(⟨y, ξ⟩⟨ỹ, ξ⟩)

defines a continuous, symmetric bilinear form. In particular, there exists the covariance operator Cov_ξ : Y → Y with

⟨Cov_ξ y, ỹ⟩ = E(⟨y, ξ⟩⟨ỹ, ξ⟩). (9)

For the stochastic analysis of infinite dimensional problems via deterministic results, this noise model is problematic. Namely, if {u_n}_{n∈N} is an orthonormal basis in Y, the set {⟨u_n, ξ⟩}_{n∈N} consists of infinitely many identically distributed random variables with 0 < E|⟨u_n, ξ⟩|² = const < ∞ [30]. Thus

||ξ||² = Σ_{n∈N} |⟨u_n, ξ⟩|²

is almost surely infinite, and a realization of the noise is an element of the Hilbert space Y with probability zero. Let us take the common example of Gaussian white noise, which can be modeled via the above construction. Namely, with E(⟨y, ξ⟩) = 0 for all y ∈ Y and the covariance operator

Cov_ξ = η² I, (10)

where I is the identity and η > 0 the variance parameter, Gaussian white noise is described [30,27]. In particular,

E||ξ||² = Σ_{n∈N} E|⟨u_n, ξ⟩|² = Σ_{n∈N} η² = ∞. (11)

As a consequence of (11), and as explained for example in [27], a realization of such a Gaussian random variable is with probability zero an element of an infinite dimensional L²-space. It is therefore inappropriate to use an L²-norm for the residual in case of an infinite dimensional problem.
Since in this case a realization of Gaussian white noise lies (almost surely) only in Sobolev spaces H^s with s < −d/2, where d is the dimension of the domain, one should adjust the norm for the residual accordingly. Except for the paper [27], this issue seems not to have been addressed in the literature. A main reason for this might be that for the practical solution of the Inverse Problem this is not a severe issue, since in reality the measurements are finite dimensional and, in order to use a computer to solve the problem, a finite dimensional approximation of the unknown object has to be used. In this case the sum in (11) is finite and the noise lies almost surely in the finite dimensional space. However, difficulties arise whenever one seeks to investigate convergence of the discretized problem to its underlying infinite dimensional problem. We will not address this issue and assume throughout the whole work that E||ε|| < ∞, or use the slightly weaker bound on the Ky Fan metric (see Section 2). In order to handle the Ky Fan metric we need to be able to evaluate probabilities P(||y − y^η|| > ε). If Y is finite dimensional, this is clear. For infinite dimensional problems, however, we have to assume that the noise is smooth enough for the sum in (11) to converge. Examples for this are Brownian noise (1/f²-noise) or pink noise (1/f-noise), see e.g. [14,26]. At this point we would also like to mention that, as a consequence of our rather generic noise model, we may not be able to make use of specific properties of the noise as would be possible when focusing on a particular distribution of the noise. However, we are able to show convergence for a large variety of regularization methods.
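The divergence in (11) is easy to observe in a discretized setting. The following Monte Carlo sketch is our own illustration (the dimensions, the sample count and η = 1 are arbitrary choices): the empirical value of E||ξ||² grows linearly with the discretization dimension n, so no finite limit exists as n → ∞.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 1.0  # variance parameter of the white noise

# For white noise, each coefficient <u_n, xi> w.r.t. an orthonormal basis is
# identically distributed with variance eta^2, so E||xi||^2 = n * eta^2 grows
# without bound as the discretization dimension n increases.
for n in [10, 100, 1000]:
    xi = eta * rng.standard_normal((2000, n))    # 2000 realizations in R^n
    mean_sq_norm = np.sum(xi**2, axis=1).mean()  # Monte Carlo estimate of E||xi||^2
    print(n, mean_sq_norm)                       # roughly n * eta^2
```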

The Ky Fan metric
In order to measure the magnitude of the stochastic noise and the quality of the reconstructions, we need metrics that incorporate the stochastic nature of the problem. One such metric, which will be the main tool for our stochastic convergence analysis, is the Ky Fan metric (cf. [28]). It is defined as follows.
Definition 2.1. Let X_1 and X_2 be random variables in a probability space (Ω, F, P) with values in a metric space (χ, d_χ). The distance between X_1 and X_2 in the Ky Fan metric is defined as

ρ_K(X_1, X_2) := inf{ε > 0 : P(d_χ(X_1(ω), X_2(ω)) > ε) ≤ ε}. (12)

We will often drop the explicit reference to ω. This metric essentially allows one to lift results from a metric space to the space of random variables, as the connection to the deterministic setting is inherent via the metric d_χ used in its definition. The deterministic metric is often induced by a norm ||·||. We will implicitly assume that equation (2) is scaled appropriately, since ρ_K(X_1, X_2) ≤ 1 for all X_1, X_2 by definition. Note that one can use definition (12) also if d_χ is a more general distance function than a metric. Then the construction (12) itself is no longer a metric; however, the techniques used in later parts of the paper readily extend to this setting.
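For concrete noise distributions, the Ky Fan distance can be approximated by Monte Carlo directly from Definition 2.1: one estimates the tail function P(d_χ(X_1, X_2) > ε) from samples and locates its crossing with the identity. The sketch below is our own illustration (the helper `ky_fan_mc` and the scalar Gaussian example are not from the paper):

```python
import numpy as np

def ky_fan_mc(dist_samples):
    """Empirical estimate of inf{eps > 0 : P(D > eps) <= eps} from samples of D."""
    d = np.sort(np.asarray(dist_samples))
    n = d.size
    tail = np.arange(n - 1, -1, -1) / n   # tail[k] ~ P(D > d[k])
    k = int(np.argmax(d >= tail))         # first index where d crosses the tail
    return max(d[k], tail[k])

rng = np.random.default_rng(1)
# Example: D = d(X1, X2) = |X1 - X2| with X1 - X2 ~ N(0, eta^2).
# rho_K shrinks with eta, but more slowly than eta itself.
for eta in [1.0, 0.1, 0.01]:
    samples = np.abs(eta * rng.standard_normal(100_000))
    print(eta, ky_fan_mc(samples))
```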
An immediate consequence of (12) is that ρ_K(X_1, X_2) = 0 if and only if X_1 = X_2 almost surely. Convergence in the Ky Fan metric is equivalent to convergence in probability, i.e., for a sequence {X_k}_{k∈N} of random variables and a random variable X one has ρ_K(X_k, X) → 0 if and only if X_k → X in probability. Hence convergence in the Ky Fan metric also leads to pointwise (almost sure) convergence of certain subsequences in the metric d_χ [10]. A somewhat more intuitive and more frequently used stochastic metric is the expectation or, more generally, a (stochastic) L^p metric. For random variables Y_1 and Y_2 with values in a metric space (χ, d_χ),

E(d_χ(Y_1, Y_2)^p)^{1/p} (13)

defines the p-th moment of d_χ(Y_1, Y_2) for p ≥ 1, assuming the existence of the integral. We will use p = 1 and refer to the corresponding convergence as convergence in expectation. Note that, since the variance is defined as Var(d_χ(Y_1, Y_2)) = E(d_χ(Y_1, Y_2)²) − E(d_χ(Y_1, Y_2))² ≥ 0, the expectation is always bounded by the second moment. We will show later that for parameter choice rules the expectation of the noise has to be slightly overestimated anyway; hence estimating E(d_Y(y, y^η)) via the popular and often easier to compute second moment ((13) with p = 2) is not problematic. While the main part of our analysis is based on the description of the noise and the reconstruction quality in the Ky Fan metric, we will also allow the expectation as measure of the stochastic noise and partially show convergence of the reconstructed solutions in expectation. To this end, we comment in the following on the connection between those two metrics.
It is well known that convergence in expectation implies convergence in probability, see for example [10]. Hence, convergence in the Ky Fan metric is implied by convergence in expectation (and also by convergence of higher moments). Namely, with Markov's inequality one has, for an arbitrary nonnegative random variable X with E(X) < ∞ and C > 0,

P(X ≥ C) ≤ E(X)/C. (14)

Under an additional assumption one can show, conversely, that convergence in probability implies convergence in expectation. Recall that a sequence {x_k}_{k∈N} ⊂ L¹(P) is called uniformly integrable if sup_k E(||x_k|| 1_{{||x_k|| > C}}) → 0 as C → ∞. With this notion, the following theorem holds.

Theorem 2.1 (see e.g. [10]). Let {x_k}_{k∈N} ⊂ L¹(P) be a sequence convergent almost everywhere (or in probability) to a function x. If the sequence {x_k}_{k∈N} is uniformly integrable, then it converges to x in the norm of L¹(P).
From a practical point of view, uniform integrability of a sequence of regularized solutions to an Inverse Problem is a rather natural condition. Since Inverse Problems typically arise from some real-world application, it is to be expected that the true solution is bounded. For example, in Computed Tomography the density of the tissue inside the body cannot be arbitrarily high. Although for an Inverse Problem with a stochastic noise model boundedness of the regularized solutions cannot be guaranteed due to the possibly huge measurement error, one can enforce the condition from a-priori knowledge of the solution.
Assume that the true solution x† fulfills ||x†|| ≤ ρ and |x†| ≤ C globally for some fixed ρ, C > 0. Under this assumption, let {x^{η(k)}_k}_{k∈N} be a sequence of regularized solutions for noisy data with variance η(k) → 0, modified according to the a-priori bound via

x^{η(k)}_k := R_{α(k)}(y^{η(k)}) if ||R_{α(k)}(y^{η(k)})|| ≤ ρ, and x^{η(k)}_k := x̄ otherwise, (15)

with some fixed x̄ ∈ X, ||x̄|| ≤ ρ. Then the sequence {x^{η(k)}_k}_{k∈N} is uniformly bounded and hence uniformly integrable. In other words, by discarding solutions that must be far away from the true solution in view of the a-priori knowledge, convergence in the Ky Fan metric implies convergence in expectation.
To close this section, let us remark on the computation of the Ky Fan distance. It can be estimated via the moments of the noise.
Lemma 2.2. Let X_1 and X_2 be random variables in a probability space (Ω, F, P) with values in a metric space (χ, d_χ) such that E(d_χ(X_1, X_2)^s) < ∞ for some s > 0. Then

ρ_K(X_1, X_2) ≤ (E(d_χ(X_1, X_2)^s))^{1/(s+1)}.

Proof. One has, due to Markov's inequality (14) and the monotonicity of the mapping z → z^s for z ≥ 0, for C > 0,

P(d_χ(X_1, X_2) > C) ≤ P(d_χ(X_1, X_2)^s ≥ C^s) ≤ E(d_χ(X_1, X_2)^s)/C^s.

Setting C = (E(d_χ(X_1, X_2)^s))^{1/(s+1)} makes the right-hand side equal to C, which together with definition (12) of the Ky Fan metric yields the claim.
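Moment bounds of this kind are easy to check numerically. The sketch below is our own illustration (the parameters m, η and the naive empirical estimator `ky_fan_mc` are arbitrary choices): it draws ε ~ N(0, η²I_m), estimates ρ_K(ε, 0) empirically, and compares it with the moment bound for s = 2 and with the exact expectation E(||ε||₂) = η√2 Γ((m+1)/2)/Γ(m/2) of the chi distribution.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
m, eta = 5, 0.02                                   # illustrative choices

noise = eta * rng.standard_normal((200_000, m))    # samples of eps ~ N(0, eta^2 I_m)
dist = np.linalg.norm(noise, axis=1)               # samples of ||eps||_2

def ky_fan_mc(d):
    """Empirical estimate of inf{eps > 0 : P(D > eps) <= eps}."""
    d = np.sort(d)
    tail = np.arange(d.size - 1, -1, -1) / d.size  # tail[k] ~ P(D > d[k])
    k = int(np.argmax(d >= tail))
    return max(d[k], tail[k])

rho = ky_fan_mc(dist)                              # Monte Carlo rho_K(eps, 0)
moment_bound = (dist ** 2).mean() ** (1 / 3)       # (E||eps||^s)^(1/(s+1)) with s = 2
exp_norm = eta * math.sqrt(2) * math.gamma((m + 1) / 2) / math.gamma(m / 2)
print(exp_norm, rho, moment_bound)                 # E||eps||_2 < rho_K <= moment bound
```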
Note that the bound of Lemma 2.2 may considerably overestimate the Ky Fan distance even if moments exist for all s ∈ N, see [20,15], due to the tails of the distributions. In the Gaussian case, a direct estimate has been derived in [32,22]. We present it in Proposition 3.6.
3 Convergence in the stochastic setting

Deterministic Inverse Problems with stochastic noise
As mentioned previously, the intention of this paper is to show convergence for Inverse Problems under a stochastic noise model using results from the deterministic setting. Assume we have at hand a deterministic regularization method of our liking for the solution of (2) under the noisy data (1), where now d_Y(y, y^δ) ≤ δ for some δ > 0. By a regularization method we understand (possibly nonlinear) mappings

R_α : Y → X,

where x^η_α = R_α(y^η) is the regularized solution for the regularization parameter α given the data y^η. Often, x^δ_α is obtained via the minimization of functionals of the type (4). In order to deserve the name regularization, we require R_α to fulfill

lim_{δ→0} x^δ_α = x†

under a certain choice of the regularization parameter, chosen either a priori, α = α(δ), or a posteriori, α = α(δ, y^δ). In our notation x† is the true solution, usually the minimum norm solution with respect to the penalty Φ in (4), i.e.,

x† ∈ argmin{Φ(x) : F(x) = y}.

Note that, in particular for nonlinear problems, x† need not be unique. In [20,21] it was pointed out that this is problematic for the lifting arguments. A standard argument in the deterministic theory is to prove convergence of subsequences to the desired solution, and then to deduce convergence of the whole sequence of regularized solutions, if possible. In the stochastic setting this is not possible in general, since subsequences for different ω do not have to be related. A constructed example for this behavior can be found in Section 4.1 of [20,21]. In order to lift general deterministic regularization methods into the stochastic setting we must therefore require that x† is unique. We formulate our convergence results assuming the noise to be bounded with respect to the Ky Fan metric or in expectation. As we will see, in the latter case we have to "inflate" the expectation for decreasing variance η in order to obtain convergence. For the analysis we mainly use a lifting argument based on deterministic theory.
In [20,21, Theorem 4.1] it was proved how, by means of the Ky Fan metric, deterministic results can be lifted to the space of random variables for nonlinear Tikhonov regularization. Since the theorem is based solely on the fact that there is a deterministic regularization theory and that the probability space Ω can be decomposed into a part where the deterministic theory holds and a small part where it does not, it is easily generalized. Before we state the theorem, we need the following lemmata.
Lemma 3.1 ([11], see also [10]). Let (Ω, F, P) be a complete probability space and let x_k and x be measurable functions from Ω into a metric space χ with metric d_χ. If x_k → x almost surely, then for every ε > 0 there exists Ω̃ ⊂ Ω with P(Ω̃) ≥ 1 − ε such that x_k → x uniformly on Ω̃.

Lemma 3.2 ([20,21], Proposition 1.10). Let {x_k}_{k∈N} be a sequence of random variables that converges to x in the Ky Fan metric. Then for any ν > 0 and ε > 0 there exist Ω̃ ⊂ Ω, P(Ω̃) ≥ 1 − ε, and a subsequence {x_{k_j}} with d_χ(x_{k_j}(ω), x(ω)) ≤ ν for all ω ∈ Ω̃ and all j ∈ N. Furthermore there exists a subsequence that converges to x almost surely.
Proof. We give a sketch of the proof for the first statement taken from [20,21].
Since ρ_K(x_k, x) → 0, we can pick a subsequence {x_{k_j}} with ρ_K(x_{k_j}, x) < min(ν, ε 2^{−j}). By definition of the Ky Fan metric (12), for each j there then exists Ω_j ⊂ Ω with P(Ω \ Ω_j) ≤ ε 2^{−j} and d_χ(x_{k_j}(ω), x(ω)) ≤ ν for all ω ∈ Ω_j. The set Ω̃ := ∩_{j∈N} Ω_j fulfills P(Ω̃) ≥ 1 − Σ_{j∈N} ε 2^{−j} ≥ 1 − ε, which proves the first statement. The second one follows since convergence in the Ky Fan metric is equivalent to convergence in probability, which itself implies almost sure convergence of a subsequence, cf. [10].
With this, we are ready for the convergence theorem which we shall split in two parts, one for the Ky Fan metric as error measure and one for the expectation.
Theorem 3.3. Let R_α be a regularization method for the solution of (2) in the deterministic setting under a suitable choice of the regularization parameter. Let now y^η = y + ε(η), where ε(η) is a stochastic error such that ρ_K(y, y^η) → 0 as η → 0. Then, assuming (2) has a unique solution x† and all necessary assumptions for the deterministic theory (except the bound on the noise) hold with probability one, the regularization method R_α fulfills

lim_{η→0} ρ_K(x†, R_α(y^η)) = 0

under the same parameter choice rule as in the deterministic setting with δ replaced by ρ_K(y, y^η). If the regularized solutions are defined by (15), then it holds that

lim_{η→0} E(d_X(x†, R_α(y^η))) = 0.

Proof. Let {η_k}_{k∈N} be an arbitrary sequence with η_k → 0, write x_{α(η_k)} := R_{α(η_k)}(y^{η_k}) and set θ := lim sup_{k→∞} ρ_K(x†, x_{α(η_k)}). (Note that 0 ≤ θ ≤ 1 due to the properties of the Ky Fan metric.) We show in the following that for arbitrary ε > 0 we have θ/2 ≤ 2ε and hence θ = 0. As a first step we pick a "worst case" subsequence {y^{η_{k_j}}} of {y^{η_k}}, a subsequence for which the corresponding solutions satisfy ρ_K(x†, x_{α(η_{k_j})}) ≥ θ/2. We now show that even from this worst-case sequence we can pick a subsequence {y^{η_{k_{j_l}}}} whose solutions converge to x†. Let ε > 0. According to Lemma 3.2 we can pick a subsequence {y^{η_{k_{j_l}}}} with d_Y(y(ω), y^{η_{k_{j_l}}}(ω)) ≤ ν, ν > 0 arbitrarily small, on a set Ω̃ with P(Ω̃) ≥ 1 − ε. For all ω ∈ Ω̃, the noise tends to zero. We can therefore use the deterministic result with δ = d_Y(y(ω), y^{η_{k_{j_l}}}(ω)) and deduce that x_{α(η_{k_{j_l}})}(ω) → x† for all ω ∈ Ω̃. The convergence is not uniform in ω; nevertheless, pointwise convergence implies uniform convergence except on sets of small measure according to Lemma 3.1. Therefore

d_X(x_{α(η_{k_{j_l}})}(ω), x†) ≤ ε on a set Ω̂ ⊆ Ω̃ with P(Ω̂) ≥ 1 − 2ε

for η_{k_{j_l}} sufficiently small. By definition of the Ky Fan metric, this yields ρ_K(x_{α(η_{k_{j_l}})}, x†) ≤ max(ε, 2ε) = 2ε for η_{k_{j_l}} sufficiently small, and therefore lim sup_{l→∞} ρ_K(x_{α(η_{k_{j_l}})}, x†) ≤ 2ε. On the other hand, the worst-case sequence satisfied lim inf_{l→∞} ρ_K(x_{α(η_{k_{j_l}})}, x†) ≥ θ/2, so it follows that θ/2 ≤ 2ε. Because ε > 0 was arbitrary, this implies θ = 0, which concludes the proof of convergence in the Ky Fan metric. Convergence in expectation follows from Theorem 2.1, noting that by (15) the sequence of regularized solutions is uniformly integrable.
Theorem 3.4. Let R_α be a regularization method for the solution of (2) in the deterministic setting under a suitable choice of the regularization parameter. Let now y^η = y + ε(η), where ε(η) is a stochastic error such that E(d_Y(y, y^η)) → 0 as η → 0. Then, assuming (2) has a unique solution x† and all necessary assumptions for the deterministic theory (except the bound on the noise) hold with probability one, the regularization method R_α fulfills

lim_{η→0} ρ_K(x†, R_α(y^η)) = 0

under the same parameter choice rule as in the deterministic setting with δ replaced by τ(η)E(d_Y(y, y^η)), where τ(η) fulfills

τ(η) → ∞ and τ(η)E(d_Y(y, y^η)) → 0 as η → 0. (19)

If the regularized solutions are defined by (15), then it holds that

lim_{η→0} E(d_X(x†, R_α(y^η))) = 0.

Proof. As previously, we pick a "worst case" subsequence {y^{η_{k_j}}} of {y^{η_k}}, a subsequence for which the corresponding solutions satisfy ρ_K(x†, x_{α(η_{k_j})}) ≥ θ/2. Let ε > 0. By Markov's inequality (14) we have P(d_Y(y, y^{η_{k_j}}) > τ(η_{k_j})E(d_Y(y, y^{η_{k_j}}))) ≤ 1/τ(η_{k_j}). We can now pick a subsequence, which we again denote by {y^{η_{k_{j_l}}}}, such that 1/τ(η_{k_{j_l}}) ≤ ε for all l. This again defines, via the complement in Ω, a set Ω̃ with P(Ω̃) ≥ 1 − ε on which d_Y(y(ω), y^{η_{k_{j_l}}}(ω)) ≤ τ(η_{k_{j_l}})E(d_Y(y, y^{η_{k_{j_l}}})). As before, we can now apply the deterministic theory by substituting δ with τ(η_{k_{j_l}})E(d_Y(y, y^{η_{k_{j_l}}})). The remainder of the proof is identical to the one of Theorem 3.3.
The theorems justify the use of deterministic algorithms under a stochastic noise model. Since the proof is solely based on relating the stochastic noise to a deterministic one on subsets of Ω and does not use any specific properties of the regularization methods or the underlying spaces, it opens up most of the deterministic methods for a stochastic noise model. In particular, the parameter choice rules from the deterministic setting are easily adapted.
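The role of the inflation factor τ(η) can be illustrated numerically. In the sketch below (our own illustration; the choice τ(η) = η^{−1/4} is arbitrary, and any τ(η) → ∞ with τ(η)E(d_Y(y, y^η)) → 0 would serve), the effective deterministic noise level τ(η)E(d_Y(y, y^η)) still tends to zero, while by Markov's inequality the probability that the deterministic bound fails is at most 1/τ(η) → 0:

```python
import numpy as np

rng = np.random.default_rng(3)
for eta in [0.1, 0.01, 0.001]:
    d = np.abs(eta * rng.standard_normal(500_000))  # samples of d_Y(y, y^eta)
    expectation = d.mean()                          # decays proportionally to eta
    tau = eta ** -0.25                              # tau(eta) -> infinity as eta -> 0
    delta_eff = tau * expectation                   # inflated deterministic noise level
    fail_prob = (d > delta_eff).mean()              # empirical P(d_Y > tau * E(d_Y))
    print(eta, delta_eff, fail_prob, 1 / tau)       # delta_eff -> 0, fail_prob <= 1/tau
```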
As usual in deterministic literature, the general convergence theorem is followed by convergence rates which are obtained under additional assumptions. Often these conditions ensure at least local uniqueness of the true solution. If not, we have to require such a property for the same reason as previously.
Theorem 3.5. Let R_α be a regularization method for the solution of (2) in the deterministic setting such that, under a set of assumptions on the operator F and the solution x† and a suitable choice of the regularization parameter,

d_X(x†, R_α(y^δ)) ≤ ϕ(δ)

with a monotonically increasing, right-continuous function ϕ.
Let now y^η = y + ε(η), where ε(η) is a stochastic error such that a) ρ_K(y, y^η) → 0 or b) E(d_Y(y, y^η)) → 0 as η → 0. Then, assuming all necessary assumptions for the deterministic theory (except the bound on the noise) hold with probability one and that there is (either by the deterministic conditions or by additional assumption) a (locally) unique solution x† to (2), the regularization method R_α fulfills

ρ_K(x†, R_α(y^η)) ≤ max{ϕ(ρ_K(y, y^η)), ρ_K(y, y^η)}

in case a) or, respectively,

ρ_K(x†, R_α(y^η)) ≤ max{ϕ(τ(η)E(d_Y(y, y^η))), 1/τ(η)}

in case b), under the same parameter choice rule as in the deterministic setting with δ replaced by ρ_K(y, y^η) (case a)) or τ(η)E(d_Y(y, y^η)), where τ(η) fulfills (19) (case b)).
Proof. In case a), by definition of the Ky Fan metric we have d_Y(y, y^η) ≤ ρ_K(y, y^η) with probability at least 1 − ρ_K(y, y^η), so we are in the deterministic setting with δ = ρ_K(y, y^η). If the expectation is used as measure for the data error, we have

P(d_Y(y, y^η) > τ(η)E(d_Y(y, y^η))) ≤ 1/τ(η)

by Markov's inequality (14). Hence, with probability 1 − 1/τ(η) we are in the deterministic setting with δ = τ(η)E(d_Y(y, y^η)) and d_X(x†, R_α(y^η)) ≤ ϕ(τ(η)E(d_Y(y, y^η))). The convergence rate follows by the definition of the Ky Fan metric.
Hence in this case the convergence rates are preserved in the Ky Fan metric. For the expectation this is not the case: we have to gradually inflate the expectation by the parameter τ in order to obtain convergence (and rates). Let us discuss the simple example of Gaussian noise in the finite dimensional setting, i.e., ε from (1) is distributed as ε ∼ N(0, η²I_m), consisting of m ∈ N i.i.d. components with zero mean and variance η². Then it has been shown in [15] that for any τ > 1

P(||ε||₂ > τ E(||ε||₂)) = Γ(m/2, τ² (Γ((m+1)/2)/Γ(m/2))²) / Γ(m/2), (20)

with the gamma functions Γ(·) and Γ(·, ·) defined as

Γ(a) = ∫₀^∞ t^{a−1} e^{−t} dt,  Γ(a, x) = ∫_x^∞ t^{a−1} e^{−t} dt.

In particular, (20) is independent of the variance η². In order to decrease this probability to zero, we therefore have to link τ with the variance. For Gaussian noise of the above kind the following estimate for the Ky Fan distance between true and noisy data has been given in [32].
Proposition 3.6. Let ξ be a random variable with values in R^m. Assume that the distribution of ξ is N(0, η²I_m) with η > 0. Then it holds in (R^m, ||·||₂) that

ρ_K(ξ, 0) ≤ min{1, η(√m + √(2 ln(1/η)))}. (21)

Recall that

E(||ξ||₂) = η √2 Γ((m+1)/2) / Γ(m/2), (22)

see e.g. [15]. Comparing (21) and (22), one sees that E(||ξ||₂) < ρ_K(ξ, 0) and in particular that the decay of ρ_K(ξ, 0) slows down with decreasing η. In other words, the artificial inflation we had to impose on the expectation is automatically included in the Ky Fan distance, which we suppose is the reason why the convergence theory carries over in such a direct fashion for the Ky Fan metric.

For many nonlinear Inverse Problems the requirement of a unique solution is too strong. Often one has several solutions of the same quality; in particular, there may exist more than one minimum norm solution. In this case, Theorem 3.3 is not applicable. In the examples [20, 21, Example 4.3 and 4.5] with two minimum norm solutions the noise was constructed such that, while the error in the data converges to zero, for each fixed ω ∈ Ω the regularized solutions jump between both solutions such that no converging subsequence can be found. The main problem there is that the Ky Fan distance cannot incorporate the concept that all minimum norm solutions are equally acceptable. We will now define a pseudometric that resolves this issue.
Definition 3.1. Let (X, d_X) be a metric space and denote by L the set of minimum-norm solutions to (2). Then

ρ_K^L(x) := inf{ε > 0 : P(inf_{x̄∈L} d_X(x(ω), x̄) > ε) ≤ ε} (23)

measures the distance between a random variable x with values in X and the set L; in particular, ρ_K^L(x) = 0 if and only if x ∈ L almost surely. With this, one can define a pseudometric on the space of X-valued random variables over (Ω, F, P) via

ρ_K^L(x_1, x_2) := inf{ε > 0 : P(min{d_X(x_1, x_2), inf_{x̄∈L} d_X(x_1, x̄) + inf_{x̄∈L} d_X(x_2, x̄)} > ε) ≤ ε}, (24)

i.e., by applying the Ky Fan construction (12) to the distance in which the set L is collapsed to a single point. Obviously (24) is nonnegative, symmetric and fulfills the triangle inequality. However, ρ_K^L(x_1, x_2) = 0 does not imply x_1 = x_2 a.e.; instead, almost everywhere either x_1 = x_2 or both x_1 and x_2 belong to L. This fixes the aforementioned issue of the Ky Fan metric and allows the following theorems.
Theorem 3.7. Let R_α be a regularization method for the solution of (2) in the deterministic setting under a suitable choice of the regularization parameter. Let now y^η = y + ε(η), where ε(η) is a stochastic error such that a) ρ_K(y, y^η) → 0 or b) E(d_Y(y, y^η)) → 0 as η → 0. Then, assuming all necessary assumptions for the deterministic theory (except the bound on the noise) hold with probability one, the regularization method R_α fulfills

lim_{η→0} ρ_K^L(R_α(y^η)) = 0

under the same parameter choice rule as in the deterministic setting with δ replaced by ρ_K(y, y^η) (case a)) or τ(η)E(d_Y(y, y^η)), where τ(η) fulfills (19) (case b)). In particular, the sequence of regularized solutions fulfills

lim_{η_1,η_2→0} ρ_K^L(R_α(y^{η_1}), R_α(y^{η_2})) = 0.

Proof. The proof follows the lines of the one of Theorem 3.3 with ρ_K(·, x†) replaced by ρ_K^L(·). Also Lemma 3.1 is easily adjusted to incorporate multiple solutions.
So far we assumed that only the noise is stochastic, whereas the operator F and the unknown x were assumed to be deterministic. In [20,21], general stochastic Inverse Problems F(x(ω), ω) = y(ω) were considered. It was shown how deterministic conditions such as source conditions can be incorporated into the stochastic setting by assuming that the deterministic conditions hold with a certain probability. However, additional conditions may arise in the lifting in order to ensure the deterministic requirements up to a certain probability. Since this is most easily seen in an example, we move the discussion of the complete stochastic formulation to the next section. Although we will address only one particular example, the technique can be applied to general approaches.

Fully stochastic Inverse Problems
Due to the possible multiplicity of stochastic conditions which might appear in this context, it does not seem possible to develop a lifting strategy in the same generality as in the previous section. We will therefore consider two classical examples, namely nonlinear Tikhonov regularization and Landweber's method for nonlinear Inverse Problems. The theory is taken completely from [20,21].

Nonlinear Tikhonov Regularization
We seek the solution of a nonlinear ill-posed problem (2) via the variational problem

x^δ_α = argmin_x ||F(x) − y^δ||² + α||x − x_*||²

with a reference point x_* ∈ X and given noisy data y^η according to (1), where the stochastic distribution of the noise is assumed to be known. We shall skip the general convergence theorem (which follows as in the previous section) and move directly to convergence rates. In the deterministic theory, i.e., when y^δ is the noisy data with ||y − y^δ|| ≤ δ, we have the following theorem from [12].
Theorem 3.8. Let D(F) be convex, y^δ ∈ Y such that ||y − y^δ|| ≤ δ, and let x† denote the x_*-minimum norm solution of (2). Furthermore let the following conditions hold:
a) F is continuous and weakly sequentially closed;
b) F is Fréchet differentiable with ||F′(x†) − F′(x)|| ≤ γ||x† − x|| for all x ∈ D(F);
c) there exists v ∈ Y with x† − x_* = F′(x†)*v and γ||v|| < 1.
Then for the choice α = cδ with some fixed c > 0 we obtain

||x^δ_α − x†|| = O(√δ).

As given in Theorem 4.6 of [20], the following stochastic formulation of Theorem 3.8 holds.
Theorem 3.9. Let D(F) be convex, let y^η be such that 0 ≤ ρ_K(y, y^η) < ∞, and let x†(ω) denote the x_*-minimum norm solution of (2) for almost all ω. Furthermore let the following conditions hold:
a) F is continuous and weakly sequentially closed;
b) for almost all ω, F is Fréchet differentiable with ||F′(x†(ω)) − F′(x)|| ≤ γ(ω)||x†(ω) − x|| for all x ∈ D(F);
c) for almost all ω there exists v(ω) ∈ Y with x†(ω) − x_* = F′(x†(ω))*v(ω) and γ(ω)||v(ω)|| < 1;
d) P(ω ∈ Ω : γ(ω)||v(ω)|| > ξ) ≤ ϕ_cl(ξ) (closeness condition);
e) P(ω ∈ Ω : ||v(ω)|| > τ) ≤ ϕ_de(τ) (decay condition).
Then for the choice α ∼ ρ_K(y, y^η) we obtain

ρ_K(x^η_α, x†) ≤ inf_{ξ<1, 0<τ<∞} max{ C_{ξ,τ} √(ρ_K(y, y^η)), ρ_K(y, y^η) + ϕ_cl(ξ) + ϕ_de(τ) }, (26)

where the constant C_{ξ,τ} depends only on ξ and τ.

Proof. We have ||y − y^η|| ≤ ρ_K(y, y^η) with probability at least 1 − ρ_K(y, y^η). Now fix ξ < 1 and 0 < τ < ∞. Then with probability at least 1 − (ϕ_cl(ξ) + ϕ_de(τ)) conditions d) and e) yield γ(ω)||v(ω)|| ≤ ξ and ||v(ω)|| ≤ τ. Thus for the corresponding values of ω we can apply Theorem 3.8 and obtain

||x^η_α(ω) − x†(ω)|| ≤ C_{ξ,τ} √(ρ_K(y, y^η)).

This estimate holds on a set with probability greater or equal 1 − (ρ_K(y, y^η) + ϕ_cl(ξ) + ϕ_de(τ)). The Ky Fan distance can therefore be bounded as

ρ_K(x^η_α, x†) ≤ max{ C_{ξ,τ} √(ρ_K(y, y^η)), ρ_K(y, y^η) + ϕ_cl(ξ) + ϕ_de(τ) }.

This estimate is valid for arbitrary admissible choices of ξ and τ; hence we may bound the Ky Fan distance of x† and x^η_α by taking the infimum with respect to ξ and τ.
The core principle of the lifting strategy is to ensure that there exists a subset Ω̃ ⊂ Ω such that all deterministic assumptions hold with probability one on Ω̃. This may lead to the introduction of new conditions such as the decay condition in Theorem 3.9. Namely, since γ(ω) and ||v(ω)|| may vary with ω, it may be possible that for a sequence {ω_k}_{k∈N} one has γ(ω_k) → 0 and ||v(ω_k)|| → ∞ while still γ(ω_k)||v(ω_k)|| < 1 for all k ∈ N. In this case the parameter τ cannot be treated as a constant in the convergence rate, but influences it to a significant degree. The decay condition had to be imposed in order to control the growth of τ. It is, however, possible to avoid condition e) by imposing other conditions. For example, one could require that γ(ω) is bounded from below by some 0 < c < 1; in this case condition d) implies e). A more detailed discussion is given in [20]. Accordingly, in order to lift other deterministic convergence rate results into the fully stochastic setting, a careful examination of the conditions necessary for convergence in the stochastic setting, including their cross-connections and dependencies, is important. However, once the conditions have been translated to the stochastic setting, convergence rates follow immediately using the Ky Fan metric. We close this example by showing how particular choices of the stochastic parameters in Theorem 3.9 influence the convergence rate. To this end, we cite Remark 4.8 of [20].
First consider the case that ||v|| ∼ U[0,1], i.e., it is uniformly distributed on the interval [0, 1]. We then have ϕ_cl(ξ) = 1 − ξ as well as ϕ_de(τ) = 0 for τ > 1, and Theorem 3.9 yields a convergence rate by balancing ξ against ρ_K(y, y^η) in (26). For the second case suppose that ϕ_de(τ) = cτ^{−e} for some exponent e > 0. Since now we do not have ϕ_cl(ξ) → 0, but ϕ_cl ≥ c > 0, the right-hand side of (26) does not converge to zero and we do not obtain a convergence rate anymore. However, convergence itself still follows from Theorem 3.3. Finally, consider the case when both d) and e) from Theorem 3.9 influence the convergence behavior because F is stochastic with varying γ(ω). If, for instance, for some uniformly distributed ω ∼ U[0,1] we have x†(ω) = ω x† and γ(ω) = 1 − ω, then ϕ_cl(ξ) = 1 − ξ and ϕ_de(τ) = c/(1 + τ) are compatible realizations of ϕ_cl(·) and ϕ_de(·). With this, one can show a convergence rate under the parameter choice α ∼ ρ_K(y, y^η)^{5/4}. From the given examples it is evident that the convergence speed is heavily influenced by conditions d) and e) in Theorem 3.9. Therefore, although the general formula for the convergence rate (26) may suggest that the convergence rate is close to the deterministic one, it may be significantly slower due to the additional stochastic properties.

Nonlinear Landweber iteration
As before we seek the solution of a nonlinear ill-posed problem (2) given noisy data y^η according to (1), where the stochastic distribution of the noise is assumed to be known. Landweber's method can be seen as a descent algorithm for ||F(x) − y^δ||² and is defined via the iteration x^δ_{k+1} = x^δ_k + γ F'(x^δ_k)^* (y^δ − F(x^δ_k)), where γ > 0 is an appropriately chosen stepsize and x^δ_0 an initial guess. Landweber's method constitutes a regularization method if it is stopped early enough [12]. In the deterministic theory, i.e., when y^δ is the noisy data with ||y − y^δ|| ≤ δ, we have the following theorem from [12] for convergence rates of the Landweber method.
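For a linear operator the iteration and the discrepancy-principle stopping can be sketched as follows (a minimal illustration with our own example data, not the code behind any experiment in this paper; in the nonlinear case A^T is replaced by the adjoint F'(x_k)^* of the Fréchet derivative):

```python
import numpy as np

def landweber(A, y_delta, delta, tau=2.0, gamma=None, max_iter=10000):
    """Landweber iteration x_{k+1} = x_k + gamma * A^T (y_delta - A x_k),
    stopped by the discrepancy principle ||A x_k - y_delta|| <= tau * delta."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # stepsize with gamma*||A||^2 <= 1
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = A @ x - y_delta
        if np.linalg.norm(residual) <= tau * delta:   # discrepancy principle
            break
        x = x - gamma * (A.T @ residual)
    return x, k

# Hypothetical ill-conditioned example with noise level delta
rng = np.random.default_rng(1)
A = np.diag([1.0, 0.5, 0.1, 0.01])
x_true = np.ones(4)
noise = rng.normal(0.0, 0.005, 4)
y_delta = A @ x_true + noise
x_rec, k_stop = landweber(A, y_delta, delta=float(np.linalg.norm(noise)))
```

Stopping early acts as the regularization: components belonging to small singular values are reconstructed only partially, which keeps the noise amplification bounded.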
Let ||v|| be sufficiently small. Then, if the iteration is stopped according to the discrepancy principle, i.e., at the unique index k* at which the residual ||y^δ − F(x^δ_k)|| falls below τδ for the first time, the stated convergence rate holds. We can obtain a stochastic version of Theorem 3.10 in the same way and with the same techniques used to show that Theorem 3.9 follows from Theorem 3.8.
Theorem 3.11. Let D(F) be convex, let y^η ∈ Y be given with known ρ_K(y, y^η), and let x†(ω) denote the x*-minimum-norm solution of (2). Assume (2) has a solution in B_ϑ(x*(ω)) for almost all ω. Furthermore, let the following conditions hold on B_{2ϑ}(x*).
where for almost all ω the set {R_{x,ω} : x ∈ B_ϑ(x*)} describes a family of bounded linear operators, and the source condition b) holds for some v(ω) ∈ Y and 0 < ν ≤ 1/2. c) P({ω ∈ Ω : C(ω)||v(ω)|| > c}) < ϕ_cl(c). Then, if the iteration is stopped according to the discrepancy principle, i.e., at the unique index k* at which the residual falls below the threshold for the first time with τ̃ > 2, we obtain for c_0 > 0 sufficiently small a rate in which the constant c̃ depends on ν only.
In the fully stochastic setting, the source condition b) from Theorem 3.11 need not hold with a constant exponent ν for all ω ∈ Ω. There are at least two situations in which the power ν becomes a stochastic quantity as well, i.e., the condition holds with 0 < ν(ω) ≤ 1/2. In the first case, all solutions x†(ω) come from some initial element v(ω) = v ∈ Y with small Y-norm. Some randomly smoothing operator acts on this element and generates x†(ω). (One could, for instance, think of some kind of evolution process, e.g., a diffusion process applied to some initial value v.) The smoothness of x†(ω) is therefore random.
Secondly, x† may be a deterministic element satisfying a certain smoothness condition, while the data y(ω) is generated by applying a forward operator F(·, ω) with random smoothing properties. If the realization of F(·, ω) is strongly smoothing, this corresponds to a source condition with small ν(ω); if F(·, ω) is weakly smoothing, we have the source condition with larger ν(ω).
Proof. As can be seen from the proof of Theorem 3.1 in [19], the requirement "||v|| sufficiently small" becomes stronger the larger ν is. Supposing that ||v|| in (29) is sufficiently small for the case ν = 1/2 therefore implies that the convergence conditions for ν ≤ 1/2 are satisfied as well. Secondly, we observe that the convergence rate in Theorem 3.11 contains a constant c̃ that depends on ν. Although it is difficult to state an explicit formula for c̃, an investigation of [19] shows that c̃(ν) attains its maximum value at ν = 1/2. After these observations we begin the actual derivation of the convergence rate. For the sake of simplicity we assume that all appearing constants are equal to 1. Furthermore, we may assume that ϕ_cl(·) and ϕ_de(·) both vanish. Asymptotically, for given ω, we therefore obtain the corresponding estimate. Measuring the distance in the Ky Fan metric we must, since we assumed that ν(ω) is as in (29), solve the resulting equation for ν. We first consider the simplified equation, which is solved by ν̄(ρ_K(y, y^δ)) = W(−log ρ_K(y, y^δ)) / (−2 log ρ_K(y, y^δ)).
In the following we show that this approximate solution is sufficiently accurate. To this end we construct a refined estimate via the ansatz ν(ρ_K(y, y^δ)) = ν̄(ρ_K(y, y^δ))(1 + ε(ν̄(ρ_K(y, y^δ)))). The original equation then contains the term 2ν̄ + 3ν̄ε + 1. Neglecting the quadratic part, we can replace this term with 2ν̄ + 1 and obtain an equation that MATHEMATICA can solve for ε(ν̄). The resulting correction term tends to zero approximately linearly in ν̄. Thus the correction becomes small rather quickly, and we can consider the asymptotic bound in (30) as sufficiently accurate due to the asymptotics of the Lambert W-function.
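Independently of the elided intermediate equation, the stated closed form satisfies the identity 2ν̄ = ρ_K^{2ν̄}, which follows directly from the defining property W(t)e^{W(t)} = t of the Lambert W-function with t = −log ρ_K. This can be checked numerically (our own sketch):

```python
import numpy as np
from scipy.special import lambertw

def nu_bar(rho):
    """nu_bar(rho) = W(-log rho) / (-2 log rho), W = principal Lambert W branch."""
    t = -np.log(rho)                       # t > 0 for 0 < rho < 1
    return float(np.real(lambertw(t))) / (2.0 * t)

# Check 2*nu = rho**(2*nu): substituting nu_bar and using W(t)*exp(W(t)) = t
# makes both sides equal to W(t)/t.
for rho in (1e-2, 1e-4, 1e-8):
    nu = nu_bar(rho)
    assert abs(2.0 * nu - rho ** (2.0 * nu)) < 1e-10
```

As ρ_K → 0 we have t → ∞ and ν̄ ~ log(t)/(2t) → 0 slowly, consistent with the Lambert-W asymptotics invoked above.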

Examples
We will now apply the theory developed in the previous section to selected deterministic regularization methods.

Filter-based regularization methods
Let A be a linear compact operator between Hilbert spaces X and Y with singular system {σ_n, u_n, v_n}_{n∈N}, see e.g. [12]. Then, for y ∈ D(A†), the generalized inverse A† of A is given by A†y = Σ_{n∈N} σ_n^{−1} ⟨y, v_n⟩ u_n. Since for compact operators the singular values tend to zero, their inverses blow up, and the generalized inverse yields a meaningless solution to (2) for noisy data. A popular class of regularization methods is therefore based on filtering the generalized inverse. Introducing an appropriate filter function F_α(σ), depending on the regularization parameter α, that controls the growth of σ^{−1}, the regularized solutions are defined by x_α = Σ_{n∈N} F_α(σ_n) σ_n^{−1} ⟨y, v_n⟩ u_n. Examples of filter-based methods are the classical Tikhonov regularization, the truncated singular value decomposition, and Landweber's method [30, 12]. The regularization properties are fully determined by the filter functions.
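In finite dimensions the filtered solutions can be computed directly from the SVD. The following sketch (our own illustration; the filter formulas are the standard ones for Tikhonov regularization and truncated SVD) makes the role of F_α explicit:

```python
import numpy as np

def filtered_inverse(A, y, alpha, filt):
    """x_alpha = sum_n F_alpha(sigma_n)/sigma_n * <y, v_n> u_n, computed via
    the numpy SVD A = U diag(s) V^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = filt(s, alpha) / s * (U.T @ y)   # filtered, rescaled data coefficients
    return Vt.T @ coeff

# Standard filters: Tikhonov damps small singular values, TSVD cuts them off.
tikhonov = lambda s, a: s**2 / (s**2 + a)
tsvd = lambda s, a: (s >= a).astype(float)

A = np.diag([1.0, 0.1, 0.01])
y = A @ np.ones(3) + np.array([1e-3, -1e-3, 1e-3])   # noisy data
x_tik = filtered_inverse(A, y, alpha=1e-3, filt=tikhonov)
x_tsvd = filtered_inverse(A, y, alpha=0.05, filt=tsvd)
```

Without the filter, the noise in the third data component would be amplified by 1/σ_3 = 100; both filters suppress this amplification at the price of a bias in the corresponding component.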
In the deterministic setting, the conditions on the filter that guarantee convergence can be found in, e.g., [30, Theorem 3.3.3]. Convergence rates can be obtained for a priori and a posteriori parameter choice rules under stricter conditions on the filter functions. We only comment on an a priori choice here in order to keep this section short; an example of the discrepancy principle as an a posteriori parameter choice is given in the next section in a different context. Using the smoothness condition, the following theorem can be obtained. Assume that ||x†||_ν is bounded and that for 0 ≤ ν ≤ ν* the filter conditions hold, where β > 0 and c, c_{ν*} are constants independent of δ. Then, with the a priori parameter choice, the method induced by the filter F_α is order optimal for all 0 ≤ ν ≤ ν*, i.e., the optimal-order rate holds for some constant c independent of δ. Now we use Theorem 3.5 and obtain convergence rates in the Ky Fan metric.
Theorem 4.2. Let y ∈ range(A) and let ρ_K(y, y^η) be known. Assume that ||x†||_ν is bounded and that, for 0 ≤ ν ≤ ν*, conditions (35) and (36) hold. Then, with the a priori parameter choice, the method induced by the filter F_α fulfills the corresponding rate for some constant c independent of the noise level.
More about filter methods in the stochastic setting including numerical examples can be found in [15].

Sparsity-regularization for an autoconvolution problem
We consider an autoconvolution equation between the Hilbert spaces X = L²[0, 1] and Y = L²[0, 1], where x ∈ D(F). Such an equation is of great interest in, for example, stochastics or spectroscopy, and has been analyzed in detail in [18]. Recently, a more complicated autoconvolution problem has emerged from a novel method to characterize ultra-short laser pulses [16, 4]. Here, we want to show the transition from the deterministic setting to the stochastic setting in a numerical example. We base our results on the deterministic paper [1]. Using the Haar-wavelet basis, the authors of [1] reformulate (39) as an equation from ℓ² to ℓ² by switching to the space of coefficients in the Haar basis. In order to stabilize the inversion, an ℓ¹ penalty term is used, such that the task is to minimize the functional (40). The regularization parameter α in (40) is chosen according to the discrepancy principle. In [1], the following formulation is used: for 1 < τ₁ ≤ τ₂ choose α = α(δ, y^δ) such that the discrepancy inequality holds. The authors show that this leads to convergence of the regularized solutions to a solution of (39) with minimal ℓ¹-norm of its coefficients. It was also shown that the regularization parameter fulfills the decay condition (42). By courtesy of Stephan Anzengruber, we were allowed to use the original code for the numerical simulation in [1]. We only changed the parts directly connected to the data noise. Namely, we replaced the deterministic error ||y − y^δ||₂ ≤ δ with i.i.d. Gaussian noise, y^η = y + ε, ε ∼ N(0, η²I). The discretization results from truncating the expansion of the functions in the Haar basis after m elements. The parameter choice (41) was realized with δ replaced by τ(η)E(||ε||₂) in accordance with Theorem 3.3. Instead of the correct expectation (see [15]) we used an upper bound for it since, as shown in this chapter, the expectation has to be "blown up" anyway. In a first experiment we let τ(η) = 1.3 = const.
In this case, the numerical results suggest that the regularization parameter decreases too fast, i.e., (τ(η)E(||ε||₂))²/α does not converge to zero as the requirement in (42) states; see Figure 1. For comparison, in a second run we chose τ(η) = 1 − log(η² 2πm² (e/2)^m), where m is the number of data points. This way, τ(η)E(||ε||₂) ∝ ρ_K(y, y^η). Now (τ(η)E(||ε||₂))²/α converges to zero, as required by (42); see Figure 1. At this point we would like to mention that the discrepancy principle using the Ky Fan distance and the deterministic one are not completely equivalent, since a different way of measuring the noise is used. Typically, the stochastic noise level will be smaller (it need not bound 100% of the possible realizations) and the iteration will be stopped later than in the deterministic setup.
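A simple way to see why replacing the exact expectation by an upper bound is harmless here (our own sanity check, independent of the original code): by Jensen's inequality, E||ε||₂ ≤ (E||ε||₂²)^{1/2} = η√m for ε ∼ N(0, η²I_m), and this bound is nearly sharp for large m:

```python
import numpy as np

# E||eps||_2 <= sqrt(E||eps||_2^2) = eta*sqrt(m) by Jensen's inequality.
# The exact value is eta times the mean of a chi-distributed variable with
# m degrees of freedom, which approaches the bound as m grows.
rng = np.random.default_rng(2)
eta, m = 0.05, 256
eps = rng.normal(0.0, eta, size=(20000, m))
empirical = np.linalg.norm(eps, axis=1).mean()
bound = eta * np.sqrt(m)
```

For m = 256 the two values differ by well under one percent, so the slack introduced by the bound is immaterial compared to the blow-up factor τ(η).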

Linear Inverse Problems with Besov-space prior
In [17] the lifting strategy was used in a slightly different way; in particular, the Ky Fan metric was used to obtain a novel parameter choice rule. The convergence rates obtained there, however, can also be viewed in the framework of this work. The scope of that paper was to transfer the deterministic convergence results from [9] into the stochastic setting. The seminal paper [9] initiated the investigation of sparsity-promoting regularization for Inverse Problems. Looking for the solution of the linear ill-posed problem between Hilbert spaces X and Y with given noisy data y^δ = y + ε, the regularization strategy was to obtain an approximation x^δ_α to x† via (44), where Λ is an appropriate index set, w_λ > 0 for all λ ∈ Λ, {ψ_λ}_{λ∈Λ} is a dictionary (typically an orthonormal basis or frame) in X, and 1 ≤ p ≤ 2. Choosing a sufficiently smooth wavelet basis for {ψ_λ}_{λ∈Λ} and setting w_λ = 2^{ζ|λ|p} with ζ = s + d(1/2 − 1/p) > 0, the penalty term in (44) corresponds to a norm in the Besov space B^s_{p,p}(R^d). Formulating the problem of determining x from noisy data y^η = y + ε, ε ∼ N(0, η²I_m), in the Bayesian setting with the distributions π(y^η|x) ∝ exp(−(1/(2η²))||Ax − y^η||²₂) and π_pr(x) ∝ exp(−(α̃/2)||x||^p_{B^s_{p,p}(R^d)}), and using the maximum a posteriori solution, leads to the corresponding variational formulation, where η² is the variance of the noise and α̃ can roughly be described as the inverse variance of the prior. The product α := α̃η² gives the actual regularization parameter. In direct application of Theorem 3.3, the deterministic condition α → 0, δ²/α → 0 as δ → 0, with δ replaced by ρ_K(y, y^η) from (21), translates to the conditions α̃η² → 0, log(η)/α̃ → 0 as η → 0, leading to convergence of x_MAP to the unique (in the case p = 1 the operator is assumed to be injective) solution x† of minimal norm ||·||_{B^s_{p,p}(R^d)} in the Ky Fan metric. The proof of convergence rates is based on two assumptions, in which β, C_l, C_u > 0 and ||x†||_{B^s_{p,p}(R^d)} ≤ ρ for some ρ > 0.
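For p = 1 the minimization of (44) is typically carried out by the iterative soft-thresholding algorithm introduced in [9]. A minimal sketch in the coefficient domain (our own illustration; A denotes the discretized operator acting on the coefficient vector x):

```python
import numpy as np

def ista(A, y, weights, n_iter=500):
    """Iterative soft thresholding for min_x ||A x - y||_2^2 + sum_l w_l |x_l|.
    Stepsize 1/(2L) with L = ||A||^2, matching the Lipschitz constant of the
    gradient of the quadratic term."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y)                 # gradient of ||Ax - y||^2
        z = x - grad / (2.0 * L)                       # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - weights / (2.0 * L), 0.0)  # shrink
    return x

# With A = I the minimizer is the coordinatewise soft-thresholded data:
x = ista(np.eye(3), np.array([1.0, 0.2, 0.05]), weights=0.2)
print(x)   # -> approximately [0.9, 0.1, 0.0]
```

Coefficients whose gradient-updated value stays below the threshold are driven to zero, which is exactly the sparsity-promoting effect of the ℓ¹ penalty.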
Combining Propositions 4.5, 4.6 and 4.7 from [9] yields the estimate (46). Translated into the stochastic setting, the right-hand side of (46) reads CE(η, m, α̃).
Since α̃ is a free parameter, we can balance the terms in (50). We can also apply the theory developed in this work to this problem. In the deterministic setting, see [9], it was proposed to choose the regularization parameter α = δ²/ρ^p. Combining [9, Proposition 4.5] and [9, Proposition 4.7] then yields the rate with exponent β/(ς + β), with C_l from (47) and some C > 0. Theorem 3.5 then yields, in the stochastic setting, the parameter choice α ∼ ρ_K(y^η, y)²/ρ^p and the rate (52). In the notation of (48), for Gaussian noise ε ∼ N(0, η²I_m) it holds that ρ_K(y^η, y) ≤ ηm − L_m(η); see Proposition 3.6. Since ρ_K(y^η, y) < E(η, m, α̃), compare (48) and (53), the convergence rate in (51) is slightly slower than the one in (52), but they share the same order of convergence.

Conclusions
Our goal was to demonstrate how convergence results for Inverse Problems in the deterministic setting can be carried over into the stochastic setting. Using the Ky Fan metric, we have shown that, when only the noise is assumed to be stochastic whereas the other quantities are deterministic, this is possible in a straightforward way. Namely, assuming knowledge of an estimate of ρ_K(y, y^η), the convergence results and parameter choices follow from the deterministic setting by replacing δ, which originates from the basic deterministic assumption ||y − y^δ|| ≤ δ, with ρ_K(y, y^η). We have shown that, under some slight modifications, it is possible to use the expectation as a measure of the magnitude of the noise. In a fully stochastic situation, where in addition to the noise other quantities may be of stochastic nature, the lifting of deterministic convergence results is possible as well. However, careful analysis is necessary in order to carry the deterministic conditions over into the stochastic setting.