Robust and stable region-of-interest tomographic reconstruction using a robust width prior

Region-of-interest computed tomography (ROI CT) aims at reconstructing a region within the field of view by using only ROI-focused projections. The solution of this inverse problem is challenging and methods of tomographic reconstruction that are designed to work with full projection data may perform poorly or fail when applied to this setting. In this work, we study the ROI CT problem in the presence of measurement noise and formulate the reconstruction problem by relaxing data fidelity and consistency requirements. Under the assumption of a robust width prior that provides a form of stability for data satisfying appropriate sparsity-inducing norms, we derive reconstruction performance guarantees and controllable error bounds. Based on this theoretical setting, we introduce a novel iterative reconstruction algorithm from ROI-focused projection data that is guaranteed to converge with controllable error while satisfying predetermined fidelity and consistency tolerances. Numerical tests on experimental data show that our algorithm for ROI CT is competitive with state-of-the-art methods especially when the ROI radius is small.

1. Introduction. Computed tomography (CT) is a non-invasive scanning method that is widely employed in medical and industrial imaging to reconstruct the unknown interior structure of an object from a collection of projection images. In many applications of CT, one is interested in recovering only a small region-of-interest (ROI) within the field of view at a high resolution level. Such applications include contrast-enhanced cardiac imaging and surgical implant procedures where it is necessary to ensure the accurate positioning of an implant. The ability to perform accurate ROI reconstruction using only ROI-focused scanning offers several potential advantages, including a reduction of the radiation dose, a shorter scanning time and the possibility of imaging large objects. However, when projections are truncated, as is the case for ROI-focused scanning, the reconstruction problem is ill-posed [23] and conventional reconstruction algorithms, e.g., Filtered Back-Projection, may perform poorly or fail.

Figure 1. Illustrations of recoverable regions for truncated projection data: (a) initial DBP methods [25,7,35] require at least one projection view in which the complete object is covered, (b) interior reconstruction is possible given a known subregion [20,34,30,17] and (c) no assumptions are made other than that the ROI is convex and approximately sparse in a ridgelet domain (this paper). The gray dashed line indicates the measured area on the detector array for one particular source angle.

Further, it is known that the interior problem, where projections are known only for rays intersecting a region strictly inside the field of view, is in general not uniquely solvable [23]. Existing methods for local CT reconstruction typically require restrictions on the geometry and location of the ROI or some prior knowledge of the solution inside the ROI. For instance, analytic ROI reconstruction formulas associated with the differentiated back-projection (DBP) framework [6,25,20] require that there exists a projection angle θ such that, for angles in its vicinity, complete (i.e., non-truncated) projection data is available [25,7,35]; hence such formulas may fail if the ROI is located strictly inside the scanned object (see the illustration in Fig. 1(a)). Other results show that restrictions on the ROI location can be removed provided that the density function to be recovered is known on a subregion inside the ROI (Fig. 1(b)) or has a special form, e.g., it is piecewise constant inside the ROI [20,34,30,17]. However, even when ROI reconstruction is theoretically guaranteed, stable numerical recovery often requires regularization, e.g., ℓ1-norm minimization of the gradient image [20,30] or a singular value decomposition of the truncated Hilbert transform [17,33]. We also recall that methods from deep learning have recently been proposed for problems of tomographic reconstruction from incomplete data, especially for the limited-view case [1,13,29,14]. In these approaches, a neural network is trained to extrapolate the missing projection data. Despite yielding high-quality visual results, these methods lack a theoretical framework giving performance guarantees on the reconstruction (e.g., in terms of noise robustness and accuracy).
One major aim of this paper is to derive reconstruction performance guarantees for ROI CT in the setting of noisy projection data. To this end, we introduce a novel theoretical approach based on a robust width prior assumption [2,3], a method guaranteeing a form of geometrical stability for data satisfying an appropriate sparsity condition. Using this framework, we can establish error bounds for reconstruction from noisy data in both the image and projection spaces. A novelty of our approach is that the image and projection data are handled jointly in the recovery, with sparsity priors in both domains. Moreover, as explained below, fidelity and consistency requirements are relaxed to handle the presence of noise leading to an extrapolation scheme for the missing projection data that is guided by the data fidelity and consistency terms. Our implementation of the recovery is then achieved by an iterative, Bregman-type convex optimization algorithm consistent with our theoretical setting.
Iterative algorithms for CT reconstruction found in the literature typically focus on the minimization of a fidelity norm measuring the distance between observed and reconstructed projections within the ROI (i.e., not taking the image domain reconstruction error into account); see, for example, the simultaneous iterative reconstruction technique (SIRT) [15], the maximum likelihood expectation-maximization algorithm (MLEM) [27], the least-squares conjugate gradient (LSCG) method [16,18], and maximum likelihood for transmission tomography (MLTR) [32]. We observed that the performance of these methods on ROI CT reconstruction, in the presence of measurement noise, becomes increasingly less reliable as the ROI size decreases. To overcome this issue, our method relaxes the consistency requirement of the reconstruction algorithm, since measured data may fall outside the range of the forward projection due to the noise. This added flexibility is especially advantageous when another prior, namely the sparsity of the solution, is included.
Sparsity assumptions have already been applied in the literature to reduce measured data and mitigate the effect of noise in the recovery from linear measurements [11,22]. However, most theoretical results are based on randomized measurements that are different in nature from the deterministic way the projection data is obtained in CT. Nevertheless, it is generally agreed that sparsity is a powerful prior in the context of tomography when conventional recovery methods lead to ill-posedness [5,8]. In this paper, we incorporate an assumption of approximate sparsity by minimizing the ℓ1 norm of the ridgelet coefficients of the reconstructed image (i.e., the 1D wavelet coefficients of the reconstructed projection data, where the wavelet transform is applied along the detector array) while retaining given tolerances for fidelity and consistency of the recovered data. One of our main results is a guarantee that our iterative algorithm reaches an approximate minimizer with the prescribed tolerances within a finite number of steps. To validate our method, we also demonstrate the application of our algorithm for ROI reconstruction from noisy projection data in a 2D fan-beam geometry. Our approach yields highly accurate reconstructions within the ROI and outperforms existing algorithms, especially for ROIs with a small radius.
The remainder of the paper is organized as follows. In Sec. 2, we formulate the ROI reconstruction problem and introduce a notion of data fidelity and consistency in the context of ROI CT. In Sec. 3, we recall the definition of robust width and prove that, under appropriate sparsity assumptions on the data, it is feasible to find an approximate solution of a noisy linear problem with controllable error. Based on this formulation, in Sec. 4 we introduce a convex optimization approach to solve the ROI CT problem from noisy truncated projections and show that we can control reconstruction error under predetermined fidelity and consistency tolerances. We finally present numerical demonstrations of our method in Sec. 5.
2. Data consistency and fidelity in the ROI reconstruction problem. In this section, we introduce the main notations used for ROI CT and show how the non-uniqueness of the reconstruction problem in the presence of noise leads to two requirements, i.e., data fidelity and data consistency requirements, that cannot necessarily be satisfied at the same time but can be relaxed.
Let W denote a projection operator mapping a density function f on R² into a set of its linear projections. A classical example of such an operator is the Radon transform, defined by

Wf(θ, τ) = ∫_{ℓ(θ,τ)} f(x) dx,

where ℓ(θ, τ) = {x ∈ R² : x · e_θ = τ} is the line perpendicular to e_θ = (cos θ, sin θ) ∈ S¹ with (signed) distance τ ∈ R from the origin. This transform maps f ∈ L²(R²) into the set of its line integrals defined on the tangent space of the circle, T = S¹ × R. Another classical example of a projection operator W is the fan-beam transform [24]. The goal of ROI tomography is to reconstruct a function from its projections within a subregion inside the field of view, while the rest of the image is ignored. That is, as shown in Fig. 2, let us denote the ROI by S ⊂ R² and define the subset of the tangent space associated with the rays that intersect the ROI S by

P(S) = {(θ, τ) ∈ T : ℓ(θ, τ) ∩ S ≠ ∅}.

Corresponding to the set P(S), we define the mask function M on T by M(θ, τ) = 1 if (θ, τ) ∈ P(S) and M(θ, τ) = 0 otherwise. We then formulate the ROI reconstruction problem as the problem of reconstructing f restricted to S from the truncated projection data

(2) y₀(θ, τ) = W̃f(θ, τ), for (θ, τ) ∈ P(S),

where W̃ is the composition of the mask function M and the projection operator W, i.e., W̃ = MW. For simplicity, we assume in the following that the ROI S ⊂ R² is a disk with center p_ROI ∈ R² and radius R_ROI > 0. In this case, it is easy to derive that P(S) = {(θ, τ) ∈ T : |τ − p_ROI · e_θ| < R_ROI}. The situation where S is a disk is natural in practical situations due to the circular trajectory of the x-ray source in many projection geometries.¹ The inversion of W is ill-posed in general and the ill-posedness may be more severe (i.e., the condition number of the problem is higher) when the projection data are incomplete, as in the case given by (2). It is known that the so-called interior problem, where Wf(θ, τ) is given only for |τ| ≤ a with a a positive constant, is not uniquely solvable in general [23].
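As an illustration of the disk-ROI membership test |τ − p_ROI · e_θ| < R_ROI, the following minimal sketch (not part of the original formulation; all function names are ours) evaluates the mask function M on the tangent space T for a disk ROI:

```python
import math

def in_roi_projection(theta, tau, p_roi, r_roi):
    """Check whether the line (theta, tau) belongs to P(S) for a disk ROI S.

    The line {x : x . e_theta = tau} intersects the disk with center p_roi
    and radius r_roi iff |tau - p_roi . e_theta| < r_roi.
    """
    e = (math.cos(theta), math.sin(theta))
    return abs(tau - (p_roi[0] * e[0] + p_roi[1] * e[1])) < r_roi

def mask(theta, tau, p_roi, r_roi):
    """The mask function M on T: 1 on P(S), 0 elsewhere."""
    return 1.0 if in_roi_projection(theta, tau, p_roi, r_roi) else 0.0

# ROI: disk of radius 2 centered at (3, 0).
p, r = (3.0, 0.0), 2.0
# At angle theta = 0, e_theta = (1, 0), so lines with |tau - 3| < 2 hit the ROI.
print(mask(0.0, 3.5, p, r))  # 1.0, since |3.5 - 3| < 2
print(mask(0.0, 6.0, p, r))  # 0.0, since |6.0 - 3| = 3 >= 2
```

Evaluating this indicator on a discrete (θ, τ) grid yields the binary sinogram mask used to truncate full projection data to P(S).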
A unique solution of the interior problem can be guaranteed if the density function f is assumed to be piecewise polynomial in the interior region [19]. However, this assumes the ideal case of a noiseless acquisition and leaves open the problem of stability in the presence of noise.
To address the problem of reconstructing f from (2) in a stable way, a natural choice is to look for a solution f̂ ∈ L²(R²) that minimizes the L² error (cf. [35])

f̂ = arg min_f ‖W̃f − y₀‖₂².

A general solution of this minimization problem is given by

f̂ = W̃⁺y₀ + (I − W̃⁺W̃)v,

where v ∈ L²(R²), I is the identity operator and (·)⁺ denotes the Moore-Penrose pseudoinverse. The solution f̂ is not unique unless N(W̃) = {0} (here N(·) denotes the null space), since only then W̃⁺W̃ = I. Often, an additional regularity assumption is made to ensure uniqueness by minimizing ‖f‖₂². This amounts to setting v = 0 and consequently f̂ = W̃⁺y₀.

¹ More general convex ROIs can be handled by calculating the minimal enclosing disk of the ROI and reconstructing the image inside this disk.
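The non-uniqueness of the least-squares solution can be seen on a toy operator small enough to write the pseudoinverse by hand. This sketch is ours (the names w, w_pinv, solution are illustrative, not from the paper): for W : R² → R with Wf = f₁ + f₂, the family f̂ = W⁺y + (I − W⁺W)v consists of exact solutions for every v, and v = 0 gives the minimum-norm one.

```python
def w(f):            # toy forward operator W : R^2 -> R, W f = f1 + f2
    return f[0] + f[1]

def w_pinv(y):       # Moore-Penrose pseudoinverse: W+ y = (y/2, y/2)
    return (y / 2.0, y / 2.0)

def solution(y, v):
    """General solution f = W+ y + (I - W+ W) v of the least-squares problem."""
    f0 = w_pinv(y)
    wv = w(v)
    null_part = (v[0] - wv / 2.0, v[1] - wv / 2.0)   # (I - W+ W) v lies in N(W)
    return (f0[0] + null_part[0], f0[1] + null_part[1])

y0 = 4.0
f_min = solution(y0, (0.0, 0.0))    # v = 0: minimum-norm solution W+ y0 = (2, 2)
f_alt = solution(y0, (1.0, 0.0))    # another exact solution, with larger norm
print(f_min, w(f_min))              # (2.0, 2.0) 4.0
print(f_alt, w(f_alt))              # (2.5, 1.5) 4.0
```

Both candidates reproduce the data exactly; only the null-space component, invisible to W, differs, which is precisely why a prior (here minimum norm, later sparsity) is needed to select a solution.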
In this paper, we investigate the reconstruction from truncated projections in the presence of measurement noise. In this case, the ROI reconstruction problem consists in recovering an image f from the noisy truncated data

y₀(θ, τ) = (W̃f)(θ, τ) + ν(θ, τ), for (θ, τ) ∈ P(S),

where ν denotes the noise term. If Mν ≠ 0, then ‖MWf − y₀‖₂² > 0 and an arbitrary extension y of y₀ may not be in the range of the projection operator W; consequently, ‖y − Wf‖₂ > 0. In the following, we will use y as a separate auxiliary variable, denoting the extrapolated measurements of y₀. Setting tolerances, we formulate the following two constraints:

‖My − y₀‖₂ ≤ α,    ‖y − Wf‖₂ ≤ β,

where α is a data fidelity parameter (chosen in accordance with the noise level) and β is a data consistency parameter. By setting α = 0, a solution can be obtained that maximizes the data fidelity with respect to the measurement data, i.e., My = y₀, while the consistency requirement is relaxed to ‖y − Wf‖₂ ≤ β; among the admissible pairs, the sparsity prior on y further selects the solution (see Sec. 3). Data consistent solutions may have a nonzero data fidelity, while data fidelity solutions are in general not consistent. We control the reconstruction error by combining fidelity and consistency constraints with the additional sparsity assumption.
Alternatively, setting β = 0 maximizes the data consistency of the solution, i.e., y = Wf, so that the data fidelity constraint becomes ‖MWf − y₀‖₂ ≤ α. In the presence of noise, the parameters α and β generally cannot both be set to zero, as y₀ may not be in the range of W̃.
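The impossibility of enforcing α = β = 0 simultaneously can be made concrete with a toy problem of our own construction (a scalar image f, Wf = (f, f), full mask): once noise pushes y₀ off the range {(t, t)} of W, strict consistency forces a nonzero fidelity error and vice versa.

```python
# Toy setting: f is a scalar, W f = (f, f), mask M = identity.
# The noisy data y0 = (1.0, 1.2) is NOT in the range {(t, t)} of W,
# so the fidelity and consistency errors cannot both vanish.

def norm2(a, b):
    return (a * a + b * b) ** 0.5

y0 = (1.0, 1.2)
grid = [i / 1000.0 for i in range(3000)]   # brute-force search over f

# beta = 0 (strict consistency, y = W f): best achievable fidelity error
best_fidelity = min(norm2(f - y0[0], f - y0[1]) for f in grid)

# alpha = 0 (strict fidelity, M y = y0, i.e. y = y0): best consistency error
best_consistency = min(norm2(y0[0] - f, y0[1] - f) for f in grid)

print(best_fidelity > 0.0, best_consistency > 0.0)   # both errors stay positive
```

In both limiting cases the optimum is reached at f = 1.1 with residual √0.02 ≈ 0.14 > 0, so at least one of the tolerances α, β must be strictly positive for the constraint set to be nonempty.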
The selection of α and β allows us to trade off data fidelity versus data consistency, as illustrated in Fig. 3. By letting β > 0, data consistency errors are allowed and the solution space of the ROI problem is effectively enlarged, giving us better control of the denoising and extrapolation of y₀. Another advantage of our approach is that combining the data fidelity and data consistency constraints with the additional sparsity assumption enables us to establish performance guarantees in both the image and projection spaces. Our method will find a pair (y⋆, f⋆) where y⋆ is an approximate extension of y₀ according to the data consistency constraint and f⋆ is an approximate image reconstruction according to the data fidelity constraint. Specifically, we will define an iterative algorithm based on convex optimization which is guaranteed to provide an approximation (ỹ, f̃) of (y⋆, f⋆) in a finite number of iterations and is within a predictable distance from the ideal noiseless solution of the ROI problem. We remark that, due to the non-injectivity of the truncated projection operator, data fidelity in the projection domain does not automatically imply a low reconstruction error in the image domain. To address this, we introduce a compressed sensing framework to control data fidelity and reconstruction error by imposing an appropriate sparsifying norm on both the projection data and the reconstructed image.
3. Application of robust width to ROI CT reconstruction. The robust width property was originally introduced by Cahill and Mixon [2] as a geometric criterion that characterizes when the solution to a convex optimization problem provides an accurate approximate solution to an underdetermined, noise-affected linear system by assuming an additional structure of the solution space [3]. This property is related to the Restricted Isometry Property (RIP) that is widely used in compressed sensing, especially in conjunction with randomized measurements [11]. We adapt the result by Cahill and Mixon to our setting because it offers more flexibility than the usual assumptions in compressed sensing and leads to an algorithmic formulation of ROI reconstruction that includes a sparsifying norm and a performance guarantee.
We start by defining a notion of compressed sensing space that provides the appropriate approximation space for the solutions of the noisy ROI reconstruction problem.
Definition (CS space). A compressed sensing (CS) space (H, A, ‖·‖) with bound L consists of a Hilbert space H, a subset A ⊆ H and a norm or semi-norm ‖·‖ on H such that, for every a ∈ A and z ∈ H, there exists a decomposition z = z₁ + z₂ with

‖a + z₁‖ = ‖a‖ + ‖z₁‖ and ‖z₂‖ ≤ L‖z‖₂.
Remark 1. We have the upper bound

Remark 2.
Suppose H is a Hilbert space, A ⊆ H, and ‖·‖ is a norm such that, for every a ∈ A and z ∈ H, there exists a decomposition z = z₁ + z₂ such that ⟨z₁, z₂⟩ = 0, ‖a + z₁‖ = ‖a‖ + ‖z₁‖ and ‖z₂‖ ≤ L‖z₂‖₂. Then (H, A, ‖·‖) is a CS space with bound L. This follows from the observation that, since z₂ is orthogonal to z₁, ‖z₂‖₂ ≤ ‖z‖₂ and hence ‖z₂‖ ≤ L‖z₂‖₂ ≤ L‖z‖₂. Example 1. A standard example of a CS space is that of K-sparse vectors [2]. This structure is defined by choosing an orthonormal basis {ψ_j} of the Hilbert space H. The set A consists of vectors that are linear combinations of at most K basis vectors. The norm is the ℓ1 norm of the basis coefficients, ‖x‖ = Σ_j |⟨x, ψ_j⟩|. In this case, for any a ∈ A that is a linear combination of {ψ_j}_{j∈J} with |J| ≤ K and any z ∈ H, we can choose the decomposition z₂ = Σ_{j∈J} ⟨z, ψ_j⟩ψ_j and z₁ = z − z₂. One then verifies that ‖a + z₁‖ = ‖a‖ + ‖z₁‖ holds and, since a lies in the subspace spanned by {ψ_j}_{j∈J}, the equivalence of ℓ1 and ℓ2 norms for finite sequences gives ‖a‖ ≤ √K‖a‖₂, and hence L = √K.
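The K-sparse decomposition of Example 1 can be checked numerically. The sketch below (our own illustration, using the standard basis of R⁵ as {ψ_j}) verifies the norm-splitting identity ‖a + z₁‖ = ‖a‖ + ‖z₁‖ and the bounds with L = √K:

```python
import math

def l1(v):
    return sum(abs(t) for t in v)

def l2(v):
    return math.sqrt(sum(t * t for t in v))

def decompose(z, support):
    """Split z = z1 + z2, with z2 the components of z on the support of a."""
    z2 = [t if i in support else 0.0 for i, t in enumerate(z)]
    z1 = [t if i not in support else 0.0 for i, t in enumerate(z)]
    return z1, z2

# a is K-sparse with support J, |J| = K = 2, in R^5
J = {0, 3}
a = [2.0, 0.0, 0.0, -1.0, 0.0]
z = [0.5, -0.3, 0.8, 0.1, -0.6]
z1, z2 = decompose(z, J)

K = len(J)
# disjoint supports make the l1 norm additive: ||a + z1|| = ||a|| + ||z1||
print(abs(l1([u + v for u, v in zip(a, z1)]) - (l1(a) + l1(z1))) < 1e-12)  # True
print(l1(z2) <= math.sqrt(K) * l2(z))                                      # True
print(l1(a) <= math.sqrt(K) * l2(a))                                       # True
```

The same computation with any other support J and vectors a, z reproduces the CS-space conditions, since the splitting argument only uses disjointness of supports and the Cauchy-Schwarz bound ‖·‖₁ ≤ √K‖·‖₂ on K entries.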
The following theorem extends a result in [2]. In particular, we consider an approximate solution x̂ of x⋆ such that ‖x̂‖ ≤ ‖x⋆‖ + δ, in order to account for the data fidelity, consistency and sparsity trade-offs (see below).
Theorem 3.3. Let (H, A, ‖·‖) be a CS space with bound L and let Φ : H → H̃ be a linear operator satisfying the (ρ, η)-RWP over B_‖·‖, with ρ, η > 0. For x ∈ H, ε > 0 and e ∈ H̃ with ‖e‖₂ ≤ ε, let x⋆ be a solution of

x⋆ ∈ arg min_{z∈H} ‖z‖ subject to ‖Φz − (Φx + e)‖₂ ≤ ε.

Then, for every x ∈ H and a ∈ A, any approximate solution x̂ of x⋆ such that ‖x̂‖ ≤ ‖x⋆‖ + δ satisfies

‖x̂ − x‖₂ ≤ C₁ε + C₂‖x − a‖ + C₁δ,

for some γ > 2 and with C₁ = 2/η.
We now adapt the CS reconstruction theorem (Theorem 3.3) to the ROI CT problem. We adopt the same notations as in Sec. 2 for the symbols W, M and y₀, and we treat the projection data y and the image f jointly.

Theorem 3.4 (Interior and exterior ROI reconstruction). Suppose (H, A, ‖·‖) is a CS space and Φ : H → H̃ satisfies the (ρ, η)-RWP over the ball B_‖·‖. Then, for every solution (y⋆, f⋆) of (10), the error bounds of Theorem 3.3 hold, where the aforementioned conditions on C₁, C₂, ρ and γ apply.
Additionally, for all (y⋆, f⋆) ∈ E, any approximate solution (ỹ, f̃) to (10) with ‖(ỹ, f̃)‖ ≤ ‖(y⋆, f⋆)‖ + δ satisfies the corresponding bound.

Remark 3. Suppose a solution (y⋆, f⋆) exists. If the ROI problem has a unique solution (y⋆, f⋆) ∈ H, then Theorem 3.4 shows that this solution is close to any (y, f) ∈ A, with an error controlled by √(α² + β²).
If we do not know whether the ROI problem has a unique solution, but (y, f) ∈ E, the space of consistent functions satisfying data fidelity, then our solution (y⋆, f⋆) is also close to (y, f), as stated above.
In case we only obtain an approximate solution (ỹ, f̃) with ‖(ỹ, f̃)‖ ≤ ‖(y⋆, f⋆)‖ + δ, then (ỹ, f̃) is close to (y, f), with an additional approximation error controlled by (2/η)δ. Note that this approximation error has three terms: the first term depends on the data fidelity and consistency parameters, the second term is determined by the (worst-case) approximate sparsity of any plausible solution a ∈ A, and the third term is due to the approximate minimizer. In practice, exact numerical minimization of (10) is difficult to achieve, as we will point out in Sec. 4.
Theorem 3.4 expresses error bounds for complete image and projection data, i.e., irrespective of the ROI; therefore the theorem applies to joint interior and exterior ROI reconstruction. However, the reconstruction of the exterior of the ROI is severely ill-posed: due to the null space of Φ, impractically strong sparsity assumptions are required to obtain a stable reconstruction. Therefore, for a CS space with the (ρ, η)-RWP, we may expect the error constants C₁ and C₂ to be very large, especially when the radius of the ROI is small. To correct this situation, we restrict the RWP to a linear subspace of the Hilbert space, so that error bounds are obtained for linear projections onto a subspace associated with a confined area such as the ROI, leading to interior-only ROI reconstruction. The next theorem gives the framework for constructing such projections.

Proof. See Appendix A.2.
Remark 4. Theorem 3.5 generalizes Theorem 3.4 to give performance guarantees for stable reconstruction within the ROI: for a CS space with the (ρ, η)-RWP, the operators M_y and M_f define a subspace H′ and determine the error bounds through (13). We have some flexibility in the choice of these operators. In particular, M = M M_y implies that the orthogonal projection M_y satisfies ‖My‖₂ ≤ ‖M_y y‖₂ ≤ ‖y‖₂.
For practical algorithms using fan-beam and cone-beam geometries, calculating M_f based on the relationship W M_f = M_y W is not trivial. The dependence of the solution method on M_f can be avoided by solving (12) as follows:

(14) (y⋆, f⋆) = arg min_{(y,f)∈H} ‖P_{H′}(y, f)‖, subject to the same constraints as in (12),

where the minimum is now taken over H instead of over H′. As we show in Example 2 below, we can always choose M_y such that ‖P_{H′}(y, f)‖ ≤ ‖(y, f)‖. Moreover, since H′ ⊂ H, we find that min_{(y,f)∈H′} ‖(y, f)‖ = min_{(y,f)∈H} ‖P_{H′}(y, f)‖, i.e., (14) gives the exact minimizer of (12). In case that, due to the use of a numerical method, we only obtain an approximate solution to (14) with ‖(ỹ, f̃)‖ ≤ ‖(y⋆, f⋆)‖ + δ, where (y⋆, f⋆) is the true solution of (14), we again have the approximation bounds following from Theorem 3.4. Example 2. We construct a CS space (H′, A, ‖·‖) for which Φ′, given by (11), satisfies the (ρ, η)-RWP over B_‖·‖.
Let H = (y, f ) : y ∈ 2 Z 2 , f ∈ 2 Z 2 . Let T be the discrete wavelet transform on 2 (Z), with compactly supported wavelets, acting along the first dimension of y (i.e., along the detector array position). 2 Define the sparsity transform (15) Ψ = T T W , and the sparsity norm: where Ψ (y, f ) 1,2 is a mixed 1,2 norm. Here, i is the index associated with the offset and j is index associated with the angle. With this natural choice of sparsifying norm ( 1 norm of the wavelet coefficients in the projection domain) the transform basis functions can be associated with the ridgelets [4]. Additionally, define the solution space A as: where (·) :,j denotes a vector slice. Then for (y, f ) ∈ A, we have that with C the upper frame bound of Ψ, or equivalently, Next, we need to construct a projection operator M y such that M = M M y . For our practical algorithm (see Remark 4), we require that P H (y, f ) ≤ (y, f ) .
In fact, this requirement can be accomplished by selecting for M_y a diagonal projection operator acting in the ridgelet domain, i.e., M_y = T*D_yT with D_y a diagonal projection.

With this choice, it follows that ‖P_{H′}(y, f)‖ ≤ ‖(y, f)‖. To ensure that M T*D_yT = M_y, so that also ‖My‖₂ ≤ ‖M_y y‖₂ = ‖D_yTy‖₂, one can design D_y as an indicator mask function that selects all wavelet and scaling functions that overlap with the ROI projection P(S).
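As a minimal numerical sketch of this construction (ours, not the paper's implementation: we substitute a single-level orthonormal Haar transform for the compactly supported wavelet transform T, and sum the ℓ1 norm per projection view as a simplified stand-in for the mixed ℓ_{1,2} norm):

```python
import math

def haar1d(x):
    """Single-level orthonormal Haar transform of a vector of even length,
    applied along the detector axis: approximation then detail coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return approx + detail

def sparsity_norm(sino):
    """l1 norm of the per-view 1D wavelet coefficients of the projection
    data: a discrete stand-in for the ridgelet-domain sparsity norm."""
    return sum(sum(abs(c) for c in haar1d(view)) for view in sino)

def d_y_mask(n, roi_rows):
    """Indicator mask D_y in the Haar domain, selecting every basis function
    whose support (detector samples 2i, 2i+1) overlaps the ROI detector rows."""
    keep = [1.0 if {2 * i, 2 * i + 1} & roi_rows else 0.0 for i in range(n // 2)]
    return keep + keep   # same pattern for approximation and detail coefficients

# Two projection views, 4 detector samples each; ROI covers detector rows {1, 2}.
sino = [[1.0, 1.0, 0.0, 0.0], [0.0, 2.0, 2.0, 0.0]]
print(sparsity_norm(sino))   # piecewise-constant views give few nonzero details
print(d_y_mask(4, {1, 2}))   # here every Haar atom overlaps rows 1 or 2
```

Applying the returned mask entrywise to haar1d(view) and transforming back realizes the projection M_y = T*D_yT for this toy transform.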
By Theorem 3.4, a solution or approximate solution of this problem will yield an approximate solution of the ROI problem with a predictable error.

Algorithm 1.
For y, f, y^(i), f^(i), p̄ and q̄ ∈ H, let the Bregman divergence be (see [26])

D^{p̄,q̄}((y, f), (y^(i), f^(i))) = ‖(y, f)‖ − ‖(y^(i), f^(i))‖ − ⟨p̄, y − y^(i)⟩ − ⟨q̄, f − f^(i)⟩,

where (p̄, q̄) is a subgradient of the sparsity norm at (y^(i), f^(i)). With M and W as defined in Sec. 2, let the objective function H(y, f; y₀) be given by (18), combining the data fidelity term ‖My − y₀‖₂ and the data consistency term ‖y − Wf‖₂, where α > 0 and β > 0. The Bregman iteration is then defined by (19). We have the following result.
Theorem 4.1. Let δ̄ > 0. Assume that there exists a point (y⋆, f⋆) for which H(y⋆, f⋆; y₀) < δ̄. Then, if H(y^(0), f^(0); y₀) > δ̄, for any τ > 1 there exists i⋆ ∈ N such that H(y^(i⋆), f^(i⋆); y₀) ≤ τδ̄; more specifically, the required number of iterations i⋆ can be bounded explicitly. To prove this result, we need the following observation showing that, under the Bregman iteration, the Bregman divergence and the objective function are both non-increasing. The result is adapted from [26, Proposition 3.2]. Proposition 1. Let δ̄ > 0. Assume that, for a given (y⋆, f⋆), we have H(y⋆, f⋆; y₀) < δ̄. Then, as long as H(y^(i), f^(i); y₀) > δ̄, the Bregman divergence between (y⋆, f⋆) and (y^(i), f^(i)) is non-increasing, i.e., (20) holds, and the objective function H, given by (18), is also non-increasing, as expressed by (21). For implementation purposes, it is convenient to replace the Bregman iteration with the following Bregman iteration variant [31], given by (22). The following proposition shows that the minimizations in equations (19) and (22) are equivalent.

Proposition 2.
The minimum values obtained in each iteration step of Algorithm 2 are identical to the corresponding values for Algorithm 1.
In the case β = 0, we minimize H(y, f; y₀) = ½‖My − y₀‖₂² while imposing data consistency through the constraint y = Wf. Theorem 4.1 then implies that ‖My^(i⋆) − y₀‖₂ ≤ √(2τδ̄) = α. In the case α = 0, we minimize H(y, f; y₀) = ½‖y − Wf‖₂² while enforcing strict data fidelity, i.e., My = y₀. Theorem 4.1 then implies that ‖y^(i⋆) − Wf^(i⋆)‖₂ ≤ √(2τδ̄) = β. The character of the reconstruction obtained in these limiting cases is illustrated in Fig. 3.
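The add-back structure of the Bregman iteration variant can be illustrated on a scalar toy problem of our own (this is not the paper's SIRA algorithm; names soft and bregman_l1 are illustrative). For min |u| subject to u = b, each step solves the penalized subproblem by soft-thresholding and then adds the residual back into the data, which progressively removes the bias of the ℓ1 penalty:

```python
def soft(t, thr):
    """Soft-thresholding: closed-form minimizer of |u| + (1/(2*thr))*(u - t)**2."""
    if t > thr:
        return t - thr
    if t < -thr:
        return t + thr
    return 0.0

def bregman_l1(b, lam=1.0, iters=20):
    """Scalar toy Bregman iteration (add-back variant) for min |u| s.t. u = b:
    each step solves min |u| + (lam/2)*(u - b_i)**2, then adds the residual back."""
    b_i, u = b, 0.0
    history = []
    for _ in range(iters):
        u = soft(b_i, 1.0 / lam)
        b_i = b_i + (b - u)        # add the residual back into the data
        history.append(u)
    return u, history

u_star, hist = bregman_l1(0.3, lam=1.0, iters=5)
print(hist)   # the residual add-back drives u toward the exact value 0.3
```

A plain one-shot soft-thresholding of b = 0.3 with threshold 1 would return 0; the iteration recovers the exact value after a few add-back steps, mirroring how the variant (22) restores the data that a single penalized solve would shrink away.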
The following proposition outlines a practical approach to select the data fidelity and consistency parameters based on the measured projection data y 0 . When α and β are within suitable ranges, it can be guaranteed that the conditions of Proposition 1 are satisfied.

5. Results and discussion. In this section, we evaluate the ROI reconstruction performance of the convex optimization algorithm SIRA from Sec. 4 as a function of the data fidelity and consistency parameters as well as the ROI radius. The numerical tests were run on an Intel Core i7-5930K CPU with an NVIDIA GeForce RTX 2080 GPU with 8 GB of GPU memory. The algorithms were implemented in Quasar [12], which provides a heterogeneous GPU/CPU programming environment on top of CUDA and OpenMP.
5.1. ROI reconstruction results. The X-O CT system (Gamma Medica-Ideas, Northridge, California, USA) was used to obtain in vivo preclinical data. The tube current is determined automatically during calibration to ensure that the dynamic range of the detector is used optimally. For a 50 µm spot size at 70 kVp, this amounts to a tube current of 145 µA. Fan-beam data were generated by retaining only the central detector row. Two contrast-enhanced mice were scanned in vivo, resulting in two data sets (see Fig. 5): 1. Preclinical (lungs): a first mouse was injected with a lipid-bound iodine-based contrast agent (Fenestra VC, ART, Canada) to increase the vascular contrast; 2048 projection views were obtained over 2π. 2. Preclinical (abdomen): a second mouse was administered 0.5 mL gastrografin intrarectally to increase the soft-tissue contrast in the abdomen; 1280 projection views were obtained over 2π.
For the abdomen data set, we used 640 projection views (out of 1280) and sub-sampled the projection data by a factor of 2 to dimensions 640 × 560. For the lungs data set, we used 512 projection views (out of 1024) and sub-sampled the projection data by a factor of 2 to dimensions 512 × 560. The resulting acquisition geometry parameters for both data sets are given in Table 1.
For benchmark comparison, we have considered the following five reconstruction methods, which include regularized reconstruction methods: 1. least-squares conjugate gradient (LSCG), restricted to the projection ROI P(S) (see (4)); 2. maximum likelihood expectation maximization (MLEM) [27,35], restricted to the projection ROI P(S); 3. differentiated back-projection (DBP) from [25], where the Hilbert inversion is performed in the image domain using the 2D Riesz transform, as described in [10]; 4. compressed sensing based ROI reconstruction with total variation regularization (CS-TV) [21]; 5. compressed sensing based ROI reconstruction with ridgelet-based regularization (CS-ridgelet).³ Among the included methods, CS-ridgelet is the most similar to ours, because it minimizes the cost criterion ‖Wf − y₀‖₂² + λ‖Ψf‖₁, where Ψ is a ridgelet transform. In contrast to our method, CS-ridgelet estimates the single variable f; the reconstructed projection data y is not estimated and there is therefore no data consistency prior. To ease the comparison with our method, the ridgelet transform is computed by applying a 1D wavelet transform to the fan-beam projection data.
To motivate the choice of ridgelets as compared to the more common TV regularization, we remark that the ridgelet transform offers distinct potential advantages compared to TV. It is a multiscale and multidirectional transform, allowing features (e.g., edges, discontinuities) to be regularized at different scales and orientations. In addition, unlike TV regularization that assumes the image to be piecewise constant, the ridgelet transform is more flexible leading to more 'natural' looking reconstruction for real (i.e., non-phantom) CT images [28].
To enable a quantitative analysis, we have compared the ROI reconstruction against the full reconstruction obtained using the LSCG method, i.e., by calculating the pseudoinverse. Since ROI reconstruction is a data extrapolation problem in the projection domain, it is reasonable to expect that the least-squares linear reconstruction of the fully sampled projection domain is a good reference for comparison. As a figure of merit, we have used the peak signal-to-noise ratio (PSNR) evaluated inside the ROI S, which is consistent with the motivation of ROI CT to recover the image only inside the ROI. As we will show below, our method achieves a nearly exact reconstruction inside the ROI. Outside the ROI, the reconstructed image is unsuitable for diagnostic purposes and only speculative in nature since, for every point outside the ROI, in general only a small fraction of the rays passing through that point are available.
In Fig. 6, we compare the fan-beam ROI reconstruction performance on the lung and abdomen images of Fig. 5 for different values of the ROI radius using the different methods. In particular, we consider our SIRA algorithm with two choices of the data fidelity and consistency parameters: SIRA-FIDEL (β = 0, α = u/4) and SIRA (α = u/4, β = u), with u = ‖(I − W̃W̃⁺)y₀‖₂². We remark that, because the average energy level for the MLEM and DBP methods cannot be determined correctly from truncated projections [35], these methods do not yield correct quantitative results. For this reason, the PSNR calculation for MLEM and DBP was adjusted by discarding the difference in the mean, in order to obtain meaningful PSNR values: PSNR(y_A, y_B) is computed after subtracting the respective averages ȳ_A and ȳ_B, where ȳ denotes the average of the components of the vector y. The plots in Fig. 6 show that our ROI reconstruction method SIRA-FIDEL performs significantly better than all other methods, including the regularized reconstructions CS-TV and CS-ridgelet. For very small ROI radii, the improvement with respect to competing algorithms is over 20 dB in Fig. 6(a) and over 7 dB in Fig. 6(b). The superior performance of SIRA can be attributed to the fact that the missing projection data is iteratively estimated and regularized in our approach.
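The mean-adjusted PSNR used for MLEM and DBP can be sketched as follows. This is our reading of the partially reproduced formula: the means of both vectors are removed before computing the mean squared error, and the peak is taken here as the dynamic range of the reference (the exact peak definition is an assumption of ours).

```python
import math

def psnr_mean_adjusted(y_a, y_b, peak=None):
    """PSNR after discarding the difference in the mean, for methods whose
    average energy level cannot be recovered from truncated projections."""
    n = len(y_a)
    mean_a = sum(y_a) / n
    mean_b = sum(y_b) / n
    mse = sum(((a - mean_a) - (b - mean_b)) ** 2 for a, b in zip(y_a, y_b)) / n
    if peak is None:
        peak = max(y_a) - min(y_a)    # assumed: dynamic range of the reference
    return float("inf") if mse == 0.0 else 10.0 * math.log10(peak ** 2 / mse)

ref = [0.0, 1.0, 2.0, 3.0]
off = [10.0, 11.0, 12.0, 13.0]        # same image, shifted by a constant offset
print(psnr_mean_adjusted(ref, off))   # inf: the offset is ignored after mean removal
```

A constant intensity offset, which an ordinary PSNR would heavily penalize, leaves this score unchanged, which is the intended behavior for comparing methods that only recover the image up to its average energy level.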
(a) Preclinical -lungs (b) Preclinical -abdomen Figure 6. PSNR results for 2D fan-beam ROI reconstruction with increasing radius using (a) lungs and (b) adbomen. The PSNR is calculated inside the ROI.
In Fig. 7, we show a visual comparison for the fan-beam ROI reconstruction of the lung and abdomen images of Fig. 5 using the different methods. The parameters (α = u/30, β = 4u/3) in Fig. 7(b) correspond to the maximal PSNR performance in the range 0 ≤ β ≤ u and 0 ≤ α ≤ u (in steps of 0.1 for both β and α).
A visual comparison with Fig. 7(f) shows that the SIRA algorithm reconstructs the interior of the ROI more accurately than the competing methods, including CS-ridgelet, the next best method, resulting in a significantly higher PSNR. We also find that, by relaxing the data consistency constraint (Fig. 7(b)), the PSNR performance can be improved compared to the data fidelity solution (Fig. 7(a)). We note that additional improvements are possible, for example by using curvelets [5] or shearlets [9] for regularization rather than ridgelets. However, this falls outside the scope of the paper and will be investigated elsewhere.
Finally, we compare the visual performance of SIRA and LSCG for an increasing ROI radius. The results in Fig. 8 show that, even for a very small ROI radius of 1.2 mm, the reconstruction quality for SIRA remains very satisfactory, while LSCG causes a cupping artifact at the boundary of the ROI. In terms of PSNR, SIRA offers significant performance gains compared to LSCG. Since LSCG optimizes over all data consistent solutions (y = W f ), this result again suggests that relaxing the consistency requirement in the right setting, can enhance the reconstruction performance in the image domain.
Due to the robust width assumption (see Sec. 3), our theoretical framework offers performance guarantees for the image domain reconstruction. Even though Theorem 3.4 gives performance guarantees both within the ROI (interior reconstruction) and outside of the ROI (exterior reconstruction), the latter task is significantly more difficult and requires additional prior knowledge. An interesting avenue for future research is the inclusion of priors based on, e.g., combined sparsity models (ridgelets, wavelets and shearlets) and deep learning models (e.g., based on U-Net [1] and ResNet architectures).

6. Conclusion. We have introduced a novel framework for ROI CT reconstruction from noisy projection data. To deal with the presence of noise, our method relaxes the data fidelity and consistency conditions. Based on a robust width assumption that guarantees a stable solution of the ROI CT reconstruction problem under appropriate sparsity norms on the data, we have established performance bounds in both the image and projection domains. Using this framework, we introduced an ROI CT reconstruction algorithm, called the sparsity-induced iterative reconstruction algorithm (SIRA), that reaches an approximately sparse solution in a finite number of steps while satisfying predetermined fidelity and consistency tolerances. Our experimental results using a fan-beam acquisition geometry suggest that the ROI reconstruction performance depends on the ROI radius. Visual and quantitative results confirm that our algorithm achieves very accurate reconstructions for relatively small ROI radii and demonstrate that it is very competitive against both conventional and state-of-the-art reconstruction methods.
Appendix A. Additional proofs.
A.1. Proof of Theorem 3.3. Our proof of Theorem 3.3 is adapted from [2].
Proof. Let e = (0, ν) ∈ H̃, x = (y, f) ∈ H, x⋆ = (y⋆, f⋆) ∈ E. Equation (11) yields the expression below, where we used that M M_y = M. Taking into account that W M_f = M_y W, and using (y⋆, f⋆) ∈ E ⊂ H′, i.e., y⋆ = Wf⋆ and My⋆ = y₀, we calculate the squared norm. Then, since (H′, A, ‖·‖) is a CS space, and because of the (ρ, η)-RWP of Φ′ over the ball B_‖·‖, we can apply Theorem 3.3 again, with ε = √(α² + β²), for any x⋆ ∈ E ∩ H′. In particular, for any x ∈ H′ \ E and for (y⋆, f⋆) ∈ H′ minimizing (12), we obtain the stated bound. The conclusion then follows by noting that ‖My‖₂ = ‖M M_y y‖₂ ≤ ‖M_y y‖₂, such that ‖M(ỹ − y)‖₂ ≤ ‖M_y(ỹ − y)‖₂.
Similarly, due to the update equation y