Inverse Problems & Imaging
2014, Volume 8, Issue 1
This paper concerns the reconstruction of an anisotropic conductivity tensor in an elliptic second-order equation from knowledge of the so-called power density functionals. This problem finds applications in several coupled-physics medical imaging modalities such as ultrasound modulated electrical impedance tomography and impedance-acoustic computerized tomography.
We consider the linearization of the nonlinear hybrid inverse problem. We find sufficient conditions under which the linearized problem, a system of partial differential equations, is elliptic and injective. These conditions are found to hold for fewer measurements than are required by recently established explicit reconstruction procedures for the nonlinear problem.
We apply an "exterior approach" based on the coupling of a quasi-reversibility method and a level set method in order to recover a fixed obstacle immersed in a Stokes flow from boundary measurements. Concerning the method of quasi-reversibility, two new mixed formulations are introduced in order to solve the ill-posed Cauchy problems for the Stokes system by using some classical conforming finite elements. We provide proofs of convergence for the quasi-reversibility methods on the one hand and for the level set method on the other hand. Some numerical experiments in $2D$ show the efficiency of the two mixed formulations and of the exterior approach based on one of them.
This paper presents the Moreau envelope viewpoint for the L1/TV image denoising model. The main algorithmic difficulty for the numerical treatment of the L1/TV model lies in the non-differentiability of both the fidelity and regularization terms of the model. To overcome this difficulty, we propose five modified L1/TV models by replacing one or two non-differentiable functions in the L1/TV model with their corresponding Moreau envelopes. We prove that several existing approaches for the L1/TV model essentially solve some of the modified models, but not the original L1/TV model. Algorithms for the L1/TV model and its five variants are proposed under a unified framework based on fixed-point equations (via the proximity operator) which characterize the solutions of the models. Depending upon whether we smooth the regularization term or not, two different types of proximity algorithms are presented. The convergence rates of both types of algorithms are improved significantly by exploiting the Gauss-Seidel iteration strategy, FISTA, or both. We compare the performance of the various modified L1/TV models for the problem of impulse noise removal, and make recommendations based on our numerical experiments for using these models in applications.
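As a minimal illustration of the proximity-operator and Moreau-envelope machinery referred to above (a generic NumPy sketch, not the authors' algorithm): for the scaled absolute value λ|·|, the proximity operator is soft-thresholding, and evaluating the prox objective at its minimizer gives the Moreau envelope, a smooth Huber-like surrogate for the non-differentiable term.

```python
import numpy as np

def prox_l1(x, lam):
    # prox of lam*|.|: argmin_u lam*|u| + 0.5*(u - x)^2, i.e. soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_env_l1(x, lam):
    # Moreau envelope of lam*|.|: the prox objective evaluated at its minimizer
    u = prox_l1(x, lam)
    return lam * np.abs(u) + 0.5 * (u - x) ** 2

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(prox_l1(x, 1.0))        # small entries shrink to 0, large ones move toward 0
print(moreau_env_l1(x, 1.0))  # smooth (quadratic near 0, linear far away)
```

The envelope coincides with the Huber function: quadratic for |x| ≤ λ and linear beyond, which is exactly the kind of smoothing the modified models exploit.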
We present a new approach to solve the inverse source problem arising in Fluorescence Tomography (FT). In general, the solution is non-unique and the problem is severely ill-posed. It poses tremendous challenges in image reconstruction. In practice, the most widely used methods are based on Tikhonov-type regularizations, which minimize a cost function consisting of a regularization term and a data fitting term. We propose an alternative method which overcomes the major difficulties, namely the non-uniqueness of the solution and noisy data fitting, in two separate steps. First we find a particular solution called the orthogonal solution that satisfies the data fitting term. Then we add to it a correction function in the kernel space so that the final solution fulfills other regularity and physical requirements. The key point is that the correction function in the kernel space has no impact on the data fitting, so no parameter is needed to balance the data fitting term and the additional constraints on the solution. Moreover, we use an efficient basis to represent the source function, and introduce a hybrid strategy combining spectral methods and finite element methods in the proposed algorithm. The resulting algorithm can dramatically increase the computation speed over existing methods. The numerical evidence also shows that it significantly improves the image resolution and robustness against noise.
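The two-step structure — a particular "orthogonal" solution that fits the data plus a null-space correction that cannot change the fit — can be mimicked on a toy underdetermined linear model Aq = b (a hypothetical stand-in for the FT forward operator, not the authors' discretization):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))     # underdetermined: solutions are non-unique
b = rng.standard_normal(3)

# Step 1: the minimum-norm particular solution, orthogonal to the null space of A
q_orth = np.linalg.pinv(A) @ b

# Step 2: add any correction from the null space of A; the data fit is unchanged,
# so the correction can instead be chosen to meet regularity/physical constraints
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[3:].T               # columns span null(A)
q = q_orth + null_basis @ np.array([1.0, -2.0, 0.5])

print(np.allclose(A @ q_orth, b), np.allclose(A @ q, b))  # -> True True
```

Because the correction lives entirely in null(A), there is no trade-off parameter between data fidelity and the additional constraints, which is the point the abstract makes.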
We introduce a technique for recovering a sufficiently smooth function from its ray transforms over rotationally related curves in the unit disc of 2-dimensional Euclidean space. The method is based on a complexification of the underlying vector fields defining the initial transport and inversion formulae are then given in a unified form. The method is used to analyze the attenuated ray transform in the same setting.
Electrical impedance tomography (EIT) is a non-invasive imaging modality in which the internal conductivity distribution is reconstructed based on boundary voltage measurements. In this work, we consider the application of EIT to non-destructive testing (NDT) of materials and, especially, crack detection. The main goal is to estimate the location, depth and orientation of a crack in three dimensions. We formulate the crack detection task as a shape estimation problem for boundaries subject to zero Neumann boundary conditions. We propose an adaptive meshing algorithm that iteratively seeks the maximum a posteriori estimate for the shape of the crack. The approach is tested both numerically and experimentally. In all test cases, the EIT measurements are collected using a set of electrodes attached to only a single planar surface of the target -- this is often the only realizable configuration in NDT of large building structures, such as concrete walls. The results show that with the proposed computational method, it is possible to recover the position and size of the crack, even in cases where the background conductivity is inhomogeneous.
This article is devoted to the convergence analysis of a special family of iterative regularization methods for solving systems of ill-posed operator equations in Hilbert spaces, namely Kaczmarz-type methods. The analysis is focused on the Landweber-Kaczmarz (LK) explicit iteration and the iterated Tikhonov-Kaczmarz (iTK) implicit iteration. The corresponding symmetric versions of these iterative methods are also investigated (sLK and siTK). We prove convergence rates for the four methods above, extending and complementing the convergence analysis established originally in [22,13,12,8].
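In finite dimensions, the LK iteration sweeps cyclically over the equations of the system, applying one Landweber step per equation. A toy NumPy sketch for a consistent linear system (the paper's setting is general operator equations in Hilbert spaces; the per-block step sizes here are a simple safe choice, not the paper's):

```python
import numpy as np

def landweber_kaczmarz(As, ys, x0, sweeps=500):
    """Cyclic Landweber steps x <- x + omega_i * A_i^T (y_i - A_i x)."""
    x = x0.copy()
    omegas = [1.0 / np.linalg.norm(A, 2) ** 2 for A in As]  # safe per-block steps
    for _ in range(sweeps):
        for A, y, omega in zip(As, ys, omegas):
            x = x + omega * A.T @ (y - A @ x)
    return x

rng = np.random.default_rng(1)
x_true = rng.standard_normal(4)
As = [rng.standard_normal((2, 4)) for _ in range(3)]  # three measurement blocks
ys = [A @ x_true for A in As]                          # exact (noise-free) data
x = landweber_kaczmarz(As, ys, np.zeros(4))
print(np.linalg.norm(x - x_true))                      # error shrinks with sweeps
```

With noisy data the iteration would instead be stopped early (a discrepancy-type rule), which is where the regularization analysis of the paper enters.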
In bioluminescence tomography the location as well as the radiation intensity of a photon source (marked cell clusters) inside an organism have to be determined given the outside photon count. This inverse source problem is ill-posed: it suffers not only from strong instability but also from non-uniqueness. To cope with these difficulties the source is modeled as a linear combination of indicator functions of measurable domains, leading to a nonlinear operator equation. The solution process is stabilized by a Tikhonov-like functional which penalizes the perimeter of the domains. For the resulting minimization problem, existence of a minimizer, stability, and the regularization property are shown. Moreover, an approximate variational principle is developed based on the calculated domain derivatives, which states that there exist smooth almost stationary points of the Tikhonov-like functional near any of its minimizers. This is a crucial property from a numerical point of view as it allows one to approximate the searched-for domain by smooth domains. Based on the theoretical findings, numerical schemes are proposed and tested for star-shaped sources in 2D: computational experiments illustrate the performance and limitations of the considered approach.
We consider the inverse problem of finding sparse initial data from the sparsely sampled solutions of the heat equation. The initial data are assumed to be a sum of an unknown but finite number of Dirac delta functions at unknown locations. Pointwise values of the heat solution at only a few locations are used in an $l_1$ constrained optimization to find the initial data. A concept of domain of effective sensing is introduced to speed up the already fast Bregman iterative algorithm for $l_1$ optimization. Furthermore, an algorithm which successively adds new measurements at specially chosen locations is introduced. By comparing the solutions of the inverse problem obtained from different numbers of measurements, the algorithm decides where to add new measurements in order to improve the reconstruction of the sparse initial data.
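A generic linearized Bregman iteration for min ||x||_1 subject to Ax = b illustrates the kind of $l_1$ solver involved (a toy NumPy sketch with a hypothetical Gaussian sensing matrix standing in for the sampled heat kernel; this is not the paper's operator or its accelerated variant):

```python
import numpy as np

def linearized_bregman(A, b, mu=5.0, iters=5000):
    # Linearized Bregman iteration for min ||x||_1 s.t. Ax = b (toy version)
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    delta = 0.9 / np.linalg.norm(A, 2) ** 2    # safe step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)                 # dual (residual) accumulation
        x = delta * shrink(v, mu)              # soft-thresholding step
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50)) / np.sqrt(20)  # 20 "point measurements", 50 unknowns
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.5, -2.0, 1.0]           # three "Dirac" sources
x = linearized_bregman(A, A @ x_true)
print(np.flatnonzero(np.abs(x) > 0.2))           # dominant support indices
```

With enough incoherent measurements the $l_1$ solution localizes the few sources exactly, which is what makes successively adding well-chosen measurements effective.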
In this paper, we present an efficient algorithm for regularized reconstruction in SPECT. Image reconstruction with a priori assumptions is usually modeled as a constrained optimization problem; however, there is no efficient algorithm to solve it due to the large scale of the problem. We use the superiorization of the expectation maximization (EM) iteration to implement the regularized reconstruction of SPECT. We first investigate the convergence conditions of the EM iteration in the presence of perturbations. We then design the superiorized EM algorithm based on these conditions, and propose a modified version of it. Furthermore, we give two methods to generate the desired perturbations for two special objective functions. Numerical experiments for SPECT image reconstruction are conducted to validate the performance of the proposed algorithms. The experiments show that the superiorized EM algorithms are more stable and robust with respect to noisy projection data and the initial image than the classic EM algorithm, and outperform the classic EM algorithm in terms of mean square error and visual quality of the reconstructed images.
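For reference, the classic EM (MLEM) update that the superiorized algorithms perturb is the multiplicative iteration below (a toy NumPy sketch with a random nonnegative system matrix, not a SPECT model; the superiorization step, which would nudge the iterate toward a lower objective value between EM updates, is omitted):

```python
import numpy as np

def mlem(A, y, iters=500):
    # MLEM update: x <- (x / A^T 1) * A^T (y / (A x)); multiplicative, so it
    # preserves positivity of the iterate at every step
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(iters):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x

rng = np.random.default_rng(3)
A = rng.random((30, 10))                  # nonnegative toy system matrix
x_true = rng.random(10) + 0.1             # positive "activity" to recover
y = A @ x_true                            # noiseless "projection" data
x = mlem(A, y)
print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))  # data misfit decreases
```

Superiorization interleaves small, convergence-preserving perturbations (e.g. negative-gradient steps of a smoothness objective) between these EM updates, which is the mechanism the paper analyzes.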
The equations of magneto-photoelasticity are derived for a nonhomogeneous background isotropic medium and for a variable gyration vector. They coincide with Aben's equations in the case of a homogeneous background medium and a constant gyration vector. We obtain an explicit linearized formula for the fundamental solution under the assumption that the variable coefficients of the equations are sufficiently small. Then we consider the inverse problem of recovering the variable coefficients from the results of polarization measurements known for several values of the gyration vector. We demonstrate that the data can be easily transformed into a family of Fourier coefficients of the unknown function if the modulus of the gyration vector is matched to the ray length.
The grid method is one of the techniques available to measure in-plane displacement and strain components on a deformed material. A periodic grid is first transferred onto the specimen surface, and images of the grid are compared before and after deformation. Windowed Fourier analysis-based techniques make it possible to estimate the in-plane displacement and strain maps. The aim of this article is to give a precise analysis of this estimation process. It is shown that the retrieved displacement and strain maps are actually a tight approximation of the convolution of the actual displacements and strains with the analysis window. The effect of digital image noise on the retrieved quantities is also characterized, and it is proved that the resulting noise can be approximated by a stationary spatially correlated noise. These results are of utmost importance for enhancing the metrological performance of the grid method, as shown in a separate article.
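The core claim — that windowed Fourier analysis returns, to a tight approximation, the true phase convolved with the analysis window — can be checked on a 1-D synthetic grid signal (a hedged NumPy sketch; the carrier frequency, window width, and phase profile are illustrative, not taken from the article):

```python
import numpy as np

n = 512
f0 = 0.1                                     # carrier (grid) frequency, cycles/pixel
u = np.arange(n)
phi = 0.5 * np.sin(2 * np.pi * u / 256)      # slowly varying phase (displacement)
s = np.cos(2 * np.pi * f0 * u + phi)         # synthetic 1-D grid signal

sigma = 15.0
t = np.arange(-60, 61)
g = np.exp(-t**2 / (2 * sigma**2))           # Gaussian analysis window

# Windowed Fourier coefficient at the carrier frequency, at every position:
# demodulate by exp(-2i*pi*f0*u), then smooth with the (symmetric) window
F = np.convolve(s * np.exp(-2j * np.pi * f0 * u), g, mode="same")
phi_est = np.angle(F)

# Compare against the true phase convolved with the normalized window
phi_smooth = np.convolve(phi, g / g.sum(), mode="same")
print(np.max(np.abs(phi_est[80:-80] - phi_smooth[80:-80])))  # small interior error
```

The demodulation kills the conjugate spectral component, and the phase of the remaining smoothed analytic signal tracks the window-convolved phase, which is exactly the approximation the article quantifies.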
Many effective models are available for segmenting an image into all of its homogeneous objects. Much less work has been done on segmenting a single object identifiable by geometric constraints within an image. This paper presents an improved selective segmentation model, without a `balloon' force, that combines geometric constraints with local image intensity information around the zero level set, aiming to overcome the spurious solutions produced by Badshah and Chen's model. A key step in our new strategy is an adaptive local band selection algorithm. Numerical experiments show that the new model is able to detect an object possessing highly complex and nonconvex features, and produces desirable results in terms of segmentation quality and robustness.
We propose an efficient nonlinear approximation scheme using the Polyharmonic Local Sine Transform (PHLST) of Saito and Remy combined with an algorithm to tile a given image automatically and adaptively according to its local smoothness and singularities. To measure such local smoothness, we introduce the so-called local Besov indices of an image, which are based on the pointwise modulus of smoothness of the image. Such an adaptive tiling of an image is important for image approximation using PHLST because PHLST stores the corner and boundary information of each tile, and consequently it is wasteful to divide a smooth region of a given image into a set of smaller tiles. We demonstrate the superiority of the proposed algorithm over PHLST with uniform tiling using Antarctic remote sensing images. The analysis of such images, including their efficient approximation and compression, has gained importance due to global climate change.