
ISSN: 1930-8337
eISSN: 1930-8345
Inverse Problems & Imaging
February 2011, Volume 5, Issue 1
2011, 5(1): 1-17
doi: 10.3934/ipi.2011.5.1
Abstract:
We investigate iterated Tikhonov methods coupled with a Kaczmarz strategy for obtaining stable solutions of nonlinear systems of ill-posed operator equations. We show that the proposed method is a convergent regularization method. In the case of noisy data we propose a modification, the so-called loping iterated Tikhonov-Kaczmarz method, in which a sequence of relaxation parameters is introduced and a different stopping rule is used. Convergence analysis for this method is also provided.
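The loping idea can be illustrated on a toy linear system. Everything below (the system, `alpha`, `tau`, `delta`) is an illustrative choice, not the paper's nonlinear setting: each scalar equation gets a Tikhonov-damped Kaczmarz update, and an equation whose residual already sits below the discrepancy level `tau * delta` is "loped", i.e. skipped.

```python
# Toy sketch (not the paper's nonlinear setting): a loping iterated
# Tikhonov-Kaczmarz sweep for a linear system split into scalar
# equations a_i . x = y_i. alpha, tau, delta are made-up toy values.

def loping_tk(rows, ys, x, alpha=0.1, tau=2.0, delta=1e-3, sweeps=200):
    """One pass per sweep over the equations; an equation is 'loped'
    (relaxation weight 0) when its residual is below tau * delta."""
    for _ in range(sweeps):
        for a, y in zip(rows, ys):
            r = y - sum(ai * xi for ai, xi in zip(a, x))
            if abs(r) <= tau * delta:      # loping: skip converged equation
                continue
            # iterated Tikhonov step for one scalar equation:
            # x <- x + a^T r / (||a||^2 + alpha)
            scale = r / (sum(ai * ai for ai in a) + alpha)
            x = [xi + scale * ai for xi, ai in zip(x, a)]
    return x

# Two equations in R^2: x0 + x1 = 3, x0 - x1 = 1  =>  x = (2, 1)
sol = loping_tk([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0], [0.0, 0.0])
```

Because of the loping rule, the iteration stops refining each equation once its residual matches the assumed noise level, which is the mechanism behind the modified stopping rule in the abstract.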
2011, 5(1): 19-35
doi: 10.3934/ipi.2011.5.19
Abstract:
Detecting and identifying targets or objects in hyperspectral ground images is of great interest. Applications include land and environmental monitoring, mining, military operations, and civil search-and-rescue. We propose and analyze an extremely simple and efficient idea for template matching based on $l_1$ minimization. The designed algorithm can be applied in hyperspectral classification and target detection. Synthetic image data and real hyperspectral image (HSI) data are used to assess the performance, with comparisons to other approaches, e.g., the spectral angle map (SAM), the adaptive coherence estimator (ACE), the generalized-likelihood ratio test (GLRT), and the matched filter. We demonstrate that this algorithm achieves excellent results with both high speed and accuracy by using Bregman iteration.
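A minimal sketch of the $l_1$ idea, not the paper's algorithm or data: a pixel spectrum `b` is matched against a small template library `A` via $l_1$-regularized least squares, solved here with plain iterative soft-thresholding (a simple relative of the Bregman-type iterations the paper uses). `A`, `b`, `lam`, and `step` are made-up toy values.

```python
# Illustrative sketch only: sparse template matching for one pixel.

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def ista(A, b, lam=0.05, step=0.4, iters=500):
    """Minimize 0.5*||A u - b||^2 + lam*||u||_1 by soft-thresholded
    gradient steps; a sparse u flags the matching templates."""
    m, n = len(A), len(A[0])
    u = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * u[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        u = [soft(u[j] - step * g[j], step * lam) for j in range(n)]
    return u

# Library of two unit-norm 'spectra'; the pixel equals the first template,
# so the recovered coefficient vector should be supported on entry 0.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [0.0, 0.0]]
b = [1.0, 0.0, 0.0]
u = ista(A, b)
```

The support of `u` identifies which templates explain the pixel, which is how an $l_1$ solution can serve classification and target detection at once.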
2011, 5(1): 37-57
doi: 10.3934/ipi.2011.5.37
Abstract:
We present an optimal strategy for the relative weighting of multiple data modalities in inverse problems, and derive the maximum compatibility estimate (MCE) that corresponds to the maximum likelihood or maximum a posteriori estimates in the case of a single data mode. MCE is not explicitly dependent on the noise levels, scale factors or numbers of data points of the complementary data modes, and can be determined without the mode weight parameters. We also discuss discontinuities in the solution estimates in multimodal inverse problems, and derive a corresponding self-consistency criterion. As a case study, we consider the problem of reconstructing the shape and the spin state of a body in $\R^3$ from the boundary curves (profiles) and volumes (brightness values) of its generalized projections in $\R^2$. We also show that the generalized profiles uniquely determine a large class of shapes. We present a solution method particularly well suited to adaptive optics images, and discuss various choices of regularization functions.
2011, 5(1): 59-73
doi: 10.3934/ipi.2011.5.59
Abstract:
In this paper, we prove the global uniqueness of determining both the magnetic field and the electrical potential by boundary measurements in the two-dimensional case. In other words, we prove the uniqueness of this inverse problem without any smallness assumption.
2011, 5(1): 75-93
doi: 10.3934/ipi.2011.5.75
Abstract:
It is common, for example in cryo-electron microscopy of viruses, that the orientations at which the projections are acquired are completely unknown. We introduce here a moment-based algorithm for recovering them in three-dimensional parallel-beam tomography. In this context there are also likely to be unknown shifts in the projections; these are estimated simultaneously. Stability properties of the algorithm are also examined. Our considerations rely on recent results that guarantee a solution to be almost always unique. A similar analysis can also be done for the two-dimensional problem.
2011, 5(1): 95-113
doi: 10.3934/ipi.2011.5.95
Abstract:
In this paper, we present an efficient algorithm for computing the Euclidean skeleton of an object directly from a point cloud representation on an underlying grid. The key point of this algorithm is to identify those grid points that are (approximately) on the skeleton using the closest-point information of a grid point and its neighbors. The three main ingredients of the algorithm are: (1) computing closest-point information efficiently on a grid; (2) identifying possible skeletal points based on the number of closest points of a grid point and its neighbors with smaller distances; (3) applying a distance-ordered homotopic thinning process to remove the non-skeletal points while preserving the end points or the edge points of the skeleton. Computational examples in 2D and 3D are presented.
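A 2-D toy sketch of ingredients (1) and (2), under simplified assumptions of our own (brute-force closest points, a two-point cloud, and an ad hoc jump threshold; no thinning step): a grid point is flagged as near-skeletal when it and a grid neighbor have closest cloud points that are far apart relative to the grid spacing.

```python
# Toy sketch: flag grid points near the skeleton of a two-point cloud,
# whose exact skeleton is the perpendicular bisector x = 0.

def closest(p, cloud):
    """Brute-force closest cloud point to p (ties go to the first)."""
    return min(cloud, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2)

def skeletal_points(cloud, xs, ys, h):
    """Return neighbor pairs whose closest points 'jump' by more than
    4*h, signalling that the skeleton passes between them."""
    flagged = []
    cp = {(x, y): closest((x, y), cloud) for x in xs for y in ys}
    for x in xs:
        for y in ys:
            for nb in ((x + h, y), (x, y + h)):
                if nb in cp:
                    a, b = cp[(x, y)], cp[nb]
                    d2 = (a[0]-b[0])**2 + (a[1]-b[1])**2
                    if d2 > (4 * h) ** 2:
                        flagged.append(((x, y), nb))
    return flagged

cloud = [(-1.0, 0.0), (1.0, 0.0)]           # skeleton = the axis x = 0
xs = [i * 0.25 - 2.0 for i in range(17)]    # grid over [-2, 2]^2, h = 0.25
ys = [i * 0.25 - 2.0 for i in range(17)]
hits = skeletal_points(cloud, xs, ys, 0.25)
```

All flagged pairs straddle the line x = 0, as expected; the paper's distance-ordered thinning would then prune such a raw candidate set down to a one-point-wide skeleton.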
2011, 5(1): 115-136
doi: 10.3934/ipi.2011.5.115
Abstract:
This note is devoted to a mathematical exploration of whether Lowe's Scale-Invariant Feature Transform (SIFT) [21], a very successful image matching method, is similarity invariant as claimed. It is proved that the method is scale invariant only if the initial image blurs are exactly guessed. Yet even a large error on the initial blur is quickly attenuated by this multiscale method as the scale of analysis increases. In consequence, its scale invariance is almost perfect. The mathematical arguments are given under the assumption that the Gaussian smoothing performed by SIFT gives an aliasing-free sampling of the image evolution. The validity of this main assumption is confirmed by a rigorous experimental procedure and by a mathematical proof. These results explain why SIFT outperforms all other image feature extraction methods when it comes to scale invariance.
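The scale-space property underlying SIFT's construction can be checked numerically in one dimension: convolving with a Gaussian of width sigma1 and then sigma2 is, up to sampling and truncation error, a single blur of width sqrt(sigma1^2 + sigma2^2). The sigmas, grid size, and kernel radii below are toy choices.

```python
# 1-D check of the Gaussian semigroup property behind SIFT's scale space.
import math

def gauss(sigma, radius):
    """Sampled, normalized Gaussian kernel of the given half-width."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def conv(signal, kernel):
    """Direct convolution with border samples clamped (replicated)."""
    r = (len(kernel) - 1) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

sig = [0.0] * 32
sig[16] = 1.0                                  # unit impulse
two_step = conv(conv(sig, gauss(1.2, 8)), gauss(1.6, 8))
one_step = conv(sig, gauss(math.hypot(1.2, 1.6), 10))
err = max(abs(a - b) for a, b in zip(two_step, one_step))
```

The residual `err` stays small for sigmas of order one and above, which is consistent with the note's point that sufficiently strong Gaussian smoothing yields an effectively aliasing-free (and hence semigroup-compatible) sampling.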
2011, 5(1): 137-166
doi: 10.3934/ipi.2011.5.137
Abstract:
This paper presents a level-set-based approach for the simultaneous reconstruction and segmentation of the activity as well as the density distribution from tomography data gathered by an integrated SPECT/CT scanner.
Activity and density distributions are modeled as piecewise constant functions. The segmenting contours and the corresponding function values of both the activity and the density distribution are found as minimizers of a Mumford-Shah-like functional over the set of admissible contours and -- for fixed contours -- over the spaces of piecewise constant density and activity distributions, which may be discontinuous across their corresponding contours. For the latter step, a Newton method is used to solve the nonlinear optimality system. Shape sensitivity calculus is used to find a descent direction for the cost functional with respect to the geometrical variables, which leads to an update formula for the contours in the level-set framework. A heuristic approach for the insertion of new components for both the activity and the density function is used. The method is tested on synthetic data with different noise levels.
2011, 5(1): 167-184
doi: 10.3934/ipi.2011.5.167
Abstract:
We propose a new class of Gaussian priors, correlation priors. In contrast to some well-known smoothness priors, they have stationary covariances. The correlation priors are given in a parametric form with two parameters: correlation power and correlation length. The first parameter is connected with our prior information on the variance of the unknown. The second parameter is our prior belief on how fast the correlation of the unknown approaches zero. Roughly speaking, the correlation length is the distance beyond which two points of the unknown may be considered independent.
The prior distribution is constructed to be essentially independent of the discretization so that the a posteriori distribution will be essentially independent of the discretization grid. The covariance of a discrete correlation prior may be formed by combining the Fisher information of a discrete white noise and different-order difference priors. This is interpreted as a combination of virtual measurements of the unknown. Closed-form expressions for the continuous limits are calculated. Also, boundary correction terms for correlation priors on finite intervals are given.
A numerical example, deconvolution with a Gaussian kernel and a correlation prior, is computed.
2011, 5(1): 185-202
doi: 10.3934/ipi.2011.5.185
Abstract:
The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems that the underlying model can lead to a partial differential equation of any of the major types; here, however, we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support, and our data consist of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that, under certain assumptions, allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation.
2011, 5(1): 203-217
doi: 10.3934/ipi.2011.5.203
Abstract:
In recent years considerable interest has been directed at devising obstacles which appear "invisible" to various types of wave propagation. Suppose, for example, that we can construct an obstacle coated with an appropriate material such that, when illuminated by an incident field (e.g. a plane wave), the wave scattered by the obstacle has zero cross section (equivalently, radiation pattern) for all incident directions and frequencies; then the obstacle appears to have no effect on the illuminating wave and can be considered invisible. Such an electromagnetic cloaking device has been constructed by Schurig et al. [18]. Motivated by recent work [1] concerning the problem of a parallel flow of particles falling on a body with piecewise smooth boundary and leaving no trace, we consider the analogous problem of acoustic scattering of a plane wave illuminating the same body, a gateway. It is shown that at high frequencies, using the Kirchhoff approximation and the geometrical theory of diffraction, the scattered far field in a range of observation directions, e.g. the backscattered direction, is zero for a discrete set of wave numbers. That is, the gateway is acoustically "invisible" in that direction.
2011, 5(1): 219-236
doi: 10.3934/ipi.2011.5.219
Abstract:
We consider the problem of minimizing the functional $\int_\Omega a|\nabla u|dx$, with $u$ in some appropriate Banach space and prescribed trace $f$ on the boundary. For $a\in L^2(\Omega)$ and $u$ in the sample space $H^1(\Omega)$, this problem appeared recently in imaging the electrical conductivity of a body when some interior data are available. When $a\in C(\Omega)\cap L^\infty(\Omega)$, the functional has a natural interpretation, which suggests that one should consider the minimization problem in the sample space $BV(\Omega)$. We show the stability of the minimum value with respect to $a$, in a neighborhood of a particular coefficient. In both cases the method of proof provides some convergent minimizing procedures. We also consider the minimization problem for the non-degenerate functional $\int_\Omega a\max\{|\nabla u|,\delta\}dx$, for some $\delta>0$, and prove a stability result. Again, the method of proof constructs a minimizing sequence and we identify sufficient conditions for convergence. We apply the last result to the conductivity problem and show that, under an a posteriori smoothness condition, the method recovers the unknown conductivity.
2011, 5(1): 237-261
doi: 10.3934/ipi.2011.5.237
Abstract:
Recently, the augmented Lagrangian method has been successfully applied to image restoration. We extend the method to total variation (TV) restoration models with non-quadratic fidelities. We first introduce the method and present an iterative algorithm for TV restoration with a quite general fidelity. In each iteration, three sub-problems need to be solved, two of which can be solved very efficiently via the Fast Fourier Transform (FFT) or in closed form. In general, the third sub-problem needs iterative solvers. We then apply our method to TV restoration with $L^1$ and Kullback-Leibler (KL) fidelities, two common and important data terms for deblurring images corrupted by impulsive noise and Poisson noise, respectively. For these typical fidelities, we show that the third sub-problem also has a closed-form solution and thus can be solved efficiently. In addition, convergence analysis of these algorithms is given. Numerical experiments demonstrate the efficiency of our method.
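The kind of closed-form sub-problem solve exploited for the $L^1$ fidelity can be sketched pointwise. For one pixel, the sub-problem min_z |z - f| + (r/2)(z - v)^2 is solved exactly by soft-shrinkage of v toward f; the values of `f`, `v`, and `r` below are toy choices, and the brute-force grid search is only a sanity check.

```python
# Pointwise L^1-fidelity sub-problem and its closed-form shrinkage solution.

def shrink(v, t):
    """Soft-shrinkage operator."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def l1_subproblem(f, v, r):
    """Exact minimizer of |z - f| + (r/2)(z - v)^2: z = f + shrink(v - f, 1/r)."""
    return f + shrink(v - f, 1.0 / r)

f, v, r = 0.3, 1.0, 4.0
z_closed = l1_subproblem(f, v, r)

# Brute-force check on a fine grid over [-2, 2].
grid = [i * 1e-4 for i in range(-20000, 20001)]
z_brute = min(grid, key=lambda z: abs(z - f) + 0.5 * r * (z - v) ** 2)
```

Because this solve is an O(1) formula per pixel, the third sub-problem costs no more than the FFT-based ones, which is what makes the overall splitting scheme fast.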
2011, 5(1): 263-284
doi: 10.3934/ipi.2011.5.263
Abstract:
We propose an inviscid model for nonrigid image registration in a particle framework, and derive the corresponding nonlinear partial differential equations for computing the spatial transformation. Our idea is to simulate the template image as a set of free particles moving toward the target positions under applied forces. Our model can accommodate both small and large deformations, with sharper edges and clearer texture achieved at lower computational cost. We demonstrate the performance of our model on a variety of images, including 2D and 3D, mono-modal and multi-modal images.
2019 Impact Factor: 1.373