
Inverse Problems & Imaging
October 2018, Volume 12, Issue 5
ISSN: 1930-8337; eISSN: 1930-8345
This paper proposes a stable numerical implementation of the Navier-Stokes equations for fluid image registration, based on a finite volume scheme. Although fluid registration methods have succeeded in handling large deformations in various applications, they still suffer from perturbed solutions caused by the choice of numerical implementation. A robust numerical scheme in the optimization step is therefore required to improve the quality of the registration. A key challenge is the use of a finite volume-based scheme, since the equations to be solved are of hyperbolic type. We adopt the classical Patankar scheme based on pressure correction, known as the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The performance of the proposed algorithm was tested on magnetic resonance images of the human brain and hands, and compared with the classical implementation of fluid image registration [
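The pressure-correction structure of SIMPLE that the abstract refers to can be illustrated with a toy solver. The sketch below applies the loop (momentum predictor at frozen pressure, a Poisson-type solve for a pressure correction driven by the predictor's divergence, then an under-relaxed pressure update with a full velocity correction) to steady Stokes flow on a periodic grid using a spectral discretization. The paper's actual method uses a finite-volume discretization of the Navier-Stokes equations; the function name `simple_stokes` and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simple_stokes(fx, fy, nu=1.0, alpha_p=0.5, n_outer=60):
    """SIMPLE-style pressure-correction loop for steady Stokes flow
    (-nu*Lap u + grad p = f, div u = 0) on a periodic unit square.
    Each outer iteration: (1) momentum predictor at frozen pressure,
    (2) solve for the pressure correction from the predictor's divergence,
    (3) under-relaxed pressure update and full velocity correction."""
    ny, nx = fx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=1.0 / nx)[None, :]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=1.0 / ny)[:, None]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid 0/0; mean modes are zeroed below
    Fx, Fy = np.fft.fft2(fx), np.fft.fft2(fy)
    Fx[0, 0] = Fy[0, 0] = 0.0           # solvability: zero-mean forcing
    P = np.zeros_like(Fx)
    for _ in range(n_outer):
        # (1) momentum predictor with the current pressure
        Us = (Fx - 1j * kx * P) / (nu * k2)
        Vs = (Fy - 1j * ky * P) / (nu * k2)
        Us[0, 0] = Vs[0, 0] = 0.0
        # (2) pressure correction driven by the predictor's divergence
        Pc = -nu * (1j * kx * Us + 1j * ky * Vs)
        # (3) under-relaxed pressure update, full velocity correction
        P = P + alpha_p * Pc
        U = Us - 1j * kx * Pc / (nu * k2)
        V = Vs - 1j * ky * Pc / (nu * k2)
    return (np.real(np.fft.ifft2(U)),
            np.real(np.fft.ifft2(V)),
            np.real(np.fft.ifft2(P)))
```

The under-relaxation factor `alpha_p` plays the same stabilizing role as in Patankar's original scheme: the pressure is only partially updated each outer iteration, while the velocity receives the full correction so that it is discretely divergence-free after every pass.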
Gaussian random fields over infinite-dimensional Hilbert spaces require the definition of appropriate covariance operators. Using elliptic PDE operators to construct covariance operators makes it possible to build on fast PDE solvers for manipulating the resulting covariance and precision operators. However, PDE operators require a choice of boundary conditions, and this choice can have a strong and usually undesired influence on the Gaussian random field. We propose two techniques that ameliorate these boundary effects for large-scale problems. The first approach combines the elliptic PDE operator with a Robin boundary condition, where a spatially varying Robin coefficient is computed from an optimization problem. The second approach normalizes the pointwise variance by rescaling the covariance operator. These approaches can be used individually or in combination. We study properties of these approaches and discuss their computational complexity. The performance of our approaches is studied for random fields defined over simple and complex two- and three-dimensional domains.
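The second approach (variance normalization) is straightforward to sketch in a small 1-D analogue. Assuming a Whittle-Matérn-type construction C = (δI − γΔ)⁻² with a Neumann-type finite-difference Laplacian, the kind of operator for which the boundary condition inflates the pointwise variance near the boundary, rescaling by the pointwise standard deviation restores constant unit variance. Function names, the discretization, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def matern_like_covariance(n, gamma=1.0, delta=10.0):
    """1-D sketch: covariance C = (delta*I - gamma*Lap)^{-2} with a
    Neumann-type finite-difference Laplacian on [0, 1], a common
    Whittle-Matern-style PDE construction of a covariance operator."""
    h = 1.0 / (n - 1)
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0
        if i > 0:
            L[i, i - 1] = 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
    L[0, 0] = L[-1, -1] = -1.0          # Neumann-type boundary rows
    L /= h**2
    A = delta * np.eye(n) - gamma * L
    return np.linalg.inv(A @ A)         # C = A^{-2}

def rescale_to_unit_variance(C):
    """Variance normalization: rescale C so that its diagonal
    (the pointwise variance of the field) is identically one."""
    d = 1.0 / np.sqrt(np.diag(C))
    return C * np.outer(d, d)
```

For large-scale problems one would of course never form the dense inverse; the point of the sketch is only that the rescaling is a symmetric diagonal congruence, which preserves positive definiteness of the covariance.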
The regularization approach is used widely in image restoration problems. The visual quality of the restored image depends highly on the regularization parameter. In this paper, we develop an automatic way to choose a good regularization parameter for total variation (TV) image restoration problems. It is based on the generalized cross validation (GCV) approach, and hence no knowledge of the noise variance is required. Because the TV regularization problem has no closed-form solution, the minimizer of the GCV function cannot be found directly. We therefore reformulate the TV regularization problem as a minimax problem and apply a first-order primal-dual method to solve it. The primal subproblem is rearranged so that it becomes a special Tikhonov regularization problem, for which the minimizer of the GCV function is readily computable. Hence we can determine the best regularization parameter in each iteration of the primal-dual method. The regularization parameter for the original TV regularization problem is then obtained by an averaging scheme. In essence, our method needs to solve the TV regularization problem only twice: once to determine the regularization parameter and once to restore the image with that parameter. Numerical results show that our method gives a near-optimal parameter and excellent performance when compared with other state-of-the-art adaptive image restoration algorithms.
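For the special Tikhonov subproblem mentioned above, the GCV function has a cheap closed form through the SVD, which is what makes its minimizer "readily computable". A minimal sketch of this building block follows; it uses a grid search over λ rather than the paper's primal-dual machinery, and the function name is an assumption.

```python
import numpy as np

def gcv_tikhonov(A, b, lams=None):
    """Choose the Tikhonov parameter lam for min ||Ax-b||^2 + lam*||x||^2
    by minimizing the GCV function
        GCV(lam) = ||A x_lam - b||^2 / trace(I - A_lam)^2,
    where A_lam = A (A^T A + lam I)^{-1} A^T, evaluated via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = b.size
    extra = b @ b - beta @ beta          # ||(I - U U^T) b||^2
    if lams is None:
        lams = np.logspace(-8, 4, 200)
    vals = []
    for lam in lams:
        f = s**2 / (s**2 + lam)          # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + extra
        vals.append(resid / (m - np.sum(f)) ** 2)
    lam_best = lams[int(np.argmin(vals))]
    x_best = Vt.T @ (s * beta / (s**2 + lam_best))
    return lam_best, x_best
```

Note that the GCV value needs no estimate of the noise variance, which is exactly the property the abstract emphasizes.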
This paper discusses the properties of certain risk estimators that have recently regained popularity for choosing regularization parameters in ill-posed problems, in particular for sparsity regularization. They apply Stein's unbiased risk estimator (SURE) to estimate the risk either in the space of the unknown variables or in the data space; we call the latter PSURE to distinguish the two risk functions. It seems intuitive that SURE is more appropriate for ill-posed problems, since properties in the data space say little about the quality of the reconstruction. We provide theoretical studies of both approaches for linear Tikhonov regularization in a finite-dimensional setting and estimate the quality of the risk estimators, which also leads to asymptotic convergence results as the dimension of the problem tends to infinity. Unlike previous works, which studied single realizations of image processing problems with a very low degree of ill-posedness, we are interested in the statistical behaviour of the risk estimators for increasing ill-posedness. Interestingly, our theoretical results indicate that the quality of the SURE risk can deteriorate asymptotically for ill-posed problems, which is confirmed by an extensive numerical study. The latter shows that in many cases the SURE estimator leads to extremely small regularization parameters, which obviously cannot stabilize the reconstruction. Similar but less severe robustness issues also appear for the PSURE estimator; in comparison with the rather conservative discrepancy principle, this leads to the conclusion that regularization parameter choice based on unbiased risk estimation is not a reliable procedure for ill-posed problems. A similar numerical study for sparsity regularization demonstrates that the same issue appears in non-linear variational regularization approaches.
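As a point of reference for the data-space estimator, PSURE for linear Tikhonov regularization can be written down explicitly: PSURE(λ) = ‖Ax_λ − b‖² − mσ² + 2σ²·df(λ), where df(λ) = Σᵢ sᵢ²/(sᵢ² + λ) are the effective degrees of freedom from the SVD of A. A hedged sketch in the finite-dimensional linear setting the paper studies (the function name and parameter grid are assumptions):

```python
import numpy as np

def psure_tikhonov(A, b, sigma2, lams):
    """Predictive SURE for Tikhonov regularization:
        PSURE(lam) = ||A x_lam - b||^2 - m*sigma^2 + 2*sigma^2*df(lam),
    an unbiased estimate of the prediction risk E||A x_lam - A x_true||^2,
    with df(lam) = sum_i s_i^2/(s_i^2 + lam) computed via the SVD."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = b.size
    extra = b @ b - beta @ beta          # residual component outside range(A)
    out = []
    for lam in np.atleast_1d(lams):
        f = s**2 / (s**2 + lam)          # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + extra
        out.append(resid - m * sigma2 + 2.0 * sigma2 * np.sum(f))
    return np.array(out)
```

Unlike GCV, this estimator requires the noise variance σ²; the paper's point is that minimizing such risk estimates can nevertheless behave poorly as the ill-posedness increases.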
Reconstruction of the seismic wavefield from sub-sampled data is an important problem in seismic image processing, partly because limitations of the observation process usually yield incomplete data. In essence, this is an ill-posed inverse problem, to which different kinds of regularization techniques can be applied. In this paper, we consider a novel regularization model, called the
An inverse obstacle problem for the wave equation in a two-layered medium is considered. The unknown obstacle is assumed to be penetrable and embedded in the lower half-space. The wave, a solution of the wave equation, is generated by initial data whose support lies in the upper half-space, and is observed at the same place as that support over a finite time interval. From the observed wave, an indicator function in the time-domain enclosure method is constructed. It is shown that one can extract some information about the geometry of the obstacle, together with its qualitative properties, from the asymptotic behavior of the indicator function.
Retinex theory deals with compensating for illumination effects in images, which has a number of applications, including the Retinex illusion, intensity inhomogeneity in medical images, and shadow effects in color images. This ill-posed problem has been studied by researchers for decades. However, most existing methods pay little attention to the noise contained in the images and lose effectiveness as the noise increases. The main aim of this paper is to present a general Retinex model that effectively and robustly restores images degraded by both illumination effects and noise. We propose a novel variational model that incorporates appropriate regularization for the reflectance component and the illumination component. Although the proposed model is non-convex, we prove the existence of minimizers theoretically. Furthermore, we design a fast and efficient alternating minimization algorithm for the proposed model, in which all subproblems have closed-form solutions. Applications of the algorithm to various gray and color images with noise of different distributions yield promising results.
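The alternating-minimization structure with closed-form subproblems can be illustrated on a simple quadratic surrogate of the variational Retinex model: split the log-image s into a smooth illumination l and a detailed reflectance r by alternating two quadratic problems, each solvable in closed form in the Fourier domain. The paper's actual model uses different, partly non-convex regularizers; the function name, the quadratic penalties, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def retinex_quadratic(S, alpha=100.0, beta=0.1, iters=20):
    """Alternating minimization for a quadratic log-domain Retinex surrogate:
    split the log-image s into smooth illumination l and detailed reflectance r
    by alternating
        l = argmin 0.5*||r + l - s||^2 + (alpha/2)*||grad l||^2,
        r = argmin 0.5*||r + l - s||^2 + (beta/2) *||grad r||^2,
    each of which has a closed-form solution in the Fourier domain."""
    s = np.log1p(S.astype(float))
    ny, nx = s.shape
    # eigenvalues of the negative periodic Laplacian
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    lam = wy[:, None] + wx[None, :]
    r = np.zeros_like(s)
    for _ in range(iters):
        l = np.real(np.fft.ifft2(np.fft.fft2(s - r) / (1.0 + alpha * lam)))
        r = np.real(np.fft.ifft2(np.fft.fft2(s - l) / (1.0 + beta * lam)))
    return np.expm1(r), np.expm1(l)   # reflectance, illumination (from log domain)
```

The large smoothness weight on l and the small one on r encode the usual Retinex prior that illumination varies slowly while reflectance carries the detail.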
The composite
We study the X-ray transform
2019 Impact Factor: 1.373