Inverse Problems and Imaging

February 2016, Volume 10, Issue 1

On the choice of the Tikhonov regularization parameter and the discretization level: A discrepancy-based strategy
Vinicius Albani, Adriano De Cezaro and Jorge P. Zubelli
2016, 10(1): 1-25. doi: 10.3934/ipi.2016.10.1
We address the classical issue of the appropriate choice of the regularization parameter and the discretization level for the Tikhonov regularization of an inverse problem with imperfectly measured data. We focus on the fact that the proper choice of the discretization level in the domain, together with the regularization parameter, is a key feature of adequate regularization. We propose a discrepancy-based choice for these quantities by applying a relaxed version of Morozov's discrepancy principle, and we prove the existence of a discretization level and a regularization parameter satisfying this discrepancy criterion. We also prove associated regularizing properties of the Tikhonov minimizers. We conclude by presenting some numerical examples of interest.
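As a rough illustration of the discrepancy principle alone (a classical textbook sketch, not the authors' joint discretization-level analysis; all function names are hypothetical), one can choose the Tikhonov parameter by bisection so that the residual of the regularized solution matches a multiple of the noise level:

```python
import numpy as np

def tikhonov_solve(A, y, alpha):
    """Minimize ||A x - y||^2 + alpha ||x||^2 (classical Tikhonov)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_alpha(A, y, delta, tau=1.1, lo=1e-12, hi=1e2, iters=60):
    """Bisect on log(alpha) until ||A x_alpha - y|| ~ tau * delta
    (Morozov's discrepancy principle; the residual grows with alpha)."""
    target = tau * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)           # geometric midpoint
        r = np.linalg.norm(A @ tikhonov_solve(A, y, mid) - y)
        if r > target:
            hi = mid                     # over-regularized: shrink alpha
        else:
            lo = mid                     # under-regularized: grow alpha
    return np.sqrt(lo * hi)
```

The bisection relies on the residual being continuous and increasing in the regularization parameter, which holds for classical Tikhonov regularization.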
A fractional-order derivative based variational framework for image denoising
Fangfang Dong and Yunmei Chen
2016, 10(1): 27-50. doi: 10.3934/ipi.2016.10.27
In this paper, we propose a unified variational framework for noise removal that uses a combination of different orders of fractional derivatives in the regularization term of the objective function. The principle of the combination is to use derivatives of order two or higher to smooth homogeneous regions, and derivatives of fractional order less than or equal to one to smooth locations near edges. We also introduce a novel edge detector to better detect edges and textures. A main advantage of this framework is its superiority in dealing with textures and repetitive structures, as well as in eliminating the staircase effect. To solve the proposed model effectively, we extend the first-order primal-dual algorithm to minimize a functional involving fractional-order derivatives. A set of experiments demonstrates that the proposed method avoids the staircase effect and accurately preserves edges and structural details of the image while removing the noise.
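The primal-dual solver is beyond a short snippet, but the fractional derivative being combined can be sketched: the standard Grünwald-Letnikov discretization interpolates between first- and second-order differences as the order varies. A minimal 1-D sketch (a generic illustration, not the authors' implementation):

```python
import numpy as np

def gl_weights(alpha, K):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * binom(alpha, k),
    the usual discretization of a fractional derivative of order alpha."""
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_diff(u, alpha, K=20):
    """Discrete fractional derivative of a 1-D signal, zero-padded on the
    left: (D^alpha u)_i = sum_k w_k u_{i-k}."""
    w = gl_weights(alpha, min(K, len(u)))
    return np.convolve(u, w, mode='full')[:len(u)]
```

For order 1 this reduces to the backward difference and for order 2 to the usual second difference, so intermediate fractional orders blend the two behaviors.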
The topological gradient method for semi-linear problems and application to edge detection and noise removal
Audric Drogoul and Gilles Aubert
2016, 10(1): 51-86. doi: 10.3934/ipi.2016.10.51
The goal of this paper is to apply the topological gradient method to edge detection and noise removal for images degraded by various noises and blurs. The method, first applied to edge detection for images degraded by Gaussian noise, is extended here to blurred images contaminated by multiplicative gamma-law noise and to blurred Poissonian images. We compute, both for perforated and cracked domains, the topological gradient for each noise model. We then present an edge detection/restoration algorithm based on this notion and apply it to the two degradation models described above. We compare our method with classical variational approaches, such as the Mumford-Shah and TV restoration models, and with the classical Canny edge detector. Experimental results showing the efficiency, robustness, and speed of the method are presented.
Common midpoint versus common offset acquisition geometry in seismic imaging
Raluca Felea, Venkateswaran P. Krishnan, Clifford J. Nolan and Eric Todd Quinto
2016, 10(1): 87-102. doi: 10.3934/ipi.2016.10.87
We compare and contrast the qualitative nature of backprojected images obtained in seismic imaging when common offset data are used versus when common midpoint data are used. Our results show that the image obtained using common midpoint data contains artifacts which are not present with common offset data. Although there are situations where one would still want to use common midpoint data, this result points out a shortcoming that should be kept in mind when interpreting the images.
Factorization method in inverse interaction problems with bi-periodic interfaces between acoustic and elastic waves
Guanghui Hu, Andreas Kirsch and Tao Yin
2016, 10(1): 103-129. doi: 10.3934/ipi.2016.10.103
Consider a time-harmonic acoustic wave incident onto a doubly periodic (biperiodic) interface from a homogeneous compressible inviscid fluid. The region below the interface is assumed to be an isotropic linearly elastic solid. This paper is concerned with the inverse fluid-solid interaction (FSI) problem of recovering the unbounded periodic interface separating the fluid and solid. We provide a theoretical justification of the factorization method for precisely characterizing the region occupied by the elastic solid by utilizing the scattered acoustic waves measured in the fluid. A computational criterion and a uniqueness result are presented with infinitely many incident acoustic waves having common quasiperiodicity parameters. Numerical examples in 2D demonstrate the validity and accuracy of the inversion algorithm.
The enclosure method for inverse obstacle scattering using a single electromagnetic wave in time domain
Masaru Ikehata
2016, 10(1): 131-163. doi: 10.3934/ipi.2016.10.131
In this paper, a time domain enclosure method for an inverse obstacle scattering problem of electromagnetic waves is introduced. The wave, as a solution of Maxwell's equations, is generated by an applied volumetric current having an orientation and supported outside an unknown obstacle, and is observed on the same support over a finite time interval. It is assumed that the obstacle is a perfect conductor. Two types of analytical formulae, which employ a single observed wave and explicitly contain information about the geometry of the obstacle, are given. In particular, an effect of the orientation of the current is captured in one of the two formulae. Two corollaries, concerning the detection of the points on the surface of the obstacle nearest to the centre of the current support and the curvatures at those points, are also given.
A divide-alternate-and-conquer approach for localization and shape identification of multiple scatterers in heterogeneous media using dynamic XFEM
Jaedal Jung and Ertugrul Taciroglu
2016, 10(1): 165-193. doi: 10.3934/ipi.2016.10.165
A numerical method for localization and identification of multiple arbitrarily-shaped scatterers (cracks, voids, or inclusions) embedded within heterogeneous linear elastic media is described. An elastodynamic implementation of the extended finite element method (XFEM), which is endowed with a spline-based parameterization of the scatterer boundaries, is employed to solve the forward (wave propagation) problem. This particular combination enables direct, sensitivity-based, and computationally efficient manipulation of the scatterers' boundaries over a stationary background mesh during the inversion process. The inverse problem is cast as a formal optimization problem whereby the discrepancy between the measured wave responses and those from the estimated scatterers is minimized. The solution is achieved through a gradient-based procedure that is steered by a divide-alternate-and-conquer strategy. The divide-and-conquer segment of the search algorithm seeks the global minimizer among potentially multiple solutions, whereas the alternate-and-conquer segment adaptively refines the shapes of identified scatterers. The results of several synthetic experiments with various types of scatterers are provided. These experiments verify the overall approach, and demonstrate that it is robust, accurate, and effective even at high levels of measurement noise.
Preconditioned conjugate gradient method for boundary artifact-free image deblurring
Nam-Yong Lee and Bradley J. Lucier
2016, 10(1): 195-225. doi: 10.3934/ipi.2016.10.195
Several methods have been proposed to reduce boundary artifacts in image deblurring. Some of those methods impose certain assumptions on image pixels outside the field-of-view; the most important of these assume reflective or anti-reflective boundary conditions. Boundary condition methods, including reflective and anti-reflective ones, however, often fail to reduce boundary artifacts, and, in some cases, generate their own artifacts, especially when the image to be deblurred does not accurately satisfy the imposed condition. To overcome these difficulties, we suggest using free boundary conditions, which do not impose any restrictions on image pixels outside the field-of-view, together with preconditioned conjugate gradient methods, where the preconditioners are designed to compensate for the non-uniformity in contributions from image pixels to the observation. Our simulation studies show that the proposed method outperforms reflective and anti-reflective boundary condition methods in removing boundary artifacts. The simulation studies also show that the proposed method is applicable to arbitrarily shaped images and has the benefit of recovering damaged parts in blurred images.
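The idea can be sketched schematically in 1-D (hypothetical names; the paper works in 2-D with more carefully designed preconditioners): a "valid" convolution imposes no condition on pixels outside the field of view, and a diagonal preconditioner built from each pixel's total contribution to the data reweights boundary pixels, which appear in fewer observations.

```python
import numpy as np

def blur(x, h):
    """'Valid' convolution: each observation uses only pixels inside the
    kernel support, so nothing is assumed outside the field of view."""
    return np.convolve(x, h, mode='valid')

def blur_T(r, h):
    """Adjoint of the valid convolution (zero-padded correlation)."""
    return np.convolve(r, h[::-1], mode='full')

def cg_deblur(y, h, n_iters=300, reg=1e-10):
    """Preconditioned CG on the (slightly regularized) normal equations."""
    n = len(y) + len(h) - 1          # unknown extends past the field of view
    # Diagonal preconditioner: total squared contribution of each pixel to
    # the data; boundary pixels contribute less and are reweighted upward.
    d = blur_T(np.ones_like(y), h * h) + reg
    x = np.zeros(n)
    r = blur_T(y, h)                 # residual b - M x for x = 0
    z = r / d
    p = z.copy()
    rz = r @ z
    for _ in range(n_iters):
        Mp = blur_T(blur(p, h), h) + reg * p
        a = rz / (p @ Mp)
        x += a * p
        r -= a * Mp
        z = r / d
        rz_new = r @ z
        if np.sqrt(rz_new) < 1e-13:
            break
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Note that the recovered signal is longer than the observation: the method also estimates the pixels just outside the field of view instead of assuming values for them.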
Approximate marginalization of absorption and scattering in fluorescence diffuse optical tomography
Meghdoot Mozumder, Tanja Tarvainen, Simon Arridge, Jari P. Kaipio, Cosimo D'Andrea and Ville Kolehmainen
2016, 10(1): 227-246. doi: 10.3934/ipi.2016.10.227
In fluorescence diffuse optical tomography (fDOT), the reconstruction of the fluorophore concentration inside the target body is usually carried out using a normalized Born approximation model, where the measured fluorescent emission data are scaled by the measured excitation data. One of the benefits of the model is that it can, to some extent, tolerate inaccuracy in the absorption and scattering distributions used in the construction of the forward model. In this paper, we employ the recently proposed Bayesian approximation error approach to fDOT, in combination with the normalized Born approximation model, to compensate for the modeling errors caused by the inaccurately known optical properties of the target. The approach is evaluated using a simulated test case with different amounts of error in the optical properties. The results show that the Bayesian approximation error approach improves the tolerance of fDOT imaging against modeling errors caused by inaccurately known absorption and scattering of the target.
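The core of the Bayesian approximation error approach can be sketched on a generic linear-Gaussian toy problem (hypothetical names; not the fDOT model): estimate the mean and covariance of the modeling error e(x) = accurate(x) - approximate(x) from samples drawn from the prior, then fold those statistics into the noise model.

```python
import numpy as np

def approx_error_stats(accurate_fwd, approx_fwd, prior_samples):
    """Sample mean and covariance of the modeling error over prior draws."""
    E = np.array([accurate_fwd(x) - approx_fwd(x) for x in prior_samples])
    return E.mean(axis=0), np.cov(E, rowvar=False)

def bae_estimate(y, A, noise_cov, e_mean, e_cov, prior_cov):
    """MAP estimate using the approximate linear model A, a zero-mean
    Gaussian prior, and the inflated error covariance noise_cov + e_cov."""
    gamma = noise_cov + e_cov
    K = prior_cov @ A.T @ np.linalg.inv(A @ prior_cov @ A.T + gamma)
    return K @ (y - e_mean)
```

Shifting the data by the error mean and inflating the covariance is what lets the inversion use the cheap, inaccurate forward model without the modeling error biasing the estimate.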
A partial data result for less regular conductivities in admissible geometries
Casey Rodriguez
2016, 10(1): 247-262. doi: 10.3934/ipi.2016.10.247
We consider the Calderón problem with partial data in certain admissible geometries, that is, on compact Riemannian manifolds with boundary which are conformally embedded in a product of the Euclidean line and a simple manifold. We show that measuring the Dirichlet-to-Neumann map on roughly half of the boundary determines a conductivity that has essentially 3/2 derivatives. As a corollary, we strengthen a partial data result due to Kenig, Sjöstrand, and Uhlmann.
The factorization method for a partially coated cavity in inverse scattering
Qinghua Wu and Guozheng Yan
2016, 10(1): 263-279. doi: 10.3934/ipi.2016.10.263
We consider the interior inverse scattering problem of recovering the shape of an impenetrable partially coated cavity. The scattered fields incited by point source waves are measured on a closed curve inside the cavity. We prove the validity of the factorization method for reconstructing the shape of the cavity. However, we are not able to apply the basic theorem introduced by Kirsch and Grinberg to treat the key operator directly, and some auxiliary operators have to be considered. In this paper, we provide theoretical validation of the factorization method to the problem, and some numerical results are presented to show the viability of our method.
