ISSN: 1930-8337

eISSN: 1930-8345

## Inverse Problems and Imaging

Open Access Articles

**Abstract:**

This paper is concerned with a new type of inverse obstacle problem governed by a variable-order time-fractional diffusion equation in a bounded domain. The unknown obstacle is a region where the space-dependent variable order of the fractional time derivative of the governing equation deviates from a known, homogeneous background one. The observation data are given by the Neumann data of the solution of the governing equation for specially designed Dirichlet data. Under a suitable jump condition on the deviation, it is shown that the most recent version of *the time domain enclosure method* enables one to extract, from the observation data, information about the geometry of the obstacle and the qualitative nature of the jump.

**Abstract:**

We consider the unique determinations of impenetrable obstacles or diffraction grating profiles in

**Abstract:**

Image registration has been widely studied over the past several decades, with numerous applications in science, engineering and medicine. Most of the conventional mathematical models for large deformation image registration rely on prescribed landmarks, which usually require tedious manual labeling. In recent years, there has been a surge of interest in the use of machine learning for image registration. In this paper, we develop a novel method for large deformation image registration by a fusion of quasiconformal theory and convolutional neural network (CNN). More specifically, we propose a quasiconformal energy model with a novel fidelity term that incorporates the features extracted using a pre-trained CNN, thereby allowing us to obtain meaningful registration results without any guidance of prescribed landmarks. Moreover, unlike many prior image registration methods, the bijectivity of our method is guaranteed by quasiconformal theory. Experimental results are presented to demonstrate the effectiveness of the proposed method. More broadly, our work sheds light on how rigorous mathematical theories and practical machine learning approaches can be integrated for developing computational methods with improved performance.
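The bijectivity guarantee above rests on quasiconformal theory: a deformation is locally bijective and orientation-preserving wherever the modulus of its Beltrami coefficient stays below 1. As an illustration only (not the paper's implementation; all function and variable names here are ours), the following sketch estimates the Beltrami coefficient of a planar map with finite differences:

```python
import numpy as np

def beltrami_coefficient(u, v, h=1.0):
    """Estimate the Beltrami coefficient mu = f_zbar / f_z of the planar
    map f = u + i*v with finite differences.  |mu| < 1 everywhere
    certifies a locally bijective, orientation-preserving deformation."""
    uy, ux = np.gradient(u, h)   # np.gradient returns d/drow, d/dcol
    vy, vx = np.gradient(v, h)
    fx = ux + 1j * vx
    fy = uy + 1j * vy
    fz = 0.5 * (fx - 1j * fy)
    fzbar = 0.5 * (fx + 1j * fy)
    return fzbar / fz

# Sanity check: the identity map has mu = 0 everywhere.
n = 32
y, x = np.mgrid[0:n, 0:n].astype(float)
mu = beltrami_coefficient(x, y)
print(np.abs(mu).max())
```

For a registration map obtained numerically, checking `np.abs(mu).max() < 1` on the grid is a cheap certificate of local bijectivity.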

**Abstract:**

In recent years multi-modal data processing methods have gained considerable research interest as technological advancements in imaging, computing, and data storage have made the collection of redundant, multi-modal data more commonplace. In this work we present an image restoration method tailored for scenarios where pre-existing, high-quality images from different modalities or contrasts are available in addition to the target image. Our method is based on a novel network architecture which combines the benefits of traditional multi-scale signal representation, such as wavelets, with more recent concepts from data fusion methods. Results from numerical simulations in which T1-weighted MRI images are used to restore noisy and undersampled T2-weighted images demonstrate that the proposed network successfully utilizes information from high-quality reference images to improve the restoration quality of the target image beyond that of existing popular methods.

**Abstract:**

A primary interest in dynamic inverse problems is to identify the underlying temporal behaviour of the system from outside measurements. In this work, we consider the case where the target can be represented by a decomposition of spatial and temporal basis functions and hence admits an efficient low-rank representation. We then propose a joint reconstruction and low-rank decomposition method based on nonnegative matrix factorisation (NMF) to obtain the unknown from highly undersampled dynamic measurement data. The proposed framework allows for flexible incorporation of separate regularisers for spatial and temporal features. For the special case of a stationary operator, we can effectively use the decomposition to reduce the computational complexity and obtain a substantial speed-up. The proposed methods are evaluated on three simulated phantoms, and we compare the obtained results to a separate low-rank reconstruction and subsequent decomposition approach based on the widely used principal component analysis.
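For background on the decomposition step, here is a minimal sketch of plain NMF via the classical Lee-Seung multiplicative updates. This is the generic factorisation, not the paper's joint reconstruction scheme, and the names and parameters are illustrative:

```python
import numpy as np

def nmf(V, rank, iters=300, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0;
    each update preserves nonnegativity of the factors and does not
    increase the Frobenius misfit ||V - W @ H||_F."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# An exactly rank-2 nonnegative matrix is recovered to high accuracy.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err)
```

In the dynamic-imaging setting, the columns of `W` play the role of spatial basis functions and the rows of `H` the temporal ones.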

**Abstract:**

Limited-angle reconstruction is a very important but challenging problem in the field of computed tomography (CT) which has been extensively studied for many years. However, some difficulties still remain. Based on the theory of visible and invisible boundaries developed by Quinto et al., we propose a reconstruction model for limited-angle CT which encodes the visible edges as priors to recover the invisible ones. The new model utilizes generalized shrinkage operators as regularizers to perform edge-preserving smoothing, so that the visible edges serve as anchors for recovering piecewise-constant or piecewise-smooth reconstructions while noise and artifacts are suppressed or removed. This work extends our previous research on limited-angle reconstruction which employs gradient
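The generalized shrinkage operators mentioned above reduce, in their simplest instance, to classical soft-shrinkage, the proximal operator of the ℓ1 penalty. A minimal illustration in our own notation (not the paper's generalized operator):

```python
import numpy as np

def soft_shrink(x, t):
    """Soft-shrinkage, the proximal operator of t*||x||_1: entries within
    the threshold t are set to zero, the rest are moved toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Small entries vanish; large ones shrink by the threshold.
print(soft_shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0))
```

Applied to gradient coefficients, this removes small (noise-like) variations while only biasing strong edges, which is what makes it an edge-preserving smoother.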

**Abstract:**

In this paper, we propose a novel scheme for single-image super-resolution (SR) reconstruction. First, we construct a new self-similarity framework by regarding low-resolution (LR) images as low-rank versions of the corresponding high-resolution (HR) images. Nuclear norm minimization (NNM) is then employed to generate LR image pyramids from the HR ones. The structure of our framework is beneficial for extracting LR features: we regard the quotient image, computed between the HR and LR images at the same layer, as the LR feature. This LR feature has the same dimension as the LR image, whereas the commonly used gradient feature has four times the dimension of the LR image. In addition, we employ nonlocal similar patches, within the same scale and across different scales, to generate HR and LR dictionaries. During encoding, codes are computed from both the rows and columns of the LR dictionary for each LR patch; at the same time, low-rank and sparse constraints on the code matrix help remove coding noise. Finally, both quantitative and perceptual results demonstrate that our proposed method achieves good SR performance.
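The nuclear norm minimization step can be illustrated by its proximal operator, singular value thresholding, which soft-shrinks every singular value by a fixed amount. A generic sketch of that operator in our own notation, not the paper's pipeline:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding, the proximal operator of the nuclear
    norm tau*||X||_*: soft-shrink each singular value of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
Y = svt(X, tau=1.0)
sX = np.linalg.svd(X, compute_uv=False)
sY = np.linalg.svd(Y, compute_uv=False)
print(sX)
print(sY)  # each singular value reduced by tau, floored at zero
```

Because small singular values are zeroed, the output is a lower-rank surrogate of the input, which is the mechanism used here to generate LR versions of HR images.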

**Abstract:**

In this paper, we present a novel fast fractional-order anisotropic diffusion algorithm for noise removal, based on the recent numerical scheme called Fast Explicit Diffusion. To balance the efficiency and accuracy of the algorithm, a truncated-matrix method is used to handle the iteration matrix of the model, and its error is estimated. In particular, we obtain the stability condition of the iteration by spectral analysis. By implementing the fast explicit iteration with periodically varying time-step sizes, the efficiency of the algorithm is greatly improved. Finally, we show numerical results on denoising tasks. Extensive experiments confirm that the algorithm reaches satisfactory denoising results more quickly.
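For orientation, the integer-order explicit diffusion iteration that such schemes accelerate can be sketched as follows. This is plain linear diffusion with a constant stable step, not the paper's fractional-order fast scheme, and all names are ours:

```python
import numpy as np

def explicit_diffusion(u, steps=15, tau=0.2):
    """Explicit linear diffusion: u <- u + tau * Laplacian(u), with the
    5-point Laplacian and reflecting boundaries; tau <= 0.25 keeps this
    integer-order explicit scheme stable."""
    u = u.astype(float).copy()
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
        u += tau * lap
    return u

# Denoise a noisy square phantom and compare errors against the clean image.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
den = explicit_diffusion(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```

The stability bound on `tau` is what makes plain explicit schemes slow; Fast Explicit Diffusion cycles through varying step sizes, some exceeding that bound, so that the cycle as a whole remains stable while covering more diffusion time per iteration.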

**Abstract:**

We develop new efficient algorithms for a class of inverse problems in gravimetry to recover an anomalous volume mass distribution (measure): we design fast local level-set methods to simultaneously reconstruct both the unknown domain and the varying density of the anomalous measure from the modulus of the gravity force, rather than from the gravity force itself. The equivalent-source principle of the gravitational potential forces us to consider only measures of the form

**Abstract:**

Existing image restoration methods mostly make full use of various kinds of image prior information. However, they rarely exploit the potential of residual histograms, especially their role as an ensemble regularization constraint. In this paper, we propose a residual Wasserstein regularization model (RWRM), in which a residual histogram constraint is embedded into a class of variational minimization problems. Specifically, utilizing the Wasserstein distance from optimal transport theory, this scheme enforces the observed image residual histogram to be as close as possible to the reference residual histogram. Furthermore, the RWRM unifies residual Wasserstein regularization and image prior regularization to improve image restoration performance. The robustness of parameter selection in the RWRM makes the proposed algorithms easy to implement. Finally, extensive experiments confirm that our RWRM is effective for Gaussian denoising and non-blind deconvolution.
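The Wasserstein distance between histograms has a particularly simple form in one dimension, where the optimal transport plan between two equal-size samples just matches sorted values. A minimal illustration in our own notation:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-size 1-D samples: in one
    dimension the optimal transport plan matches sorted values, so the
    distance is the mean absolute gap between order statistics."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])
print(wasserstein_1d(a, b))  # shifting every sample by 1 costs exactly 1.0
```

Unlike bin-by-bin comparisons, this distance stays small when one residual histogram is a slight shift of the other, which is what makes it a forgiving histogram constraint.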

**Abstract:**

Motivated by the characteristics of images degraded by multiplicative noise, we propose gray-level indicators for constructing an adaptive total variation. Based on the new regularization term, we propose a new convex adaptive variational model and then establish the existence, uniqueness and a comparison principle for the minimizer of the functional. A finite difference method with a rescaling technique and a primal-dual method with adaptive step size are used to solve the minimization problem. The paper ends with a report on numerical tests for the denoising of images subject to multiplicative noise; a comparison with other methods is provided as well.

**Abstract:**

X-ray imaging of the lower limb is the most commonly used modality in clinical studies, and segmentation of the femur and tibia in an X-ray image is helpful for many medical tasks such as diagnosis, surgery and treatment. In this paper, we propose a new approach based on a purely dilated residual U-Net for segmentation of the femur and tibia. The proposed approach employs dilated convolutions throughout to increase the receptive field, thereby making full use of their advantages. We conducted experiments and evaluations on datasets provided by Tianjin Hospital. Compared with the classical U-Net and FusionNet, our method has fewer parameters, higher accuracy, and converges more rapidly, demonstrating its high performance.
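The receptive-field gain from dilation can be checked with the standard receptive-field recursion for stacked convolutions. A small sketch with illustrative layer settings of our own choosing (not the architecture from the paper):

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers, each given as
    (kernel_size, dilation, stride), via the standard recursion
    r += (kernel_size - 1) * dilation * jump, jump *= stride."""
    r, jump = 1, 1
    for k, d, s in layers:
        r += (k - 1) * d * jump
        jump *= s
    return r

# Three 3x3 stride-1 layers with dilations 1, 2, 4 see 15 pixels,
# versus 7 for the same stack without dilation.
print(receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)]))  # → 15
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)]))  # → 7
```

Exponentially growing dilations thus widen the receptive field without adding parameters or downsampling, which is the advantage exploited above.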

**Abstract:**

This paper proposes LANTERN, a method to learn an analysis transform network for dynamic magnetic resonance imaging. Integrating the strengths of CS-MRI and deep learning, the proposed framework is highlighted by three components: (ⅰ) the spatial and temporal domains are sparsely constrained by adaptively trained convolutional filters; (ⅱ) we introduce an end-to-end framework to learn the parameters in LANTERN, resolving the difficulty of parameter selection in traditional methods; (ⅲ) compared to existing deep learning reconstruction methods, our experimental results show encouraging capability in exploiting the spatial and temporal redundancy of dynamic MR images. We performed quantitative and qualitative analyses of cardiac reconstructions at different acceleration factors (

**Abstract:**

Retinex theory was introduced to explain how the human visual system perceives color and illumination effects such as Retinex illusions, intensity inhomogeneity in medical images and color shadow effects. Many researchers have studied this ill-posed problem within the framework of variational energy functionals for decades. However, to the best of our knowledge, the existing models via the sparsity of the image based on the nonconvex

**Abstract:**

Multi-energy CT takes advantage of the non-linearly varying attenuation properties of elemental media with respect to energy, enabling more precise material identification than single-energy CT. The increased precision comes with the cost of a higher radiation dose. A straightforward way to lower the dose is to reduce the number of projections per energy, but this makes tomographic reconstruction more ill-posed. In this paper, we propose how this problem can be overcome with a combination of a regularization method that promotes structural similarity between images at different energies and a suitably selected low-dose data acquisition protocol using non-overlapping projections. The performance of various joint regularization models is assessed with both simulated and experimental data, using the novel low-dose data acquisition protocol. Three of the models are well-established, namely the joint total variation, the linear parallel level sets and the spectral smoothness promoting regularization models. Furthermore, one new joint regularization model is introduced for multi-energy CT: a regularization based on the structure function from the structural similarity index. The findings show that joint regularization outperforms individual channel-by-channel reconstruction. Furthermore, the proposed combination of joint reconstruction and non-overlapping projection geometry enables significant reduction of radiation dose.
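Of the joint regularizers compared above, joint total variation is the simplest: channel gradients are coupled inside one pixelwise norm, so edges at shared locations across energies cost less than misaligned ones. A minimal sketch of the functional in our own notation:

```python
import numpy as np

def joint_tv(channels):
    """Joint (vectorial) total variation: forward-difference gradients of
    all channels are coupled inside one pixelwise Euclidean norm, which
    favours edges occurring at shared locations across channels."""
    gx = np.stack([np.diff(u, axis=1, append=u[:, -1:]) for u in channels])
    gy = np.stack([np.diff(u, axis=0, append=u[-1:, :]) for u in channels])
    return np.sum(np.sqrt(np.sum(gx ** 2 + gy ** 2, axis=0)))

u = np.zeros((16, 16)); u[:, 8:] = 1.0
# Two channels with a perfectly aligned edge: joint TV equals
# sqrt(2) * TV(u), cheaper than the sum of the separate TVs, 2 * TV(u).
print(joint_tv([u, u]), joint_tv([u]) + joint_tv([u]))
```

Minimizing such a coupled penalty therefore pushes reconstructions at different energies toward structurally similar edge sets, which is how joint regularization compensates for fewer projections per energy.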

**Abstract:**

Low-rank approximation has been extensively studied in the past. It is most suitable for reproducing rectangular-like structures in the data. In this work we introduce a generalization using "shifted" rank-

These kinds of shifts naturally appear in applications where an object

The main difficulty of the above stated problem lies in finding a suitable shift vector

We validate our approach in several numerical experiments on different kinds of data and compare the technique to shift-invariant dictionary learning algorithms. Furthermore, we provide examples from applications, including object segmentation in non-destructive testing and seismic exploration, as well as object tracking in video processing.

**Abstract:**

An inverse obstacle problem for the wave governed by the wave equation in a two-layered medium is considered under the framework of the time domain enclosure method. The wave is generated by initial data supported on a closed ball in the upper half-space, and observed on the same ball over a finite time interval. The unknown obstacle is penetrable and embedded in the lower half-space. It is assumed that the propagation speed of the wave in the upper half-space is greater than that in the lower half-space, a case excluded in the previous study: Ikehata and Kawashita, Inverse Problems and Imaging **12** (2018), no. 5, 1173-1198. In the present case, when the reflected waves from the obstacle enter the upper half-space, total reflection phenomena occur, which give singularities to the integral representation of the fundamental solution for the reduced transmission problem in the background medium. This fact makes the problem more complicated. However, it is shown that these waves do not have any influence on the leading profile of the indicator function of the time domain enclosure method.

**Abstract:**

An inverse obstacle scattering problem for the electromagnetic wave governed by the Maxwell system over a finite time interval is considered. It is assumed that the wave satisfies the Leontovich boundary condition on the surface of an unknown obstacle. The condition is described by an unknown positive function on the surface of the obstacle, called the surface admittance. The wave is generated at the initial time by a volumetric current source supported on a very small ball placed outside the obstacle, and only the electric component of the wave is observed on the same ball over a finite time interval. It is shown that from the observed data one can extract information about the value of the surface admittance and the curvatures at the points on the surface nearest to the center of the ball. This shows that a single shot contains meaningful information about the quantitative state of the surface of the obstacle.

**Abstract:**

Electrical impedance tomography (EIT) is a non-linear, ill-posed inverse problem which requires sophisticated regularisation techniques to achieve good results. In this paper we consider the use of structural information in the form of edge directions coming from an auxiliary image of the same object being reconstructed. To allow for cases where the auxiliary image does not provide complete information, we additionally consider a sparsity regularization for the edges appearing in the EIT image. The combination of these approaches is conveniently described through the parallel level sets approach. We present an overview of previous methods for structural regularisation, then provide a variational setting for our approach and explain its numerical implementation. We present results on simulations and experimental data for different cases with accurate and inaccurate prior information. The results demonstrate that the structural prior information improves the reconstruction accuracy, even when there is reasonable uncertainty in the prior about the location of the edges or only partial edge information is available.

2021 Impact Factor: 1.483

5 Year Impact Factor: 1.462

2021 CiteScore: 2.6
