# American Institute of Mathematical Sciences

doi: 10.3934/ipi.2020062

## Two new non-negativity preserving iterative regularization methods for ill-posed inverse problems

1. School of Mathematics and Statistics, Beijing Institute of Technology, 100081 Beijing, China
2. Shenzhen MSU-BIT University, 518172 Shenzhen, China
3. Faculty of Mathematics, Chemnitz University of Technology, Reichenhainer Str. 39/41, 09107 Chemnitz, Germany

* Corresponding author: hofmannb@mathematik.tu-chemnitz.de

Received: February 2020. Revised: July 2020. Published: October 2020.

Many inverse problems are concerned with the estimation of non-negative parameter functions. In this paper, in order to obtain non-negative stable approximate solutions to ill-posed linear operator equations in a Hilbert space setting, we develop two novel non-negativity preserving iterative regularization methods. They are based on fixed point iterations combined with preconditioning ideas. In contrast to the projected Landweber iteration, for which only weak convergence of the regularized solutions can be shown as the noise level tends to zero, the introduced regularization methods exhibit strong convergence. Convergence results are presented, even for a combination of noisy right-hand side and imperfect forward operators, and for one of the approaches convergence rates are also established. Specifically adapted discrepancy principles serve as a posteriori stopping rules for the established iterative regularization algorithms. As an application of the suggested new approaches, we consider a biosensor problem, which is modelled as a two-dimensional linear Fredholm integral equation of the first kind. Several numerical examples, as well as a comparison with the projected Landweber method, are presented to demonstrate the accuracy and the acceleration effect of the novel methods. Case studies with real data indicate that the developed methods can produce meaningful featured regularized solutions.
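For orientation, the projected Landweber iteration that serves as the baseline in the abstract can be sketched as follows. This is a generic textbook sketch, not the paper's Algorithms 1 or 2; the step size `omega`, the iteration count, and the toy matrix `A` are illustrative assumptions. Non-negativity is enforced by clipping each iterate onto the non-negative cone.

```python
import numpy as np

def projected_landweber(A, y, omega=None, n_iter=2000):
    """Projected Landweber iteration for A x = y with x >= 0.

    Iterates x_{k+1} = P_+( x_k + omega * A^T (y - A x_k) ),
    where P_+ sets negative entries to zero. Convergence requires
    the step size to satisfy 0 < omega < 2 / ||A||^2.
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2  # safe default step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Landweber gradient step followed by projection onto x >= 0
        x = np.maximum(x + omega * A.T @ (y - A @ x), 0.0)
    return x

# Small illustrative example: recover a non-negative vector from exact data.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true
x_rec = projected_landweber(A, y)
```

For a genuinely ill-posed problem with noisy data, the iteration would instead be stopped early by a discrepancy principle, which is the role of the a posteriori stopping rules discussed in the paper.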

Citation: Ye Zhang, Bernd Hofmann. Two new non-negativity preserving iterative regularization methods for ill-posed inverse problems. Inverse Problems & Imaging, doi: 10.3934/ipi.2020062
Figure: Evolution of the $L^2$-norm relative errors 'L2Err' for the different methods in Example 1 with noise levels $h' = \delta' = 5\%$. Upper left: Algorithm 2; upper right: Algorithm 1; lower left: Landweber P1; lower right: Landweber P2.

Figure: The rate constant distribution estimated by Algorithm 1.

Figure: The measured individual responses and the responses simulated by Algorithm 1.
Table: The iteration number $k^*$ and the corresponding relative error L2Err vs. $\mathbf{G}$, for $h' = \delta' = 0.1\%$, with $C^\dagger = 1.1$ and $\tau_0 = 1.1$ in Algorithms 1 and 2, and $\alpha_k = 1/k$ in Algorithm 2.
Each cell lists L2Err / $k^*$; $N_{\max}$ indicates that the iteration cap $N_{\max} = 10^6$ was reached.

| $\mathbf{G}$ | Alg. 1, Ex. 1 | Alg. 1, Ex. 2 | Alg. 2, Ex. 1 | Alg. 2, Ex. 2 |
| --- | --- | --- | --- | --- |
| $\mathbf{G}_1$ | 0.0138 / $N_{\max}$ | 0.0006 / 228910 | 0.0009 / $N_{\max}$ | 0.0037 / $N_{\max}$ |
| $\mathbf{G}_2$ | 0.0086 / $N_{\max}$ | 0.0013 / 64526 | 2.0745e-5 / $N_{\max}$ | 0.0002 / 129082 |
| $\mathbf{G}_3$ | 0.0003 / 188765 | 0.0467 / 122507 | 8.8714e-5 / $N_{\max}$ | 0.0243 / 594791 |
| $\mathbf{G}_4$ | 0.0002 / 24696 | 0.0506 / 13537 | 0.0004 / 37974 | 0.0293 / 41965 |
| $\mathbf{G}_5$ | 0.0318 / 20647 | 0.0022 / 35901 | 0.0229 / 27229 | 0.0012 / 75392 |
| $\mathbf{G}_6$ | 0.0649 / 38976 | 0.0562 / 7626 | 0.0142 / 52076 | 0.0116 / 38853 |
| $\mathbf{G}_7$ | 0.0002 / 79863 | 0.0074 / 13138 | 0.0003 / 56564 | 0.0016 / 67004 |
| $\mathbf{G}_8$ | 0.0570 / 12326 | 0.0526 / 18004 | 0.0215 / 24315 | 0.0159 / 91825 |
Table: Comparison with the projected Landweber methods. CPU time is measured in seconds.
Each cell lists L2Err / $k^*$ / CPU (s); $N_{\max}$ indicates that the iteration cap $N_{\max} = 10^6$ was reached.

Example 1:

| Method | $(h', \delta') = (0.1\%, 0.1\%)$ | $(1\%, 1\%)$ | $(5\%, 5\%)$ |
| --- | --- | --- | --- |
| Landweber P1 | 0.4310 / $N_{\max}$ / 3.6142e3 | 0.4528 / 370895 / 395.3281 | 0.5158 / 1130 / 0.0156 |
| Landweber P2 | 0.4310 / $N_{\max}$ / 3.6257e3 | 0.4905 / 63599 / 2.3281 | 0.4964 / 43438 / 1.2813 |
| Algorithm 1 | 0.0002 / 79863 / 44.7344 | 0.0008 / 63602 / 34.7969 | 0.0053 / 43438 / 19.5625 |
| Algorithm 2 | 0.0003 / 56235 / 43.6212 | 0.0005 / 62941 / 47.3762 | 0.0021 / 60257 / 42.8194 |

Example 2:

| Method | $(h', \delta') = (0.1\%, 0.1\%)$ | $(1\%, 1\%)$ | $(5\%, 5\%)$ |
| --- | --- | --- | --- |
| Landweber P1 | 0.9285 / 229498 / 1.0150e3 | 0.9360 / 57647 / 44.6563 | 0.9630 / 13 / 0.1719 |
| Landweber P2 | 0.9611 / 1989 / 1.0313 | 0.9615 / 1573 / 0.7656 | 0.9619 / 1055 / 0.5469 |
| Algorithm 1 | 0.0007 / 1999 / 4.4219 | 0.0030 / 1575 / 3.4063 | 0.0195 / 1059 / 2.4375 |
| Algorithm 2 | 0.0002 / 3432 / 5.0292 | 0.0016 / 2162 / 5.0594 | 0.0025 / 4284 / 5.6638 |
