August 2020, 19(8): 4111-4126. doi: 10.3934/cpaa.2020183

Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems

Abhishake Rastogi

Institute of Mathematics, University of Potsdam, Karl-Liebknecht-Straße 24-25, 14476 Potsdam, Germany

Received August 2019; Revised January 2020; Published May 2020

Fund Project: This research has been partially funded by Deutsche Forschungsgemeinschaft (DFG)-SFB1294/1-318763901

In this paper, we consider nonlinear ill-posed inverse problems with noisy data in the statistical learning setting. Tikhonov regularization in Hilbert scales is used to reconstruct an estimator from the random noisy data. In this setting, we derive rates of convergence for the regularized solution under assumptions on the nonlinear forward operator and on the smoothness of the true solution. We obtain estimates of the reconstruction error using the theory of reproducing kernel Hilbert spaces.
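As a rough, self-contained illustration of the regularization mechanism only (not the paper's nonlinear method, and with a hypothetical Gaussian kernel and toy data chosen for this sketch), the following computes the Tikhonov-regularized least-squares estimator in a reproducing kernel Hilbert space, i.e. kernel ridge regression, from random noisy samples:

```python
import numpy as np

# Toy sketch: Tikhonov regularization in an RKHS (kernel ridge regression).
# In the nonlinear inverse setting a forward operator would enter the data
# misfit; here only the regularization scheme itself is illustrated.

def gaussian_kernel(X, Y, width=1.0):
    # Gram matrix of the Gaussian RBF kernel (hypothetical choice of kernel).
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def tikhonov_estimator(x_train, y_train, lam, width=1.0):
    """Return x -> f_lam(x), where f_lam minimizes
    (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS."""
    n = len(x_train)
    K = gaussian_kernel(x_train, x_train, width)
    # Representer theorem: f_lam = sum_i alpha_i K(x_i, .), with
    # (K + lam * n * I) alpha = y.
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train)
    return lambda x: gaussian_kernel(x, x_train, width) @ alpha

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 50))
y = np.sin(x) + 0.1 * rng.standard_normal(50)  # random noisy observations
f_hat = tikhonov_estimator(x, y, lam=1e-3)
```

The regularization parameter `lam` plays the same balancing role as in the paper: too small and the noise dominates, too large and the bias dominates; the convergence rates quantify this trade-off.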

Citation: Abhishake Rastogi. Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems. Communications on Pure & Applied Analysis, 2020, 19 (8) : 4111-4126. doi: 10.3934/cpaa.2020183
References:
[1] Abhishake, G. Blanchard and P. Mathé, Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems, preprint, arXiv: 1902.05404.

[2] N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc., 68 (1950), 337-404. doi: 10.2307/1990404.

[3] F. Bauer, T. Hohage and A. Munk, Iteratively regularized Gauss–Newton method for nonlinear inverse problems with random noise, SIAM J. Numer. Anal., 47 (2009), 1827-1846. doi: 10.1137/080721789.

[4] N. Bissantz, T. Hohage and A. Munk, Consistency and rates of convergence of nonlinear Tikhonov regularization with random noise, Inverse Probl., 20 (2004), 1773-1789. doi: 10.1088/0266-5611/20/6/005.

[5] G. Blanchard and P. Mathé, Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration, Inverse Probl., 28 (2012), Art. 115011. doi: 10.1088/0266-5611/28/11/115011.

[6] G. Blanchard, P. Mathé and N. Mücke, Lepskii principle in supervised learning, preprint, arXiv: 1905.10764.

[7] G. Blanchard and N. Mücke, Optimal rates for regularization of statistical inverse learning problems, Found. Comput. Math., 18 (2018), 971-1013. doi: 10.1007/s10208-017-9359-7.

[8] G. Blanchard and N. Mücke, Kernel regression, minimax rates and effective dimensionality: beyond the regular case, Anal. Appl., to appear (2020). doi: 10.1142/S0219530519500258.

[9] A. Böttcher, B. Hofmann, U. Tautenhahn and M. Yamamoto, Convergence rates for Tikhonov regularization from different kinds of smoothness conditions, Appl. Anal., 85 (2006), 555-578. doi: 10.1080/00036810500474838.

[10] A. Caponnetto and E. De Vito, Optimal rates for the regularized least-squares algorithm, Found. Comput. Math., 7 (2007), 331-368. doi: 10.1007/s10208-006-0196-8.

[11] L. Cavalier, Inverse problems in statistics, in Inverse Probl. High-dimensional Estim., vol. 203 of Lect. Notes Stat. Proc., Springer, Heidelberg, (2011), 3–96. doi: 10.1007/978-3-642-19989-9_1.

[12] H. Egger and B. Hofmann, Tikhonov regularization in Hilbert scales under conditional stability assumptions, Inverse Probl., 34 (2018), Art. 115015. doi: 10.1088/1361-6420/aadef4.

[13] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Math. Appl., vol. 375, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996.

[14] Z. C. Guo, S. B. Lin and D. X. Zhou, Learning theory of distributed spectral algorithms, Inverse Probl., 33 (2017), Art. 74009. doi: 10.1088/1361-6420/aa72b2.

[15] B. Hofmann, Regularization for Applied Inverse and Ill-Posed Problems, vol. 85, BSB B. G. Teubner Verlagsgesellschaft, Leipzig, 1986. doi: 10.1007/978-3-322-93034-7.

[16] B. Hofmann and P. Mathé, Tikhonov regularization with oversmoothing penalty for non-linear ill-posed problems in Hilbert scales, Inverse Probl., 34 (2018), Art. 15007. doi: 10.1088/1361-6420/aa9b59.

[17] T. Hohage and M. Pricop, Nonlinear Tikhonov regularization in Hilbert scales for inverse boundary value problems with random noise, Inverse Probl. Imaging, 2 (2008), 271-290. doi: 10.3934/ipi.2008.2.271.

[18] J. Krebs, A. K. Louis and H. Wendland, Sobolev error estimates and a priori parameter selection for semi-discrete Tikhonov regularization, J. Inverse Ill-Posed Probl., 17 (2009), 845-869. doi: 10.1515/JIIP.2009.050.

[19] K. Lin, S. Lu and P. Mathé, Oracle-type posterior contraction rates in Bayesian inverse problems, Inverse Probl. Imaging, 9 (2015), 895-915. doi: 10.3934/ipi.2015.9.895.

[20] S. B. Lin and D. X. Zhou, Optimal learning rates for kernel partial least squares, J. Fourier Anal. Appl., 24 (2018), 908-933. doi: 10.1007/s00041-017-9544-8.

[21] J. M. Loubes and C. Ludena, Penalized estimators for non linear inverse problems, ESAIM Probab. Statist., 14 (2010), 173-191. doi: 10.1051/ps:2008024.

[22] S. Lu, P. Mathé and S. V. Pereverzev, Balancing principle in supervised learning for a general regularization scheme, Appl. Comput. Harmon. Anal., 48 (2020), 123-148. doi: 10.1016/j.acha.2018.03.001.

[23] P. Mathé and U. Tautenhahn, Interpolation in variable Hilbert scales with application to inverse problems, Inverse Probl., 22 (2006), 2271-2297. doi: 10.1088/0266-5611/22/6/022.

[24] C. A. Micchelli and M. Pontil, On learning vector-valued functions, Neural Comput., 17 (2005), 177-204. doi: 10.1162/0899766052530802.

[25] M. T. Nair and S. V. Pereverzev, Regularized collocation method for Fredholm integral equations of the first kind, J. Complexity, 23 (2007), 454-467. doi: 10.1016/j.jco.2006.09.002.

[26] M. T. Nair, S. V. Pereverzev and U. Tautenhahn, Regularization in Hilbert scales under general smoothing conditions, Inverse Probl., 21 (2005), 1851-1869. doi: 10.1088/0266-5611/21/6/003.

[27] F. Natterer, Error bounds for Tikhonov regularization in Hilbert scales, Appl. Anal., 18 (1984), 29-37. doi: 10.1080/00036818408839508.

[28] A. Neubauer, Tikhonov regularization of nonlinear ill-posed problems in Hilbert scales, Appl. Anal., 46 (1992), 59-72. doi: 10.1080/00036819208840111.

[29] F. O'Sullivan, Convergence characteristics of methods of regularization estimators for nonlinear operator equations, SIAM J. Numer. Anal., 27 (1990), 1635-1649. doi: 10.1137/0727096.

[30] A. Rastogi and S. Sampath, Optimal rates for the regularized learning algorithms under general source condition, Front. Appl. Math. Stat., 3 (2017), Art. 3. doi: 10.3389/fams.2017.00003.

[31] T. Schuster, B. Kaltenbacher, B. Hofmann and K. S. Kazimierski, Regularization Methods in Banach Spaces, Radon Series on Computational and Applied Mathematics, vol. 10, Walter de Gruyter GmbH & Co. KG, Berlin, 2012. doi: 10.1515/9783110255720.

[32] U. Tautenhahn, Error estimates for regularization methods in Hilbert scales, SIAM J. Numer. Anal., 33 (1996), 2120-2130. doi: 10.1137/S0036142994269411.

[33] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-posed Problems, vol. 14, W. H. Winston, Washington, DC, 1977.

[34] F. Werner and B. Hofmann, Convergence analysis of (statistical) inverse problems under conditional stability estimates, Inverse Probl., 36 (2020), Art. 015004. doi: 10.1088/1361-6420/ab4cd7.

[35] T. Zhang, Effective dimension and generalization of kernel learning, in Proc. 15th Int. Conf. Neural Inf. Process. Syst., MIT Press, Cambridge, MA, (2002), 454–461.


