
Regularization of inverse problems via box constrained minimization


This work was supported by the Austrian Science Fund FWF under the grants P28008, I2271, and P30054, as well as partially by the Karl Popper Kolleg "Modeling-Simulation-Optimization", funded by the Alpen-Adria-Universität Klagenfurt and by the Carinthian Economic Promotion Fund (KWF).

Abstract
    In the present paper we consider minimization-based formulations of inverse problems $ (x, \Phi)\in \mbox{argmin}\left\{{ \mathcal{J}(x, \Phi;y)}:{(x, \Phi)\in M_{ad}(y)}\right\} $ for the specific but highly relevant case in which the admissible set $ M_{ad}^\delta(y^\delta) $ is defined by pointwise bounds. This is the case, e.g., if $ L^\infty $ constraints on the parameter are imposed in the sense of Ivanov regularization and the $ L^\infty $ noise level in the observations is prescribed in the sense of Morozov regularization. As application examples for this setting we consider three coefficient identification problems in elliptic boundary value problems.

    Discretization of $ (x, \Phi) $ with piecewise constant and piecewise linear finite elements, respectively, leads to finite dimensional nonlinear box constrained minimization problems that can be solved numerically via Gauss-Newton type SQP methods. In our computational experiments we revisit the suggested application examples. In order to speed up the computations and obtain exact numerical solutions, we use recently developed active set methods for solving strictly convex quadratic programs with box constraints as subroutines within our Gauss-Newton type SQP approach.
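
    Schematically, and with placeholder notation for the pointwise bounds $ \underline{x}, \overline{x} $ and the observation operator $ C $ (neither of which is fixed by the abstract above), the admissible set described here takes the form

    $ M_{ad}^\delta(y^\delta) = \left\{ (x, \Phi) \, : \, \underline{x} \le x \le \overline{x} \ \text{a.e. (Ivanov bound on the parameter)}, \quad \| C\Phi - y^\delta \|_{L^\infty} \le \delta \ \text{(Morozov bound on the data misfit)} \right\}, $

    so that, after discretization, the constraints become componentwise box constraints on the unknowns.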

    Mathematics Subject Classification: Primary: 65J22, 65M32; Secondary: 90C20.

    Figure 1.  Left: exact coefficient $ {b_f}_{ex} = c_{ex} = a_{ex} $. Right: locations of spots for testing weak* $ L^\infty $ convergence

    Figure 2.  Left: reconstructed coefficient $ c_k $; Middle: active set for lower bound; Right: active set for upper bound; for $ k = 1, 2, 3, 4 $ (top to bottom) and $ \delta = 0.1 $

    Figure 3.  Left: reconstructed coefficient $ c_k $; Middle: active set for lower bound; Right: active set for upper bound; for $ k = 1, 4, 8, 12 $ (top to bottom) and $ \delta = 0.01 $

    Figure 4.  Linear system solves (stars) and average size of linear systems (diamonds); Left: warm start; Right: cold start; Top: N = 32; Bottom: N = 64

    Figure 5.  Test 2: Left: exact coefficient $ c_{ex} $; $ \underline{{c}} = -9 $, $ \overline{{c}} = 6 $; Right: locations of spots for testing weak* $ L^\infty $ convergence

    Figure 6.  Test 2: Left: reconstructed coefficient $ c_k $; Middle: active set for lower bound; Right: active set for upper bound; for $ k = 1, 4, 8, 12 $ (top to bottom) and $ \delta = 0.01 $

    Figure 7.  Test 3: Left: exact coefficient $ c_{ex} $; $ \underline{{c}} = -10 $, $ \overline{{c}} = 0 $; Right: locations of spots for testing weak* $ L^\infty $ convergence

    Figure 8.  Test 3: Left: reconstructed coefficient $ c_k $; Middle: active set for lower bound; Right: active set for upper bound; for $ k = 1, 4, 8, 12 $ (top to bottom) and $ \delta = 0.01 $

    Table 1.  Generation of a primal feasible pair $ (\mathcal B, \mathcal D) $

    $ \mathcal B \leftarrow \mathcal B_s, \ \mathcal D \leftarrow \mathcal D_s $.
    while $ (\mathcal B, \mathcal D) $ not primal feasible:
    $ y= \text{KKT}(\mathcal B, \mathcal D), \ \mathcal B \leftarrow \mathcal B \cup \{i \in \mathcal N \setminus \mathcal B : y_{i} \geq u_{i} \}, \ \mathcal D \leftarrow \mathcal D \cup \{i \in \mathcal N \setminus \mathcal D : y_{i} \leq \ell_{i} \} $.
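
    A minimal NumPy sketch of Table 1 is given below. It assumes, consistently with the data listed in Tables 2 and 3, that (34) is the box-constrained QP $ \min_x \tfrac12 x^\top Q x + q^\top x $ subject to $ \ell \le x \le u $, and that KKT$ (\mathcal B, \mathcal D) $ fixes the components in $ \mathcal B $ at the upper bound, those in $ \mathcal D $ at the lower bound, and solves the reduced linear system for the free components; the function and variable names are ours, not taken from [10,11].

    import numpy as np

    def kkt(Q, q, lb, ub, B, D):
        # Sketch of the KKT(B, D) subroutine used in Tables 1-3: components in B
        # are fixed at the upper bound, components in D at the lower bound, and
        # the free components solve the reduced linear system.
        n = q.size
        y = np.empty(n)
        y[B], y[D] = ub[B], lb[D]
        F = np.setdiff1d(np.arange(n), np.concatenate((B, D)))  # free indices
        if F.size:
            rhs = -q[F] - Q[np.ix_(F, B)] @ ub[B] - Q[np.ix_(F, D)] @ lb[D]
            y[F] = np.linalg.solve(Q[np.ix_(F, F)], rhs)
        return y

    def primal_feasible_pair(Q, q, lb, ub, B_s, D_s):
        # Table 1: enlarge (B_s, D_s) until the KKT solution is primal feasible.
        B, D = np.asarray(B_s, dtype=int), np.asarray(D_s, dtype=int)
        while True:
            y = kkt(Q, q, lb, ub, B, D)
            add_B = np.setdiff1d(np.where(y >= ub)[0], B)  # new upper-bound violations
            add_D = np.setdiff1d(np.where(y <= lb)[0], D)  # new lower-bound violations
            if add_B.size == 0 and add_D.size == 0:
                return B, D  # (B, D) is primal feasible
            B, D = np.union1d(B, add_B), np.union1d(D, add_D)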

    Table 2.  Algorithmic description of the feasible active set method from [10]

    Feasible active set method for solving (34)
    Input: $ Q \succ 0 $, $ \ell, \ u, \ q \in \mathbb R^n, \ \ell< u $. $ \mathcal A, \ \mathcal C \subseteq \mathcal N, \ \mathcal A \cap \mathcal C = \emptyset $, $ (\mathcal A, \mathcal C) $ primal feasible.
    Output: $ (\mathcal A, \mathcal C) $ optimal for (34).
    $ [x, \alpha, \gamma] = $KKT$ (\mathcal A, \mathcal C) $
    while $ (\mathcal A, \mathcal C) $ not optimal for (34)
    $ \mathcal B_{s} \leftarrow \{i \in \mathcal A: \alpha_{i} \geq 0 \}; \mathcal B \leftarrow \mathcal B_{s} $.
    $ \mathcal D_{s} \leftarrow \{i \in \mathcal C: \gamma_{i} \geq 0 \}; \mathcal D \leftarrow \mathcal D_{s} $.
    $ y= $KKT$ (\mathcal B, \mathcal D) $.
    while $ (\mathcal B, \mathcal D) $ not primal feasible
    $ \mathcal B \leftarrow \mathcal B \cup \{ i \in \overline{\mathcal B}: y_{i} \geq u_{i} \} $.
    $ \mathcal D \leftarrow \mathcal D \cup \{ i \in \overline{\mathcal D}: y_{i} \leq \ell_{i} \} $.
    $ y= $KKT$ (\mathcal B, \mathcal D) $.
    endwhile
    Case 1: $ J(y)< J(x) $
    $ \mathcal A \leftarrow \mathcal B, \ \mathcal C \leftarrow \mathcal D $.
    Case 2: $ J(y) \geq J(x) $ and $ |\mathcal A|+|\mathcal C|=1 $.
    Let $ (\mathcal A_{opt}, \mathcal C_{opt}) $ be the optimal pair for (34) with the upper (respectively lower)
    bound on the single index in $ \mathcal A \cup \mathcal C =\{ j \} $ removed. $ (\mathcal A_{opt}, \mathcal C_{opt}) $ is optimal for (34), hence stop.
    Case 3: $ J(y) \geq J(x) $ and $ |\mathcal A| + |\mathcal C|> 1 $
    Choose $ \mathcal A_0 \subseteq \mathcal A $, $ \mathcal C_0 \subseteq \mathcal C $ with $ \mathcal A_0 \cup \mathcal C_0 \not = \emptyset $ such that $ x $ is feasible but not
    optimal for (36); for details see [10].
    Let $ (\mathcal B_{0}, \mathcal D_0) $ be the optimal pair for (36).
    $ \mathcal A \leftarrow \mathcal A_0 \cup \mathcal B_{0}, \ \mathcal C \leftarrow \mathcal C_0 \cup \mathcal D_0 $.
    $ [\alpha, \gamma]= $KKT$ (\mathcal A, \mathcal C) $
    endwhile
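
    In Table 2 (and similarly in Table 3), the loop condition "$ (\mathcal A, \mathcal C) $ not optimal for (34)" is a KKT test for the box-constrained QP at the current iterate. A minimal sketch of such a test, using our own naming and assuming that $ x $ comes from KKT$ (\mathcal A, \mathcal C) $ (so stationarity on the free components already holds) and that the multipliers satisfy $ \alpha = -(Qx+q) $ on the upper-active set and $ \gamma = Qx+q $ on the lower-active set, as in the update formulas at the end of Table 3:

    import numpy as np

    def is_optimal(Q, q, lb, ub, A, C, x, tol=1e-10):
        # Primal feasibility of x plus sign conditions on the multipliers; for
        # Q positive definite these first-order conditions are also sufficient.
        g = Q @ x + q                                             # gradient of the QP objective at x
        primal = np.all(x >= lb - tol) and np.all(x <= ub + tol)
        dual = np.all(-g[A] >= -tol) and np.all(g[C] >= -tol)     # alpha_A >= 0, gamma_C >= 0
        return bool(primal and dual)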

    Table 3.  Algorithmic description of the infeasible active set method from [11].

    Infeasible active set method for solving (34)
    Input: $ Q \succ 0 $, $ \ell, \ u, \ q \in \mathbb R^n, \ \ell< u $. $ \mathcal A, \ \mathcal C \subseteq \mathcal N, \ \mathcal A \cap \mathcal C = \emptyset $, $ (x, \alpha, \gamma) $
    Output: $ (\mathcal A, \mathcal C) $, $ (x, \alpha, \gamma) $ optimal for (34).
    $ {w} =\max\{\min\{x, u\}, \ell\} $
    while $ (\mathcal A, \mathcal C) $ not optimal for (34)
    $ \mathcal B \leftarrow \{i \not\in \mathcal A: x_{i} \geq u_i \}\cup \{i \in \mathcal A: \alpha_{i} \geq 0 \} $.
    $ \mathcal D \leftarrow \{i \not\in \mathcal C: x_{i} \leq \ell_i \}\cup\{i \in \mathcal C: \gamma_{i} \geq 0 \} $.
    $ [y, \beta, \delta]= $KKT$ (\mathcal B, \mathcal D) $; $ v =\max\{\min\{y, u\}, \ell\} $.
    Case 1: $ J(v)< J({w}) $
    Continue with $ [Q, \ell, u, q, \mathcal{B}, \mathcal{D}, (y, \beta, \delta)] $.
    Case 2: $ J(v) \geq J({w}) $.
    Determine $ \mathcal{A}^+ $, $ \mathcal{C}^+ $:
    If $ \ell_i<x_i<u_i $ for all $ i\not\in \mathcal{A} \cup \mathcal{C} $, then $ \mathcal{A}^+= \mathcal{A}\setminus \mbox{ arg min}_{i\in\mathcal{A}}\alpha_i $, $ \mathcal{C}^+= \mathcal{C}\setminus \mbox{ arg min}_{i\in\mathcal{C}}\gamma_i $,
    else $ \mathcal{A}^+= \mathcal{A}\cup \{i\not\in\mathcal{A}:x_{i} \geq u_i \} $,$ \mathcal{C}^+= \mathcal{C}\cup \{i\not\in\mathcal{C}:x_{i} \leq \ell_i \} $.
    $ [y, \beta, \delta]= $KKT$ (\mathcal{A}^+, \mathcal{C}^+) $.
    If $ \ell\leq y\leq u $ then continue with $ [Q, \ell, u, q, \mathcal{A}^+, \mathcal{C}^+, (y, \beta, \delta)] $.
    else determine $ \lambda^* $ by a combinatorial line search:
    $ \lambda^* = \mbox{arg min}_{j\in \{i:y_i>u_i\}\cup\{i:y_i<\ell_i\}} J(\max\{\min\{{z}(\lambda_j), u\}, \ell\}) $
    where $ \lambda_{j}:=\begin{cases} \frac{y_{j}-u_{j}}{y_{j}-w_{j}} & \text{for } j \in\left\{i: y_{i}>u_{i}\right\} \\ \frac{\ell_{j}-y_{j}}{w_{j}-y_{j}} & \text{for } j \in\left\{i: y_{i} < \ell_{i}\right\} \end{cases} $ and $ z(\lambda)=\lambda w+(1-\lambda) y $;
    $ x^+={z}(\lambda^*) $; $ -\alpha^+_{\tilde{\mathcal{A}}^+} = q_{\tilde{\mathcal{A}}^+} + Q_{\tilde{\mathcal{A}}^+, \mathcal N} x^+ $; $ \gamma^+_{\mathcal C^+} = q_{\mathcal C^+} + Q_{\mathcal C^+, \mathcal N} x^+ $,
    where $ \tilde{\mathcal{A}}^+=\mathcal{A}^+\cup(\mathcal{N}\setminus\mathcal{C}^+) $;
    continue with $ [Q, \ell, u, q, \mathcal{A}^+, \mathcal{C}^+, (x^+, \alpha^+, \gamma^+)] $.
    endwhile
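
    The combinatorial line search of Case 2 can be prototyped directly from the formulas above: for every component of the (infeasible) KKT solution $ y $ that violates a bound, one evaluates the objective at the projection of $ z(\lambda_j) $, where $ \lambda_j $ is the step at which that component reaches its bound on the segment between $ y $ and the feasible point $ w $. A minimal NumPy sketch with our own naming (lb, ub denote $ \ell, u $):

    import numpy as np

    def objective(Q, q, x):
        # J(x) = 0.5 x'Qx + q'x, the objective of the box-constrained QP (34).
        return 0.5 * x @ Q @ x + q @ x

    def combinatorial_line_search(Q, q, lb, ub, w, y):
        # Case 2 of Table 3: w is feasible (projected iterate), y violates some bounds.
        over = np.where(y > ub)[0]                       # components above the upper bound
        under = np.where(y < lb)[0]                      # components below the lower bound
        lambdas = np.concatenate((
            (y[over] - ub[over]) / (y[over] - w[over]),       # step at which component hits u_j
            (lb[under] - y[under]) / (w[under] - y[under]),   # step at which component hits ell_j
        ))
        z = lambda lam: lam * w + (1.0 - lam) * y
        best = min(lambdas, key=lambda lam: objective(Q, q, np.clip(z(lam), lb, ub)))
        return z(best)                                   # x^+ = z(lambda^*), as in Table 3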

    Table 4.  Comparison of different solvers for QPs with box constraints

    source; left block: N=32 (n=2178), right block: N=64 (n=8450); columns in each block: $\mathtt{quadprog}$, $\mathtt{Infeas\_AS}$, $\mathtt{Feas\_AS}$
    $ k $                                               1           1           1           |   1           1           1
    $ \frac{J(x_k^\delta, u_k^\delta)}{J(x_0, u_0)} $   6.6901e-06  6.6154e-06  6.6154e-06  |   3.2887e-05  3.2532e-05  3.2744e-05
    $ \mbox{err}_{spot_1} $                             2.4023e-06  0           0           |   6.7780e-07  0           0
    $ \mbox{err}_{spot_2} $                             7.3665e-06  0           0           |   0.0462      0           0
    $ \mbox{err}_{spot_3} $                             1.0890e-05  0           0           |   1.3992      1.3917      1.3917
    $ \mbox{err}_{L^1(\Omega)} $                        0.0462      0.0465      0.0465      |   0.0832      0.0834      0.0834
    CPU                                                 1.68        1.16        1.09        |   29.62       112.97      22.85

    potential; left block: N=32 (n=3137), right block: N=64 (n=12417); columns in each block: $\mathtt{quadprog}$, $\mathtt{Infeas\_AS}$, $\mathtt{Feas\_AS}$
    $ k $                                               4           4           6           |   3           3           3
    $ \frac{J(x_k^\delta, u_k^\delta)}{J(x_0, u_0)} $   6.2155e-06  6.0569e-06  8.0316e-07  |   1.6318e-04  1.6286e-04  1.6286e-04
    $ \mbox{err}_{spot_1} $                             6.9611e-10  0           0           |   1.2184e-10  0           0
    $ \mbox{err}_{spot_2} $                             1.7502e-06  0           0           |   0.7183      0.7145      0.7145
    $ \mbox{err}_{spot_3} $                             1.5392      1.6093      1.4363      |   3.2625      3.2629      3.2629
    $ \mbox{err}_{L^1(\Omega)} $                        0.1051      0.1044      0.0938      |   0.1460      0.1444      0.1444
    CPU                                                 21.88       24.44       18.34       |   432.06      368.67      276.17

    diffusion; left block: N=32 (n=3137), right block: N=64 (n=12417); columns in each block: $\mathtt{quadprog}$, $\mathtt{Infeas\_AS}$, $\mathtt{Feas\_AS}$
    $ k $                                               4           4           4           |   8           8           8
    $ \frac{J(x_k^\delta, u_k^\delta)}{J(x_0, u_0)} $   0.0931      0.0931      0.0931      |   0.0543      0.0543      0.0543
    $ \mbox{err}_{spot_1} $                             3.7303e-14  0           0           |   3.6415e-14  0           0
    $ \mbox{err}_{spot_2} $                             4.4418      4.4418      4.4418      |   7.8278e-05  0           0
    $ \mbox{err}_{spot_3} $                             0.3200      0.3199      0.3199      |   4.3077e-13  0           0
    $ \mbox{err}_{L^1(\Omega)} $                        0.3259      0.3259      0.3259      |   0.3799      0.3799      0.3799
    CPU                                                 32.01       6.30        5.25        |   1193.00     532.57      463.89

    Table 5.  Convergence as $ \delta\to0 $: averaged errors of five test runs with uniform noise

    left block: source; middle block: potential; right block: diffusion; columns in each block: $ \delta = $ 0.001, 0.01, 0.1
    $ \mbox{err}_{spot_1} $        0       0.2000  0.2000  |   0       0       0       |   0       0       0
    $ \mbox{err}_{spot_2} $        0       2.7488  4.0702  |   0       0.7960  4.8689  |   0       8.4141  9.8436
    $ \mbox{err}_{spot_3} $        0       0.5678  1.9445  |   1.0840  2.1512  2.5862  |   0.6572  0       0
    $ \mbox{err}_{L^1(\Omega)} $   0.0472  0.5288  0.5721  |   0.1472  0.2136  0.3671  |   0.7200  0.3783  0.3745
    [1] E. Beretta, M. V. de Hoop, F. Faucher and O. Scherzer, Inverse boundary value problem for the Helmholtz equation: Quantitative conditional Lipschitz stability estimates, SIAM J. Math. Anal., 48 (2016), 3962-3983. doi: 10.1137/15M1043856.
    [2] M. Bergounioux, M. Haddou, M. Hintermüller and K. Kunisch, A comparison of a Moreau-Yosida-based active set strategy and interior point methods for constrained optimal control problems, SIAM J. Optim., 11 (2000), 495-521. doi: 10.1137/S1052623498343131.
    [3] M. Bergounioux, K. Ito and K. Kunisch, Primal-dual strategy for constrained optimal control problems, SIAM J. Control Optim., 37 (1999), 1176-1194. doi: 10.1137/S0363012997328609.
    [4] L. Borcea, Electrical impedance tomography, Inverse Problems, 18 (2002), R99–R136. doi: 10.1088/0266-5611/18/6/201.
    [5] M. Burger and W. Mühlhuber, Iterative regularization of parameter identification problems by sequential quadratic programming methods, Inverse Problems, 18 (2002), 943-969.  doi: 10.1088/0266-5611/18/4/301.
    [6] M. Burger and W. Mühlhuber, Numerical approximation of an SQP-type method for parameter identification, SIAM J. Numer. Anal., 40 (2002), 1775-1797.  doi: 10.1137/S0036142901389980.
    [7] C. Burstedde, On the numerical evaluation of fractional Sobolev norms, Comm. Pure Appl. Anal., 6 (2007), 587-605.  doi: 10.3934/cpaa.2007.6.587.
    [8] T. F. Coleman and Y. Li, A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables, SIAM J. Optim., 6 (1996), 1040-1058.  doi: 10.1137/S1052623494240456.
    [9] M. Hintermüller, K. Ito and K. Kunisch, The primal-dual active set strategy as a semismooth Newton method, SIAM J. Optim., 13 (2003), 865-888. doi: 10.1137/S1052623401383558.
    [10] P. Hungerländer and F. Rendl, A feasible active set method for strictly convex quadratic problems with simple bounds, SIAM J. Optim., 25 (2015), 1633-1659.  doi: 10.1137/140984439.
    [11] P. Hungerländer and F. Rendl, An infeasible active set method with combinatorial line search for convex quadratic problems with bound constraints, http://www.optimization-online.org/DB_HTML/2016/09/5644.html, J. Global Optim., (2018).
    [12] V. Isakov, Inverse Problems for Partial Differential Equations, 2nd Edition, Springer, New York, 2006.
    [13] J. J. Júdice and F. M. Pires, A block principal pivoting algorithm for large-scale strictly monotone linear complementarity problems, Comput. Oper. Res., 21 (1994), 587-596.  doi: 10.1016/0305-0548(94)90106-6.
    [14] B. Kaltenbacher, Regularization based on all-at-once formulations for inverse problems, SIAM J. Numer. Anal., 54 (2016), 2594-2618.  doi: 10.1137/16M1060984.
    [15] B. Kaltenbacher, Minimization based formulations of inverse problems and their regularization, SIAM J. Optim., 28 (2018), 620-645.  doi: 10.1137/17M1124036.
    [16] S. Kindermann, Convergence of the gradient method for ill-posed problems, Inverse Probl. Imaging, 11 (2017), 703-720.  doi: 10.3934/ipi.2017033.
    [17] I. Knowles, A variational algorithm for electrical impedance tomography, Inverse Problems, 14 (1998), 1513-1525.  doi: 10.1088/0266-5611/14/6/010.
    [18] R. V. Kohn and A. McKenney, Numerical implementation of a variational method for electrical impedance tomography, Inverse Problems, 6 (1990), 389-414.  doi: 10.1088/0266-5611/6/3/009.
    [19] R. V. Kohn and M. Vogelius, Relaxation of a variational method for impedance computed tomography, Comm. Pure Appl. Math., 40 (1987), 745-777.  doi: 10.1002/cpa.3160400605.
    [20] K. Kunisch and F. Rendl, An infeasible active set method for quadratic problems with simple bounds, SIAM J. Optim., 14 (2003), 35-52.  doi: 10.1137/S1052623400376135.
