# American Institute of Mathematical Sciences

December  2017, 11(6): 975-996. doi: 10.3934/ipi.2017045

## A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method

 Department of Mathematics, University of Alabama, Box 870350, Tuscaloosa, AL 35487, USA

* Corresponding author: Wei Zhu

Received: June 2016. Revised: January 2017. Published: September 2017.

In this paper, we propose a new augmented Lagrangian method for the mean curvature based image denoising model [33]. Unlike the previous works [21,35], the new method involves only two Lagrange multipliers, which significantly reduces the effort of choosing penalization parameters that ensure convergence of the iterative process for finding the associated saddle points. With this new algorithm, we demonstrate the features of the model numerically, including its preservation of image contrasts and object corners, as well as its capability of generating smooth patches of image graphs. The data selection property and the role of the spatial mesh size in the model's performance are also discussed.
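As background, the model of [33] penalizes the mean curvature of the image surface $(x, y, u(x,y))$, i.e. $\nabla\cdot\big(\nabla u/\sqrt{1+|\nabla u|^{2}}\big)$. The following numpy sketch of that quantity is purely illustrative: the function name and the central-difference discretization via `np.gradient` are assumptions, not the paper's scheme.

```python
import numpy as np

def mean_curvature(u, h=1.0):
    # Mean curvature of the image surface (x, y, u(x, y)):
    #   kappa = div( grad(u) / sqrt(1 + |grad(u)|^2) )
    uy, ux = np.gradient(u, h)            # derivatives along axis 0 (y) and axis 1 (x)
    norm = np.sqrt(1.0 + ux**2 + uy**2)   # surface area element
    nx, ny = ux / norm, uy / norm         # components of the (scaled) unit normal
    nyy, _ = np.gradient(ny, h)           # d n_y / dy
    _, nxx = np.gradient(nx, h)           # d n_x / dx
    return nxx + nyy                      # divergence of the normal field

# Sanity check: a constant (flat) image surface has zero mean curvature.
flat = np.full((16, 16), 3.0)
print(np.abs(mean_curvature(flat)).max())  # 0.0
```

Note that a tilted plane also has zero mean curvature, which is why this regularizer can keep image contrasts and smooth ramps that total variation would penalize.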

Citation: Wei Zhu. A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method. Inverse Problems & Imaging, 2017, 11 (6) : 975-996. doi: 10.3934/ipi.2017045
##### References:

[1] L. Ambrosio and S. Masnou, A direct variational approach to a problem arising in image reconstruction, Interfaces Free Bound., 5 (2003), 63-81. doi: 10.4171/IFB/72.
[2] L. Ambrosio and S. Masnou, On a variational problem arising in image reconstruction, Free Boundary Problems (Trento, 2002), Internat. Ser. Numer. Math., Birkhäuser, Basel, 147 (2004), 17-26.
[3] E. Bae, J. Shi and X. C. Tai, Graph cuts for curvature based image denoising, IEEE Trans. Image Process., 20 (2011), 1199-1210. doi: 10.1109/TIP.2010.2090533.
[4] G. Bellettini, V. Caselles and M. Novaga, The total variation flow in $\mathbb{R}^n$, J. Differ. Equations, 184 (2002), 475-525. doi: 10.1006/jdeq.2001.4150.
[5] C. Brito-Loeza and K. Chen, Multigrid algorithm for high order denoising, SIAM J. Imaging Sciences, 3 (2010), 363-389. doi: 10.1137/080737903.
[6] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vis., 20 (2004), 89-97. doi: 10.1023/B:JMIV.0000011320.81911.38.
[7] T. Chan, G. H. Golub and P. Mulet, A nonlinear primal-dual method for total variation-based image restoration, SIAM J. Sci. Comput., 20 (1999), 1964-1977. doi: 10.1137/S1064827596299767.
[8] T. Chan and S. Esedoglu, Aspects of total variation regularized L1 function approximation, SIAM J. Appl. Math., 65 (2005), 1817-1837. doi: 10.1137/040604297.
[9] T. Chan, S. Esedoḡlu, F. Park and M. H. Yip, Recent developments in total variation image restoration, in Handbook of Mathematical Models in Computer Vision (eds. N. Paragios, Y. Chen and O. Faugeras), Springer Verlag, 2005.
[10] T. Chan, S. H. Kang and J. H. Shen, Euler's elastica and curvature based inpaintings, SIAM J. Appl. Math., 63 (2002), 564-592. doi: 10.1137/S0036139901390088.
[11] M. P. do Carmo, Differential Geometry of Curves and Surfaces, Prentice-Hall, Inc., 1976.
[12] Y. Duan, Y. Wang, X.-C. Tai and J. Hahn, A fast augmented Lagrangian method for Euler's elastica model, SSVM 2011, LNCS, 6667 (2012), 144-156.
[13] J. Eckstein and W. Yao, Understanding the convergence of the alternating direction method of multipliers: Theoretical and computational perspectives, Pac. J. Optim., 11 (2015), 619-644.
[14] N. El-Zehiry and L. Grady, Fast global optimization of curvature, Proc. IEEE Conference on Computer Vision and Pattern Recognition, (2010), 3257-3264.
[15] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323-343. doi: 10.1137/080725891.
[16] M. Hintermüller, C. N. Rautenberg and J. Hahn, Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction, Inverse Problems, 30 (2014), 055014, 34pp. doi: 10.1088/0266-5611/30/5/055014.
[17] R. March and M. Dozio, A variational method for the recovery of smooth boundaries, Image and Vision Computing, 15 (1997), 705-712. doi: 10.1016/S0262-8856(97)00002-4.
[18] S. Masnou, Disocclusion: A variational approach using level lines, IEEE Trans. Image Process., 11 (2002), 68-76. doi: 10.1109/83.982815.
[19] S. Masnou and J. M. Morel, Level lines based disocclusion, Proc. IEEE Int. Conf. on Image Processing, Chicago, IL, (1998), 259-263.
[20] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, Vol. 22, American Mathematical Society, Providence, RI, 2001. doi: 10.1090/ulect/022.
[21] M. Myllykoski, R. Glowinski, T. Kärkkäinen and T. Rossi, A new augmented Lagrangian approach for L1-mean curvature image denoising, SIAM J. Imaging Sci., 8 (2015), 95-125. doi: 10.1137/140962164.
[22] M. Nitzberg, D. Mumford and T. Shiota, Filtering, Segmentation, and Depth, Lecture Notes in Computer Science, Vol. 662, Springer Verlag, Berlin, 1993. doi: 10.1007/3-540-56484-5.
[23] S. Osher, M. Burger, D. Goldfarb, J. J. Xu and W. T. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), 460-489. doi: 10.1137/040605412.
[24] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Mathematics of Operations Research, 1 (1976), 97-116. doi: 10.1287/moor.1.2.97.
[25] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.
[26] T. Schoenemann, F. Kahl and D. Cremers, Curvature regularity for region-based image segmentation and inpainting: A linear programming relaxation, IEEE International Conference on Computer Vision (ICCV), 2009. doi: 10.1109/ICCV.2009.5459209.
[27] T. Schoenemann, F. Kahl, S. Masnou and D. Cremers, A linear framework for region-based image segmentation and inpainting involving curvature penalization, Int. J. Comput. Vision, 99 (2012), 53-68. doi: 10.1007/s11263-012-0518-7.
[28] D. Strong and T. Chan, Edge-preserving and scale-dependent properties of total variation regularization, Inverse Problems, 19 (2003), 165-187. doi: 10.1088/0266-5611/19/6/059.
[29] X. C. Tai, J. Hahn and G. J. Chung, A fast algorithm for Euler's elastica model using augmented Lagrangian method, SIAM J. Imaging Sciences, 4 (2011), 313-344. doi: 10.1137/100803730.
[30] C. Wu and X. C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sciences, 3 (2010), 300-339. doi: 10.1137/090767558.
[31] F. Yang, K. Chen and B. Yu, Homotopy method for a mean curvature-based denoising model, Appl. Numer. Math., 62 (2012), 185-200. doi: 10.1016/j.apnum.2011.12.001.
[32] W. Zhu and T. Chan, A variational model for capturing illusory contours using curvature, J. Math. Imaging Vision, 27 (2007), 29-40. doi: 10.1007/s10851-006-9695-8.
[33] W. Zhu and T. Chan, Image denoising using mean curvature of image surface, SIAM J. Imaging Sciences, 5 (2012), 1-32. doi: 10.1137/110822268.
[34] W. Zhu, T. Chan and S. Esedoḡlu, Segmentation with depth: A level set approach, SIAM J. Sci. Comput., 28 (2006), 1957-1973. doi: 10.1137/050622213.
[35] W. Zhu, X. C. Tai and T. Chan, Augmented Lagrangian method for a mean curvature based image denoising model, Inverse Probl. Imag., 7 (2013), 1409-1432. doi: 10.3934/ipi.2013.7.1409.
[36] W. Zhu, X. C. Tai and T. Chan, Image segmentation using Euler's elastica as the regularization, J. Sci. Comput., 57 (2013), 414-438. doi: 10.1007/s10915-013-9710-3.

The first row lists the noise-free image $f_{128}$ and the obtained clean image $u_{128}$. The second row presents the difference images $u_{N}-f_{N}$ with $N=32, 64, 128$ from left to right, respectively
From left to right: the original "Matlab Logo" $f$, the obtained clean image $u$, and the difference $u-f$. The parameters for these experiments are $\lambda=1$, $r_{1}=r_{2}=10$
Plots of the relative error of $u^{k}$ (Eq. 37), the residuals (Eq. 35), the relative errors of the Lagrange multipliers (Eq. 36), and the energy $E(u^{k})$. The left column is for the experiment in Fig. 1 with $N=64$, while the right column is for the "Matlab Logo" example. Note that for each experiment, one of the relative errors or residuals is around $1e{-}16$, the precision of double-precision floating-point arithmetic in Matlab, which demonstrates that a minimizer of the MC denoising model is closely approximated
The original image $f$ and three outputs $u_{i}$, $i=1, 2, 3$, for regularization parameters $\lambda=10, 1000,$ and $2400$, respectively. In these experiments, $h=1$ and $r_{1}=r_{2}=\lambda$
Three outputs $u_{i}$, $i=1, 2, 3$, for spatial mesh sizes $h=1, 0.25,$ and $0.2$, respectively, with the same input image $f$ as in Fig. 4. In these experiments, $\lambda=100$ and $r_{1}=r_{2}=\lambda/h$
The first row lists the original noisy "Cameraman" (SNR = 8.93), an intermediate result at iteration 5e+3, and the final clean image at iteration 1e+4. The second row displays the relative error of $u^{k}$ (Eq. 37), the relative residuals (Eq. 35), and the energy $E(u^{k})$ versus iterations. The parameters for this experiment are $\lambda=1e{+}4$, $r_{10}=500$, $r_{20}=1.5e{+}4$
Comparison of results using the $L^{1}$- and $L^{2}$-norms of mean curvature as regularizers, shown from left to right, respectively. The parameters for the $L^{2}$-norm based MC denoising model are $h=1$, $\lambda=r_{1}=r_{2}=300$
Augmented Lagrangian method for the MC denoising model
1. Initialization: choose $u^{0}$, $q^{0}$, $\mathbf{n}^{0}$, and $\lambda_{1}^{0}$, ${\boldsymbol{\lambda}}_{2}^{0}$. For $k \geq 1$, repeat Steps 2-4:
2. Compute an approximate minimizer $(u^{k}, q^{k}, \mathbf{n}^{k})$ of the augmented Lagrangian functional with the Lagrange multipliers $\lambda_{1}^{k-1}$, ${\boldsymbol{\lambda}}_{2}^{k-1}$ held fixed: $$(u^{k}, q^{k}, \mathbf{n}^{k}) \approx \operatorname{argmin}\, \mathcal{L}(u, q, \mathbf{n}; \lambda_{1}^{k-1}, {\boldsymbol{\lambda}}_{2}^{k-1}).$$
3. Update the Lagrange multipliers: $$\lambda_{1}^{k} = \lambda_{1}^{k-1}+r_{1}(q^{k}-\nabla\cdot \mathbf{n}^{k}), \qquad {\boldsymbol{\lambda}}_{2}^{k} = {\boldsymbol{\lambda}}_{2}^{k-1}+r_{2}\left(\frac{\nabla u^{k}}{\sqrt{1+|\nabla u^{k}|^{2}}}-\mathbf{n}^{k}\right).$$
4. Measure the relative residuals and stop the iteration once they fall below a threshold $\epsilon_{r}$.
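The multiplier update of Step 3 can be sketched directly in numpy. This is a schematic version only: it assumes central differences via `np.gradient` in place of the paper's discretization, takes the Step 2 minimizer $(u^{k}, q^{k}, \mathbf{n}^{k})$ as given inputs, and represents vector fields as pairs of arrays.

```python
import numpy as np

def update_multipliers(u, q, n, lam1, lam2, r1, r2, h=1.0):
    # Step 3: gradient-ascent updates of the two Lagrange multipliers.
    #   lam1 <- lam1 + r1 * (q - div n)
    #   lam2 <- lam2 + r2 * (grad u / sqrt(1 + |grad u|^2) - n)
    # n and lam2 are (x-component, y-component) pairs of 2-D arrays.
    uy, ux = np.gradient(u, h)
    norm = np.sqrt(1.0 + ux**2 + uy**2)
    _, nxx = np.gradient(n[0], h)          # d n_x / dx
    nyy, _ = np.gradient(n[1], h)          # d n_y / dy
    div_n = nxx + nyy
    lam1_new = lam1 + r1 * (q - div_n)
    lam2_new = (lam2[0] + r2 * (ux / norm - n[0]),
                lam2[1] + r2 * (uy / norm - n[1]))
    return lam1_new, lam2_new
```

When the two constraints are satisfied exactly (e.g. for a constant image with $q=0$ and $\mathbf{n}=\mathbf{0}$), the multipliers are left unchanged, which is the fixed-point behavior the stopping test in Step 4 detects.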
Alternating minimization method for solving the subproblems
1. Initialization: $\widetilde{u}^{0}=u^{k-1}$, $\widetilde{q}^{0}=q^{k-1}$, and $\widetilde{\mathbf{n}}^{0}=\mathbf{n}^{k-1}$.
2. With the Lagrange multipliers $\lambda_{1}=\lambda_{1}^{k-1}$ and ${\boldsymbol{\lambda}}_{2}={\boldsymbol{\lambda}}_{2}^{k-1}$ fixed, solve the following subproblems in turn: $$\widetilde{u}^{1} = \operatorname{argmin}\, \mathcal{L}(u, \widetilde{q}^{0}, \widetilde{\mathbf{n}}^{0};\lambda_{1}, {\boldsymbol{\lambda}}_{2}),$$ $$\widetilde{q}^{1} = \operatorname{argmin}\, \mathcal{L}(\widetilde{u}^{1}, q, \widetilde{\mathbf{n}}^{0};\lambda_{1}, {\boldsymbol{\lambda}}_{2}),$$ $$\widetilde{\mathbf{n}}^{1} = \operatorname{argmin}\, \mathcal{L}(\widetilde{u}^{1}, \widetilde{q}^{1}, \mathbf{n}; \lambda_{1}, {\boldsymbol{\lambda}}_{2}).$$
3. Set $(u^{k}, q^{k}, \mathbf{n}^{k})=(\widetilde{u}^{1}, \widetilde{q}^{1}, \widetilde{\mathbf{n}}^{1})$.
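The pattern above (hold all variables but one fixed, minimize exactly, cycle) can be illustrated on a toy problem. The quadratic objective below is purely illustrative and unrelated to the functional $\mathcal{L}$; it only demonstrates how coordinate-wise exact minimization drives the iterates to the joint minimizer.

```python
def alternating_minimization(sweeps=50):
    # Minimize L(u, q) = (u - 3)^2 + 2*(q - u)^2 by exact
    # coordinate-wise minimization, one variable at a time.
    u, q = 0.0, 0.0
    for _ in range(sweeps):
        u = (3.0 + 2.0 * q) / 3.0  # argmin over u: dL/du = 2(u-3) + 4(u-q) = 0
        q = u                      # argmin over q: dL/dq = 4(q-u) = 0
    return u, q

u, q = alternating_minimization()
# u and q both approach the joint minimizer (3, 3)
```

Each sweep contracts the error by a fixed factor here; for the nonconvex functional $\mathcal{L}$ of the paper, only approximate minimization per outer iteration is used, as stated in Step 2 of the algorithm.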
The $L^{1}$-norm of the difference $u_{N}-f_{N}$ and parameters for experiments in Fig. 1
| $N$ | $\|u_{N}-f_{N}\|_{L^{1}}$ | $r_{1}$ | $r_{2}$ |
|-----|---------------------------|---------|---------|
| 32  | 1.56e-2                   | 2e-3    | 1e-1    |
| 64  | 1.01e-2                   | 2e-3    | 1e-1    |
| 128 | 6.53e-3                   | 1e-4    | 1e-1    |
