# American Institute of Mathematical Sciences

January  2017, 11(1): 1-23. doi: 10.3934/ipi.2017001

## Augmented Lagrangian method for an Euler's elastica based segmentation model that promotes convex contours

1. Norwegian Defence Research Establishment, P.O. Box 25, NO-2027 Kjeller, Norway
2. Department of Mathematics, University of Bergen, P.O. Box 7803, NO-5020 Bergen, Norway
3. Department of Mathematics, University of Alabama, Box 870350, Tuscaloosa, AL 35487, USA

* Corresponding author: Wei Zhu

Received: September 2015. Revised: April 2016. Published: January 2017.

Fund Project: The first author is supported by the Norwegian Research Council eVita project 214889

In this paper, we propose an image segmentation model in which an $L^1$ variant of Euler's elastica energy is used as boundary regularization. An interesting feature of this model lies in its preference for convex segmentation contours. However, due to the high order and non-differentiability of Euler's elastica energy, it is nontrivial to minimize the associated functional. As in recent work on the ordinary $L^2$-Euler's elastica model in imaging, we propose using an augmented Lagrangian method to tackle the minimization problem. Specifically, we design a novel augmented Lagrangian functional that treats the mean curvature term differently than in previous works. The new treatment reduces the number of Lagrange multipliers employed and, more importantly, represents the curvature more effectively and faithfully. Numerical experiments validate the efficiency of the proposed augmented Lagrangian method and also demonstrate new features of this particular segmentation model, such as its shape-driven and data-driven properties.

Citation: Egil Bae, Xue-Cheng Tai, Wei Zhu. Augmented Lagrangian method for an Euler's elastica based segmentation model that promotes convex contours. Inverse Problems & Imaging, 2017, 11 (1) : 1-23. doi: 10.3934/ipi.2017001
Two vectors $\nu$ and $\mathbf{a}$, and a possible unit vector $\mathbf{b}$ that maximizes the function $\psi(\mathbf{b})$.
The original image (first row) and the segmentation results (second row) for different values of the curvature parameter $b$. Two features can be observed from these results: 1) data-driven property: as the parameter $b$ increases, objects of relatively small size are omitted from the final segmentation; 2) shape-driven property: for the same parameter $b$, among objects of equal area, the one without a convex shape is removed from the final segmentation (see the staircase-like shape). In this experiment, the other parameters are $a=10^{-4},r_{1}=50,r_{2}=20,r_{3}=5$
The original image and the segmentation result. The result demonstrates that the invisible part of the hat can be restored by the proposed segmentation model with a relatively large curvature parameter $b$. In this experiment, the parameters are $a=10^{-3},b=80, r_{1}=60,r_{2}=40,r_{3}=10$
The original image and the segmentation result. The result shows that only the "cap" of the mushroom is captured while the "stem" is omitted, so that the proposed segmentation model forms a convex segmentation contour when the curvature parameter $b$ is relatively large. In this experiment, the parameters are $a=10^{-4},b=18, r_{1}=20,r_{2}=10,r_{3}=5$
The effect of the curvature parameter $b$ on the final segmentation results. One may observe that the smaller the parameter $b$, the more salient the indentation of the hat becomes. In these two experiments, all the other parameters are the same as those in Fig. 3
The effect of the curvature parameter $b$ on the final segmentation results. One can see that the smaller the parameter $b$ is chosen, the more of the stem is included in the final segmentation. In these two experiments, all the other parameters are the same as those in Fig. 4
The plots of relative residuals (Eq. 43), relative errors in the Lagrange multipliers (Eq. 44), and relative error in $u^{k}$ (Eq. 45) for the two examples "hat" (left column) and "mushroom" (right column). These plots demonstrate the convergence of the iteration process
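Equations (43)-(45) themselves are not reproduced in this excerpt. As a hedged illustration only, the kind of relative-error monitor plotted here (the relative change in successive iterates $u^{k}$) can be sketched as follows; the exact normalization used in the paper may differ.

```python
import numpy as np

# Generic relative-change monitor, a stand-in for the quantity in Eq. (45);
# eps guards against division by zero when the previous iterate vanishes.
def relative_error(u_new, u_old, eps=1e-12):
    """||u^k - u^{k-1}|| / max(||u^{k-1}||, eps), using the Frobenius norm."""
    return np.linalg.norm(u_new - u_old) / max(np.linalg.norm(u_old), eps)
```

In practice such a quantity is logged every iteration and the loop stops once it falls below a prescribed tolerance.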
Augmented Lagrangian method for the $ECV$-$L^{1}$ segmentation model
1. Initialization: $\phi^{0}$, $q^{0}$, $\mathbf{p}^{0}$, $\mathbf{n}^{0}$, and $\mathbf{\lambda}_{1}^{0}$, $\lambda_{2}^{0}$, $\mathbf{\lambda}_{3}^{0}$. For $k \geq 1$, repeat Steps 2-4:
2. Compute an approximate minimizer $(\phi^{k},q^{k},\mathbf{p}^{k},\mathbf{n}^{k})$ of the augmented Lagrangian functional with the Lagrange multipliers $\mathbf{\lambda}_{1}^{k-1}$, $\lambda_{2}^{k-1}$, $\mathbf{\lambda}_{3}^{k-1}$ held fixed: $$(\phi^{k},q^{k},\mathbf{p}^{k},\mathbf{n}^{k}) \approx \operatorname{argmin}\ \mathcal{L}(\phi,q,\mathbf{p},\mathbf{n}, \mathbf{\lambda}_{1}^{k-1}, \lambda_{2}^{k-1}, \mathbf{\lambda}_{3}^{k-1}). \tag{32}$$
3. Update the Lagrange multipliers: $$\mathbf{\lambda}_{1}^{k} = \mathbf{\lambda}_{1}^{k-1}+r_{1}(\mathbf{p}^{k}-\nabla \phi^{k}), \quad \lambda_{2}^{k} = \lambda_{2}^{k-1}+r_{2}(q^{k}-\nabla\cdot \mathbf{n}^{k}), \quad \mathbf{\lambda}_{3}^{k} = \mathbf{\lambda}_{3}^{k-1}+r_{3}(|\mathbf{p}^{k}|\mathbf{n}^{k}-\mathbf{p}^{k}).$$
4. Measure the relative residuals and stop the iteration once they fall below a threshold $\epsilon_{r}$.
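As a hedged illustration of the multiplier update in Step 3, the formulas can be sketched in NumPy. The `grad`/`div` stencils below are generic forward/backward finite differences, not necessarily the discretization used in the paper, and the Step-2 inner minimization is omitted entirely.

```python
import numpy as np

def grad(phi):
    """Forward-difference gradient of a 2-D array; returns shape (2, H, W)."""
    gx = np.diff(phi, axis=0, append=phi[-1:, :])
    gy = np.diff(phi, axis=1, append=phi[:, -1:])
    return np.stack([gx, gy])

def div(n):
    """Backward-difference divergence of a field of shape (2, H, W)."""
    dx = np.diff(n[0], axis=0, prepend=n[0][:1, :])
    dy = np.diff(n[1], axis=1, prepend=n[1][:, :1])
    return dx + dy

def update_multipliers(phi, q, p, n, lam1, lam2, lam3, r1, r2, r3):
    """One Step-3 update: each multiplier moves by r times its constraint residual."""
    lam1 = lam1 + r1 * (p - grad(phi))
    lam2 = lam2 + r2 * (q - div(n))
    pnorm = np.sqrt((p ** 2).sum(axis=0))      # pointwise |p|
    lam3 = lam3 + r3 * (pnorm * n - p)
    return lam1, lam2, lam3
```

Note that when a constraint is satisfied exactly (e.g. $\mathbf{p}^{k}=\nabla\phi^{k}$), the corresponding residual vanishes and that multiplier is left unchanged.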
Alternating minimization method for solving the subproblems.
1. Initialization: $\widetilde{\phi}^{0}=\phi^{k-1}$, $\widetilde{q}^{0}=q^{k-1}$, $\widetilde{\mathbf{p}}^{0}=\mathbf{p}^{k-1}$, and $\widetilde{\mathbf{n}}^{0}=\mathbf{n}^{k-1}$.
2. With the Lagrange multipliers $\mathbf{\lambda}_{1}=\mathbf{\lambda}_{1}^{k-1}$, $\lambda_{2}=\lambda_{2}^{k-1}$, and $\mathbf{\lambda}_{3}=\mathbf{\lambda}_{3}^{k-1}$ fixed, solve the following subproblems: $$\widetilde{\phi}^{1} = \operatorname{argmin}\ \mathcal{L}(\phi,\widetilde{q}^{0},\widetilde{\mathbf{p}}^{0},\widetilde{\mathbf{n}}^{0}, \mathbf{\lambda}_{1}, \lambda_{2}, \mathbf{\lambda}_{3}) \tag{33}$$ $$\widetilde{q}^{1} = \operatorname{argmin}\ \mathcal{L}(\widetilde{\phi}^{1},q,\widetilde{\mathbf{p}}^{0},\widetilde{\mathbf{n}}^{0}, \mathbf{\lambda}_{1}, \lambda_{2}, \mathbf{\lambda}_{3}) \tag{34}$$ $$\widetilde{\mathbf{p}}^{1} = \operatorname{argmin}\ \mathcal{L}(\widetilde{\phi}^{1},\widetilde{q}^{1},\mathbf{p},\widetilde{\mathbf{n}}^{0}, \mathbf{\lambda}_{1}, \lambda_{2}, \mathbf{\lambda}_{3}) \tag{35}$$ $$\widetilde{\mathbf{n}}^{1} = \operatorname{argmin}\ \mathcal{L}(\widetilde{\phi}^{1},\widetilde{q}^{1},\widetilde{\mathbf{p}}^{1},\mathbf{n}, \mathbf{\lambda}_{1}, \lambda_{2}, \mathbf{\lambda}_{3}) \tag{36}$$
3. Set $(\phi^{k},q^{k},\mathbf{p}^{k},\mathbf{n}^{k})=(\widetilde{\phi}^{1},\widetilde{q}^{1},\widetilde{\mathbf{p}}^{1}, \widetilde{\mathbf{n}}^{1})$.
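The Gauss-Seidel structure of Eqs. (33)-(36), where each variable is updated with the others held at their newest values, can be sketched as below. The `solvers` entries are hypothetical stand-ins for the four subproblem solvers, whose closed-form or FFT-based solutions are derived in the paper but not reproduced here.

```python
# Hypothetical skeleton of one alternating-minimization sweep.
# Each solver is a callable (phi, q, p, n, multipliers) -> updated variable.
def alternating_sweep(state, multipliers, solvers):
    """Update phi, q, p, n in turn, each seeing the freshest other variables."""
    phi, q, p, n = state
    phi = solvers["phi"](phi, q, p, n, multipliers)   # Eq. (33)
    q   = solvers["q"](phi, q, p, n, multipliers)     # Eq. (34)
    p   = solvers["p"](phi, q, p, n, multipliers)     # Eq. (35)
    n   = solvers["n"](phi, q, p, n, multipliers)     # Eq. (36)
    return phi, q, p, n
```

The sequential ordering matters: $\widetilde{q}^{1}$ is computed from $\widetilde{\phi}^{1}$, not $\widetilde{\phi}^{0}$, which is what distinguishes this sweep from a Jacobi-style simultaneous update.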
Image sizes, numbers of iterations, and computation times for the two real images.
| Image | Size of image | Number of iterations | Time (sec) |
| --- | --- | --- | --- |
| Fig. 3 | $321\times 481$ | 1600 | 472.3 |
| Fig. 4 | $481\times 321$ | 1600 | 480.8 |
