# American Institute of Mathematical Sciences

March  2021, 17(2): 805-825. doi: 10.3934/jimo.2019135

## An alternating linearization bundle method for a class of nonconvex optimization problem with inexact information

1. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
2. School of Information Engineering, Dalian Ocean University, Dalian 116024, China
3. School of Finance, Zhejiang University of Finance and Economics, Hangzhou 310018, China

* Corresponding author: lvjian328@163.com (Jian Lv)

Received: May 2019. Revised: June 2019. Early access: October 2019. Published: March 2021.

Fund Project: The authors' work is supported by the Natural Science Foundation of China (Grant Nos. 11801503 and 11701061) and the Natural Science Foundation of Shandong (Grant No. ZR201807061177).

We propose an alternating linearization bundle method for minimizing the sum of a nonconvex function and a convex function. The convex function is assumed to be "simple" in the sense that finding its proximal-like point is relatively easy. The nonconvex function is known through oracles which provide inexact information. The errors in function values and subgradient evaluations might be unknown, but are bounded by universal constants. We examine an alternating linearization bundle method in this setting and obtain reasonable convergence properties. Numerical results show the good performance of the method.
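To make the notion of a "simple" convex function concrete: the paper does not fix a particular $h$, but a standard example is $h(x) = \|x\|_1$, whose proximal-like point has the closed-form soft-thresholding solution. The sketch below is illustrative only and is not taken from the paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal point of h(x) = ||x||_1 with step t, i.e. the minimizer of
    ||x||_1 + (1 / (2t)) * ||x - v||^2, computed by soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

For instance, `prox_l1(np.array([3.0, -0.5]), 1.0)` shrinks each coordinate toward zero by the step size, zeroing out small entries; it is this cheap, exact solvability that the word "simple" refers to.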

Citation: Hui Gao, Jian Lv, Xiaoliang Wang, Liping Pang. An alternating linearization bundle method for a class of nonconvex optimization problem with inexact information. Journal of Industrial & Management Optimization, 2021, 17 (2) : 805-825. doi: 10.3934/jimo.2019135
[Figure: The 3D image of the nonconvex function $f(x)$]
[Figure: The 3D image of the convex function $h_1(x)$]
[Figure: The 3D image of the convex function $h_2(x)$]
**Algorithm 1** (An alternating linearization bundle algorithm)

**Step 0 (Initialization).** Select a starting point $y^{0}\in \mathbb{R}^{n}$ and set $x^{0}=y^{0}=z^{0}$. Choose a stopping tolerance $tol \geq 0$, a descent parameter $m\in (0,1)$, and a proximal parameter $u_{0}>0$. Initialize the iteration counter $\ell=0$ and the serious-step counter $k=k(\ell)=0$ with $j_{0}=0$, and set the bundle index set $L_{0}^{f}:=\{0\}$. Compute $f^{0}$ and $\tilde{g}^{0}_{f}\in\partial f(x^{0})+B_{\varepsilon_{0}}(0)$, and set the bundle information $(e_{f}^{0,0}, d_{0}^{0},\triangle_{0}^{0})=(0,0,0)$. Take $s_{h}^{-1}\in\partial h(y^{0})=\partial h(x^{0})$ and set $\bar{h}_{-1}(\cdot)=h(y^{0})+\langle s_{h}^{-1},\cdot-y^{0}\rangle$.

**Step 1 (Solving the $f$-subproblem).** Find $z^{\ell+1}$ by solving subproblem (17), and set
$$\bar{\varphi}_{\ell}(\cdot) = \check{\varphi}_{\ell}(z^{\ell+1})+\langle s_{\varphi}^{\ell}, \cdot-z^{\ell+1}\rangle \quad \text{with} \quad s_{\varphi}^{\ell} = u_{\ell}(\hat{x}^{k}-z^{\ell+1})-s_{h}^{\ell-1}. \tag{26}$$

**Step 2 (Solving the $h$-subproblem).** Find $y^{\ell+1}$ by solving subproblem (18), and set
$$\bar{h}_{\ell}(\cdot) = h(y^{\ell+1})+\langle s_{h}^{\ell}, \cdot-y^{\ell+1}\rangle \quad \text{with} \quad s_{h}^{\ell} = u_{\ell}(\hat{x}^{k}-y^{\ell+1})-s_{\varphi}^{\ell}. \tag{27}$$
Compute $\delta_{\ell}$ together with the aggregate subgradient and linearization error of $F$:
$$s^{\ell} := u_{\ell}(\hat{x}^{k}-y^{\ell+1}) \quad \text{with} \quad E_{\ell} = \hat{F}^{k}-[\bar{\varphi}_{\ell}(\hat{x}^{k})+\bar{h}_{\ell}(\hat{x}^{k})]. \tag{28}$$

**Step 3 (Stopping criterion).** Compute $f^{\ell+1}$, $h(y^{\ell+1})$, $s^{\ell}_{h}\in \partial h(y^{\ell+1})$, and $\tilde{g}^{\ell+1}_{f} \in \partial f(y^{\ell+1})+B_{\varepsilon_{\ell+1}}(0)$. If $\delta_{\ell}\leq tol$, stop. Otherwise select a new index set $L_{\ell+1}^{f}$ satisfying
$$L_{\ell+1}^{f}\supseteq \bigl\{\ell+1, j_{k}\bigr\} \quad \text{and} \quad L_{\ell+1}^{f}\supseteq \bigl\{j\in L_{\ell}^{f}:\lambda_{j}^{\ell}>0\bigr\}. \tag{29}$$

**Step 4 (Descent test).** Compute $F^{\ell+1}=f^{\ell+1}+h(y^{\ell+1})$. If $F^{\ell+1}\leq \hat{F}^{k}-m\delta_{\ell}$, declare a descent step and set $x^{k+1}=y^{\ell+1}$, $k(\ell+1)=k+1$. Otherwise, declare a null step and set $x^{k+1}=\hat{x}^{k}$, $k(\ell+1)=k$.

**Step 5 (Bundle update and loop).** Update the proximal parameter $u_{\ell}$ using a positive constant $u_{\max}$: if iteration $\ell$ produced a serious step, set $u_{\ell+1}\leq u_{\max}$; if it produced a null step, set $u_{\ell+1}=u_{\ell}$. Select the new index set $L_{\ell+1}^{f}$, increase $\ell$ by 1, and go to Step 1.
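The core alternating structure of Steps 1–2 can be sketched in a few lines: each half-step minimizes one function plus the *linearization* of the other, and the subgradient updates follow (26)–(27). The toy code below is only a structural illustration under simplifying assumptions — both subproblems are solved by exact proximal maps, every step is accepted, and the bundle model, descent test, and inexact oracle (the paper's actual contributions) are omitted.

```python
import numpy as np

def alternating_linearization(x0, prox_f, prox_h, u=1.0, iters=60):
    """Structural sketch of the alternating steps of Algorithm 1.

    prox_f(v, t) and prox_h(v, t) are assumed to return the minimizer of
    f(z) + ||z - v||^2 / (2t) (resp. for h).
    """
    x_hat = np.asarray(x0, dtype=float).copy()
    s_h = np.zeros_like(x_hat)          # subgradient of the current h-linearization
    for _ in range(iters):
        # f-subproblem (cf. (17)): h replaced by its linearization
        z = prox_f(x_hat - s_h / u, 1.0 / u)
        s_phi = u * (x_hat - z) - s_h   # update (26)
        # h-subproblem (cf. (18)): the f-model replaced by its linearization
        y = prox_h(x_hat - s_phi / u, 1.0 / u)
        s_h = u * (x_hat - y) - s_phi   # update (27)
        x_hat = y                       # accept unconditionally (no descent test here)
    return x_hat
```

As a sanity check, with $f(x)=\tfrac12\|x-a\|^2$ and $h=\|\cdot\|_1$ the iterates approach the soft-thresholded point $\operatorname{sign}(a)\max(|a|-1,0)$, the known minimizer of $f+h$.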
Comparison between Algorithm 1, $\texttt{PPBM}$ and $\texttt{ALBM}$ for Example 5.1
| Problem | Alg. | n | $F_{\texttt{final}}$ | Ni | Nd | NF |
|---|---|---|---|---|---|---|
| CB2 | Alg. 1 | 2 | 3.342427 | 2 | 1 | 2 |
| | PPBM | | 3.343146 | 2 | 1 | 2 |
| | ALBM | | 3.343146 | 2 | 1 | 2 |
| CB3 | Alg. 1 | 2 | 24.477350 | 7 | 4 | 7 |
| | PPBM | | 24.479795 | 9 | 4 | 9 |
| | ALBM | | 24.479795 | 11 | 8 | 11 |
| LQ | Alg. 1 | 2 | -0.998473 | 13 | 10 | 13 |
| | PPBM | | -0.999989 | 15 | 12 | 15 |
| | ALBM | | -0.999989 | 18 | 17 | 18 |
| Mifflin1 | Alg. 1 | 2 | 48.153612 | 4 | 3 | 4 |
| | PPBM | | 48.153612 | 3 | 2 | 3 |
| | ALBM | | 48.153612 | 4 | 2 | 4 |
| Rosen-Suzuki | Alg. 1 | 4 | 39.698418 | 6 | 5 | 6 |
| | PPBM | | 39.715617 | 7 | 6 | 7 |
| | ALBM | | 39.715617 | 9 | 8 | 9 |
| Shor | Alg. 1 | 5 | 50.250278 | 4 | 3 | 4 |
| | PPBM | | 50.250278 | 5 | 1 | 5 |
| | ALBM | | 50.250278 | 4 | 1 | 4 |
| MAXL | Alg. 1 | 20 | 0.557216 | 17 | 7 | 17 |
| | PPBM | | 0.552786 | 19 | 6 | 19 |
| | ALBM | | 0.552786 | 22 | 2 | 22 |
Comparison between Algorithm 1, $\texttt{PBM}$ and $\texttt{IPBM}$ for objective function $F_1$
| n | Algorithm | $F_{\texttt{final}}$ | Nd | Ni | Time |
|---|---|---|---|---|---|
| 5 | Alg. 1 | 4.22e-007 | 28 | 28 | 0.5833 |
| | IPBM | 3.13e-007 | 30 | 30 | 0.6032 |
| | PBM | 2.37e-007 | 31 | 31 | 0.6817 |
| 6 | Alg. 1 | 1.88e-007 | 27 | 28 | 0.5024 |
| | IPBM | 2.39e-007 | 27 | 30 | 0.5924 |
| | PBM | 1.43e-006 | 27 | 33 | 1.0164 |
| 7 | Alg. 1 | 3.82e-005 | 33 | 36 | 0.5437 |
| | IPBM | 4.83e-007 | 32 | 37 | 0.5771 |
| | PBM | 3.34e-006 | 32 | 34 | 1.4873 |
| 8 | Alg. 1 | 4.16e-005 | 38 | 41 | 0.8272 |
| | IPBM | 4.63e-005 | 37 | 43 | 0.9136 |
| | PBM | 4.25e-003 | 43 | 48 | 5.7492 |
| 9 | Alg. 1 | 5.18e-005 | 33 | 36 | 1.1306 |
| | IPBM | 6.48e-006 | 37 | 39 | 1.3848 |
| | PBM | 2.41e-004 | 37 | 44 | 2.1447 |
| 10 | Alg. 1 | 4.96e-005 | 38 | 43 | 0.9137 |
| | IPBM | 4.17e-006 | 37 | 46 | 0.9747 |
| | PBM | 5.43e-005 | 44 | 57 | 4.0254 |
| 11 | Alg. 1 | 4.37e-005 | 48 | 55 | 1.0873 |
| | IPBM | 2.73e-006 | 47 | 56 | 1.1473 |
| | PBM | 4.67e-005 | 62 | 69 | 1.2734 |
| 12 | Alg. 1 | 4.37e-006 | 48 | 65 | 1.2063 |
| | IPBM | 5.46e-006 | 51 | 67 | 1.3593 |
| | PBM | 4.73e-006 | 74 | 80 | 1.4932 |
| 13 | Alg. 1 | 4.58e-005 | 48 | 68 | 1.1283 |
| | IPBM | 3.28e-006 | 54 | 72 | 1.2863 |
| | PBM | 8.53e-005 | 82 | 93 | 2.3602 |
| 14 | Alg. 1 | 4.63e-005 | 56 | 65 | 1.4853 |
| | IPBM | 3.17e-005 | 54 | 68 | 1.7437 |
| | PBM | 3.14e-006 | 63 | 72 | 2.4185 |
| 15 | Alg. 1 | 1.16e-004 | 46 | 58 | 1.0673 |
| | IPBM | 2.41e-004 | 52 | 64 | 1.1536 |
| | PBM | 3.38e-003 | 57 | 72 | 1.2183 |
| 20 | Alg. 1 | 4.62e-004 | 81 | 96 | 4.4328 |
| | IPBM | 3.84e-003 | 79 | 95 | 4.8753 |
| | PBM | 0.0198 | 104 | 131 | 10.7273 |
Comparison between Algorithm 1, $\texttt{PBM}$ and $\texttt{IPBM}$ for objective function $F_2$
| n | Algorithm | $F_{\texttt{final}}$ | Ki | Ni | Time |
|---|---|---|---|---|---|
| 5 | Alg. 1 | 2.18e-004 | 21 | 26 | 0.2063 |
| | IPBM | 3.86e-004 | 19 | 26 | 0.2165 |
| | PBM | 5.74e-004 | 23 | 31 | 0.6639 |
| 6 | Alg. 1 | 3.65e-003 | 27 | 33 | 0.3452 |
| | IPBM | 3.61e-003 | 24 | 29 | 0.3277 |
| | PBM | 3.46e-003 | 22 | 31 | 0.3532 |
| 7 | Alg. 1 | 2.84e-003 | 36 | 41 | 1.5834 |
| | IPBM | 2.65e-004 | 43 | 56 | 1.9734 |
| | PBM | 2.14e-003 | 56 | 74 | 5.8017 |
| 8 | Alg. 1 | 2.88e-004 | 24 | 33 | 0.4110 |
| | IPBM | 3.28e-004 | 29 | 36 | 0.4368 |
| | PBM | 1.29e-003 | 31 | 40 | 1.2455 |
| 9 | Alg. 1 | 2.46e-004 | 29 | 53 | 1.1681 |
| | IPBM | 5.49e-004 | 32 | 54 | 1.2518 |
| | PBM | 2.74e-003 | 36 | 45 | 1.8386 |
| 10 | Alg. 1 | 3.48e-003 | 19 | 34 | 1.0538 |
| | IPBM | 2.86e-003 | 21 | 36 | 1.2023 |
| | PBM | 1.55e-002 | 63 | 88 | 7.2973 |
| 11 | Alg. 1 | 5.75e-003 | 36 | 57 | 1.3976 |
| | IPBM | 8.64e-003 | 37 | 55 | 1.4683 |
| | PBM | 1.17e-002 | 64 | 81 | 11.2496 |
| 12 | Alg. 1 | 1.32e-002 | 29 | 54 | 1.3056 |
| | IPBM | 1.22e-002 | 29 | 57 | 1.3205 |
| | PBM | 1.24e-002 | 64 | 79 | 1.6228 |
| 13 | Alg. 1 | 4.16e-003 | 35 | 71 | 1.5946 |
| | IPBM | 3.144-004 | 38 | 63 | 1.6634 |
| | PBM | 2.84e-003 | 67 | 83 | 2.1066 |
| 14 | Alg. 1 | 3.53e-003 | 30 | 46 | 1.2391 |
| | IPBM | 3.85e-003 | 32 | 48 | 1.3884 |
| | PBM | 7.95e-003 | 58 | 81 | 2.4639 |
| 15 | Alg. 1 | 2.43e-002 | 41 | 67 | 0.9864 |
| | IPBM | 1.41e-003 | 49 | 76 | 1.0356 |
| | PBM | 4.63e-002 | 66 | 84 | 3.7320 |
| 20 | Alg. 1 | 1.53e-004 | 33 | 51 | 1.4601 |
| | IPBM | 3.74e-004 | 35 | 56 | 1.6054 |
| | PBM | 7.78e-002 | 87 | 119 | 5.8107 |
