January 2017, 13(1): 93-112. doi: 10.3934/jimo.2016006

A smoothing iterative method for quantile regression with nonconvex $ \ell_p $ penalty

1. Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, China

2. No.1 High School of Huojia, Xinxiang, Henan, 453800, China

3. Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, China

4. School of Insurance and Economics, University of International Business and Economics, Beijing, 100029, China

5. School of Mathematics, University of Southampton, Southampton SO17 1BJ, United Kingdom

*Corresponding author: Lingchen Kong

Received: June 2014. Revised: December 2015. Published: March 2016.

Fund Project: The first author is supported by the National Natural Science Foundation of China (11431002, 11171018).

The high-dimensional linear regression model has attracted much attention in fields such as information technology, biology, chemometrics, economics, and finance. In this paper, we use smoothing techniques to handle high-dimensional sparse models via quantile regression with the nonconvex $ \ell_p $ penalty ($ 0<p<1 $). We introduce two kinds of smoothing functions and establish approximation error estimates for each of them. By smoothing the quantile loss function, we derive two types of lower bounds for the nonzero entries of any local solution of the smoothed quantile regression problem with the nonconvex $ \ell_p $ penalty. Then, with the help of $ \ell_1 $ regularization, we propose a smoothing iterative method for the smoothed quantile regression problem with a weighted $ \ell_1 $ penalty, establish its global convergence, and illustrate its efficiency through numerical experiments.
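
As a point of reference for the model described in the abstract, the sketch below writes out an $ \ell_p $-penalized quantile regression objective in NumPy. The $1/m$ scaling of the loss and the placement of the regularization parameter $\lambda$ are assumptions for illustration; the paper's exact formulation and its smoothing functions are not reproduced here.

```python
import numpy as np

def check_loss(residual, tau):
    """Quantile (check) loss rho_tau(r) = r * (tau - 1{r < 0})."""
    return residual * (tau - (residual < 0))

def penalized_objective(beta, X, y, tau, lam, p):
    """l_p-penalized quantile regression objective (0 < p < 1), as a sketch:
    (1/m) * sum_i rho_tau(y_i - x_i^T beta) + lam * ||beta||_p^p."""
    m = X.shape[0]
    residual = y - X @ beta
    return check_loss(residual, tau).sum() / m + lam * np.sum(np.abs(beta) ** p)
```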

Citation: Lianjun Zhang, Lingchen Kong, Yan Li, Shenglong Zhou. A smoothing iterative method for quantile regression with nonconvex $ \ell_p $ penalty. Journal of Industrial & Management Optimization, 2017, 13 (1) : 93-112. doi: 10.3934/jimo.2016006

Figure 1.  Function $\rho_\tau(\theta)$ and its smoothing relaxation $\rho_{\tau, 1}(\theta, \delta)$
Figure 2.  Function $\varrho_\tau(\theta, \delta)$ and its smoothing relaxation $\rho_{\tau, 2}(\theta, \delta)$
Figure 3.  Errors for different $\tau$, $n$, and noise in Example 1
Figure 4.  Errors for different $\tau$, $n$, and noise in Example 2
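
For readers without the figures at hand, the minimal sketch below illustrates how a smoothed relaxation approximates the check function. It uses a standard CHKS-type smoothing of the plus function, which is an assumption for illustration and not necessarily the paper's $\rho_{\tau, 1}$ or $\rho_{\tau, 2}$; for this choice the uniform gap to the check function is at most $\delta/2$.

```python
import numpy as np

def smoothed_check_loss(theta, tau, delta):
    """A CHKS-type smoothing of the check function:
    rho_{tau,delta}(theta) = (tau - 1/2) * theta + sqrt(theta^2 + delta^2) / 2.
    As delta -> 0 this tends to rho_tau(theta) = theta * (tau - 1{theta < 0})."""
    return (tau - 0.5) * theta + np.sqrt(theta ** 2 + delta ** 2) / 2.0

# Uniform approximation error: 0 <= rho_{tau,delta}(theta) - rho_tau(theta) <= delta / 2.
theta = np.linspace(-2.0, 2.0, 401)
gap = smoothed_check_loss(theta, tau=0.3, delta=0.1) - np.maximum(0.3 * theta, -0.7 * theta)
assert gap.min() >= -1e-12 and gap.max() <= 0.05 + 1e-12
```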
Table 1.  The framework of MIRL1
Modified iterative reweighted $ \ell_1 $ minimization (MIRL1)
Initialize $\tau, \gamma\in(0, 1), \delta^1>0, \epsilon>0, \beta^{0}, w^{1}, M, \mu^{1}$;
For $t=1: M$
     Initialize $\beta^{t, 1}=\beta^{t-1}$;
     While $\|\beta^{t, k+1}-\beta^{t, k}\|_2\geq\mu^t \max\left\{1, \|\beta^{t, k}\|_2\right\}$
          Compute $L_{t, k}\geq \frac{1}{2\delta^t }\min\Big\{\lambda_{\max}\Big(\sum_{i\in\Omega\left(\beta^{t, k}, \delta^t \right)}x_{i}x_{i}^\top \Big), \lambda_{\max}(XX^\top )\Big\}$;
          Compute $\widetilde{\beta}^{t, k}=\beta^{t, k}-\nabla S_\tau(\beta^{t, k}, \delta^t )/(L_{t, k}+\epsilon)$;
          Compute $\beta^{t, k+1}=\text{sign}\left(\widetilde{\beta}^{t, k}\right)\circ\max\left\{|\widetilde{\beta}^{t, k}|-\frac{\mu^t }{L_{t, k}+\epsilon}w^t, 0\right\}$.
     End
     Update $\delta^{t+1}=\gamma\delta^t $ and $\beta^t =\beta^{t, k+1}$;
     Update $w^{t+1}$ from $\beta^{t-1}$ and $\beta^t $ based on (46), (47) and (48);
End
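
The following NumPy sketch mirrors the MIRL1 framework of Table 1 under stated assumptions: the smoothed loss $S_\tau$ uses the CHKS-type relaxation from the sketch above, the step-size constant $L_{t, k}$ is replaced by a simpler spectral bound, and, since the weight update (46)-(48) is not reproduced on this page, a standard $\ell_p$-style reweighting $w_j=(|\beta_j|+\epsilon)^{p-1}$ is used as a placeholder.

```python
import numpy as np

def grad_smoothed_loss(beta, X, y, tau, delta):
    """Gradient of S_tau(beta, delta) = (1/m) * sum_i rho_{tau,delta}(y_i - x_i^T beta),
    with the CHKS-type smoothing rho_{tau,delta}(r) = (tau - 1/2) r + sqrt(r^2 + delta^2)/2."""
    m = X.shape[0]
    r = y - X @ beta
    drho = (tau - 0.5) + r / (2.0 * np.sqrt(r ** 2 + delta ** 2))
    return -(X.T @ drho) / m

def mirl1(X, y, tau=0.5, p=0.5, gamma=0.5, delta=1.0, mu=1e-2,
          eps=1e-6, outer_iters=10, inner_iters=200):
    """A sketch of the MIRL1 framework in Table 1 (details filled in as assumptions):
    weighted-l_1 proximal-gradient inner steps with a shrinking smoothing parameter."""
    m, n = X.shape
    beta = np.zeros(n)
    w = np.ones(n)                                     # initial weights w^1
    for t in range(outer_iters):
        L = np.linalg.norm(X, 2) ** 2 / (2.0 * delta)  # simplified stand-in for L_{t,k}
        for k in range(inner_iters):
            beta_tilde = beta - grad_smoothed_loss(beta, X, y, tau, delta) / (L + eps)
            beta_new = np.sign(beta_tilde) * np.maximum(
                np.abs(beta_tilde) - mu * w / (L + eps), 0.0)  # weighted soft-thresholding
            converged = np.linalg.norm(beta_new - beta) < mu * max(1.0, np.linalg.norm(beta))
            beta = beta_new
            if converged:
                break
        delta *= gamma                                 # delta^{t+1} = gamma * delta^t
        # Placeholder for the weight update (46)-(48): a standard l_p-style reweighting.
        w = 1.0 / (np.abs(beta) + eps) ** (1.0 - p)
    return beta
```

A call such as `mirl1(X, y, tau=0.3)` returns the estimated coefficient vector; the tolerances and iteration counts above are illustrative defaults, not the settings used in the experiments of Tables 2-5.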
Table 2.  $n=512, m=128$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0063 | 0.0078 | 0 | 1.0000 | 2.7474
0.1 | $LN(0, \sigma^2)$ | 0.0146 | 0.0177 | 0 | 0.9429 | 3.0230
0.3 | $N(0, \sigma^2)$ | 0.0067 | 0.0091 | 0 | 1.0000 | 2.4950
0.3 | $LN(0, \sigma^2)$ | 0.0167 | 0.0209 | 0 | 0.8762 | 2.2424
0.5 | $N(0, \sigma^2)$ | 0.0082 | 0.0097 | 0 | 1.0000 | 2.2793
0.5 | $LN(0, \sigma^2)$ | 0.0175 | 0.0201 | 0 | 0.8563 | 2.1709
0.7 | $N(0, \sigma^2)$ | 0.0069 | 0.0093 | 0 | 1.0000 | 2.4489
0.7 | $LN(0, \sigma^2)$ | 0.0208 | 0.0258 | 0 | 0.8028 | 2.4478
0.9 | $N(0, \sigma^2)$ | 0.0062 | 0.0077 | 0 | 1.0000 | 2.6616
0.9 | $LN(0, \sigma^2)$ | 0.0243 | 0.0299 | 0 | 0.6254 | 2.7624
Table 3.  $n=1024, m=256$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0069 | 0.0091 | 0 | 1.0000 | 13.3403
0.1 | $LN(0, \sigma^2)$ | 0.0194 | 0.0257 | 0 | 0.9818 | 15.4523
0.3 | $N(0, \sigma^2)$ | 0.0070 | 0.0093 | 0 | 1.0000 | 10.1645
0.3 | $LN(0, \sigma^2)$ | 0.0193 | 0.0244 | 0 | 1.0000 | 11.2862
0.5 | $N(0, \sigma^2)$ | 0.0076 | 0.0096 | 0 | 1.0000 | 11.4844
0.5 | $LN(0, \sigma^2)$ | 0.0201 | 0.0252 | 0 | 1.0000 | 11.0627
0.7 | $N(0, \sigma^2)$ | 0.0074 | 0.0097 | 0 | 1.0000 | 12.2611
0.7 | $LN(0, \sigma^2)$ | 0.0206 | 0.0264 | 0 | 0.9818 | 12.3306
0.9 | $N(0, \sigma^2)$ | 0.0070 | 0.0093 | 0 | 1.0000 | 13.6169
0.9 | $LN(0, \sigma^2)$ | 0.0226 | 0.0286 | 0 | 0.9538 | 14.1217
Table 4.  $n=512, m=128$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0071 | 0.0096 | 0 | 1.0000 | 3.3895
0.1 | $LN(0, \sigma^2)$ | 0.0185 | 0.0242 | 0 | 1.0000 | 2.5908
0.3 | $N(0, \sigma^2)$ | 0.0071 | 0.0098 | 0 | 1.0000 | 2.6341
0.3 | $LN(0, \sigma^2)$ | 0.0189 | 0.0246 | 0 | 1.0000 | 2.1094
0.5 | $N(0, \sigma^2)$ | 0.0071 | 0.0094 | 0 | 1.0000 | 1.7425
0.5 | $LN(0, \sigma^2)$ | 0.0184 | 0.0224 | 0 | 1.0000 | 1.3525
0.7 | $N(0, \sigma^2)$ | 0.0070 | 0.0095 | 0 | 1.0000 | 1.6148
0.7 | $LN(0, \sigma^2)$ | 0.0196 | 0.0249 | 0 | 0.9667 | 1.3640
0.9 | $N(0, \sigma^2)$ | 0.0072 | 0.0095 | 0 | 1.0000 | 2.3751
0.9 | $LN(0, \sigma^2)$ | 0.0186 | 0.0227 | 0 | 1.0000 | 2.8742
Table 5.  $n=1024, m=256$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0071 | 0.0096 | 0 | 1.0000 | 17.5535
0.1 | $LN(0, \sigma^2)$ | 0.0186 | 0.0246 | 0 | 1.0000 | 13.5658
0.3 | $N(0, \sigma^2)$ | 0.0074 | 0.0093 | 0 | 1.0000 | 13.3480
0.3 | $LN(0, \sigma^2)$ | 0.0196 | 0.0261 | 0 | 1.0000 | 8.8331
0.5 | $N(0, \sigma^2)$ | 0.0076 | 0.0094 | 0 | 1.0000 | 9.8347
0.5 | $LN(0, \sigma^2)$ | 0.0183 | 0.0241 | 0 | 1.0000 | 7.6007
0.7 | $N(0, \sigma^2)$ | 0.0069 | 0.0093 | 0 | 1.0000 | 13.6823
0.7 | $LN(0, \sigma^2)$ | 0.0200 | 0.0272 | 0 | 1.0000 | 7.9791
0.9 | $N(0, \sigma^2)$ | 0.0072 | 0.0094 | 0 | 1.0000 | 18.5440
0.9 | $LN(0, \sigma^2)$ | 0.0184 | 0.0249 | 0 | 1.0000 | 14.3975