January 2017, 13(1): 93-112. doi: 10.3934/jimo.2016006

A smoothing iterative method for quantile regression with nonconvex $ \ell_p $ penalty

1. Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, China

2. No.1 High School of Huojia, Xinxiang, Henan, 453800, China

3. Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, China

4. School of Insurance and Economics, University of International Business and Economics, Beijing, 100029, China

5. School of Mathematics, University of Southampton, Southampton SO17 1BJ, United Kingdom

*Corresponding author: Lingchen Kong

Received: June 2014. Revised: December 2015. Published: March 2016.

Fund Project: The first author is supported by the National Natural Science Foundation of China (11431002, 11171018).

The high-dimensional linear regression model has attracted much attention in areas such as information technology, biology, chemometrics, economics, finance and other scientific fields. In this paper, we use smoothing techniques to deal with high-dimensional sparse models via quantile regression with the nonconvex $ \ell_p $ penalty ($ 0<p<1 $). We introduce two kinds of smoothing functions and estimate the approximation error incurred by each of them. By smoothing the quantile loss function, we derive two types of lower bounds for the nonzero entries of any local solution of the smoothed quantile regression with the nonconvex $ \ell_p $ penalty. Then, with the help of $ \ell_1 $ regularization, we propose a smoothing iterative method for the smoothed quantile regression with the weighted $ \ell_1 $ penalty, establish its global convergence, and illustrate its efficiency with numerical experiments.
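For orientation, the display below records the standard quantile check function of Koenker and Bassett [13] together with one uniform smoothing of the plus function. This particular smoothing is an assumption for illustration only: the paper's exact $\rho_{\tau,1}$ and $\rho_{\tau,2}$ are defined in the article body, not reproduced on this page.

```latex
\rho_\tau(\theta)=\theta\bigl(\tau-\mathbf{1}\{\theta<0\}\bigr)
                 =\tau\theta+(-\theta)_+,
\qquad
\phi(z,\delta)=
\begin{cases}
0, & z\le-\delta/2,\\
(z+\delta/2)^2/(2\delta), & |z|\le\delta/2,\\
z, & z\ge\delta/2,
\end{cases}
```

so that $\tau\theta+\phi(-\theta,\delta)$ is a smooth surrogate for $\rho_\tau(\theta)$ with uniform approximation error at most $\delta/8$ (attained at $\theta=0$).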

Citation: Lianjun Zhang, Lingchen Kong, Yan Li, Shenglong Zhou. A smoothing iterative method for quantile regression with nonconvex $ \ell_p $ penalty. Journal of Industrial and Management Optimization, 2017, 13 (1) : 93-112. doi: 10.3934/jimo.2016006
References:
[1]

A. Y. Aravkin, A. Kambadur, A. C. Lozano and R. Luss, Sparse quantile Huber regression for efficient and robust estimation, preprint, arXiv: 1402.4624.

[2]

P. Bühlmann and S. van de Geer, Statistics for High-Dimensional Data: Methods, Theory and Applications, Springer Science & Business Media, 2011. doi: 10.1007/978-3-642-20192-9.

[3]

E. J. Candès, M. B. Wakin and S. P. Boyd, Enhancing sparsity by reweighted $\ell_1$ minimization, Journal of Fourier Analysis and Applications, 14 (2008), 877-905. doi: 10.1007/s00041-008-9045-x.

[4]

T. T. Cai, L. Wang and G. W. Xu, Shifting inequality and recovery of sparse signals, IEEE Trans. Signal Process., 58 (2010), 1300-1308. doi: 10.1109/TSP.2009.2034936.

[5]

X. J. Chen, F. M. Xu and Y. Y. Ye, Lower bound theory of nonzero entries in solutions of $\ell_2$-$\ell_p$ minimization, SIAM J. Sci. Comput., 32 (2010), 2832-2852. doi: 10.1137/090761471.

[6]

X. J. Chen and W. J. Zhou, Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization, SIAM J. Imaging Sci., 3 (2010), 765-790.  doi: 10.1137/080740167.

[7]

T. T. Cai and A. Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery, Applied and Computational Harmonic Analysis, 35 (2013), 74-93.  doi: 10.1016/j.acha.2012.07.010.

[8]

D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization, PNAS, 100 (2003), 2197-2202.  doi: 10.1073/pnas.0437847100.

[9]

I. Daubechies, R. DeVore, M. Fornasier and C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery, Communications on Pure and Applied Mathematics, 63 (2010), 1-38. doi: 10.1002/cpa.20303.

[10]

F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York, 2003.

[11]

J. Q. Fan, F. Han and H. Liu, Challenges of big data analysis, National Science Review, 1 (2014), 293-314. doi: 10.1093/nsr/nwt032.

[12]

J. Q. Fan, Y. Y. Fan and E. Barut, Adaptive robust variable selection, Annals of Statistics, 42 (2014), 324-351. doi: 10.1214/13-AOS1191.

[13]

R. Koenker and G. Bassett, Regression quantiles, Econometrica, 46 (1978), 33-50.  doi: 10.2307/1913643.

[14]

R. Koenker, Quantile Regression, Cambridge University Press, 2005. doi: 10.1017/CBO9780511754098.

[15]

M. J. Lai and J. Wang, An unconstrained $\ell_q$ minimization with $0 < q \leq 1$ for sparse solution of underdetermined linear systems, SIAM J. Optim., 21 (2011), 82-101. doi: 10.1137/090775397.

[16]

L. C. Kong, J. Sun and N. H. Xiu, A regularized smoothing Newton method for symmetric cone complementarity problems, SIAM J. Optim., 19 (2008), 1028-1047. doi: 10.1137/060676775.

[17]

Z. S. Lu, Iterative reweighted minimization methods for $\ell_p$ regularized unconstrained nonlinear programming, Mathematical Programming, 147 (2014), 277-307. doi: 10.1007/s10107-013-0722-4.

[18]

L. Wang, Y. Wu and R. Li, Quantile regression for analyzing heterogeneity in ultra-high dimension, Journal of the American Statistical Association, 107 (2012), 214-222. doi: 10.1080/01621459.2012.656014.

[19]

L. Wang, The $L_1$ penalized LAD estimator for high dimensional linear regression, Journal of Multivariate Analysis, 120 (2013), 135-151.  doi: 10.1016/j.jmva.2013.04.001.

[20]

Y. B. Zhao and D. Li, Reweighted $\ell_1$-minimization for sparse solutions to underdetermined linear systems, SIAM J. Optim., 22 (2012), 1065-1088.  doi: 10.1137/110847445.

[21]

S. L. Zhou, N. H. Xiu, Y. N. Wang and L. C. Kong, Exact recovery for sparse signal via weighted $\ell_1$ minimization, preprint, arXiv: 1312.2358.

[22]

H. Zou, The adaptive lasso and its oracle properties, Journal of the American Statistical Association, 101 (2006), 1418-1429.  doi: 10.1198/016214506000000735.


Figure 1.  Function $\rho_\tau(\theta)$ and its smoothing relaxation $\rho_{\tau, 1}(\theta, \delta)$
Figure 2.  Function $\varrho_\tau(\theta, \delta)$ and its smoothing relaxation $\rho_{\tau, 2}(\theta, \delta)$
Figure 3.  Errors for different $\tau$, $n$, and noise in Example 1
Figure 4.  Errors for different $\tau$, $n$, and noise in Example 2
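The following short script reproduces the flavor of Figures 1 and 2: the check function $\rho_\tau$ against a smoothed relaxation. The uniform smoothing used here is an assumption for illustration; the paper's $\rho_{\tau,1}$ and $\rho_{\tau,2}$ may take a different form.

```python
# Sketch of Figure 1: check function rho_tau vs. a smoothed relaxation.
import numpy as np
import matplotlib.pyplot as plt

def check(theta, tau):
    return theta * (tau - (theta < 0))           # rho_tau(theta)

def smoothed_check(theta, tau, delta):
    # tau*theta + phi(-theta, delta), phi = uniform smoothing of (z)_+
    z = -theta
    phi = np.where(z <= -delta / 2, 0.0,
          np.where(z >= delta / 2, z, (z + delta / 2) ** 2 / (2 * delta)))
    return tau * theta + phi

theta = np.linspace(-2, 2, 400)
tau, delta = 0.3, 0.5
plt.plot(theta, check(theta, tau), label=r'$\rho_\tau(\theta)$')
plt.plot(theta, smoothed_check(theta, tau, delta), '--',
         label='smoothed relaxation')
plt.legend(); plt.xlabel(r'$\theta$'); plt.show()
```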
Table 1.  The framework of MIRL1
Modified iterative reweighted $ \ell_1 $ minimization (MIRL1)
Initialize $\tau, \gamma\in(0, 1), \delta^1>0, \epsilon>0, \beta^{0}, w^{1}, M, \mu^{1}$;
For $t=1: M$
     Initialize $\beta^{t, 1}=\beta^{t-1}$;
     While $\|\beta^{t, k+1}-\beta^{t, k}\|_2\geq\mu^t \max\left\{1, \|\beta^{t, k}\|_2\right\}$
          Compute $L_{t, k}\geq \frac{1}{2\delta^t }\min\Big\{\lambda_{\max}\Big(\sum_{\Omega\left(\beta^{t, k}, \delta^t \right)}x_{i}x_{i}^\top \Big), \lambda_{\max}(XX^\top )\Big\}$;
          Compute $\widetilde{\beta}^{t, k}=\beta^{t, k}-\nabla S_\tau(\beta^{t, k}, \delta^t )/(L_{t, k}+\epsilon)$;
          Compute $\beta^{t, k+1}=\text{sign}\left(\widetilde{\beta}^{t, k}\right)\circ\max\left\{|\widetilde{\beta}^{t, k}|-\frac{\mu^t }{L_{t, k}+\epsilon}w^t, 0\right\}$.
     End
     Update $\delta^{t+1}=\gamma\delta^t $ and $\beta^t =\beta^{t, k+1}$;
     Update $w^{t+1}$ from $\beta^{t-1}$ and $\beta^t $ based on (46), (47) and (48);
End
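To make Table 1 concrete, here is a minimal runnable sketch of the MIRL1 loop in NumPy. Two pieces are not reproduced on this page and are stand-in assumptions: the smoothed check-loss gradient uses the uniform smoothing sketched after the abstract (in place of the paper's $\rho_{\tau,1}$/$\rho_{\tau,2}$), and the weight update replaces (46)-(48) with a generic $\ell_p$-style reweighting $w_i = p/(|\beta_i|+\epsilon)^{1-p}$.

```python
# Minimal sketch of MIRL1 (Table 1); stand-in smoothing and weight update.
import numpy as np

def smoothed_check_grad(r, tau, delta):
    """Derivative in theta of tau*theta + phi(-theta, delta), with phi a
    uniform smoothing of (z)_+ of width delta (assumed form)."""
    z = -r
    return tau - np.clip((z + delta / 2.0) / delta, 0.0, 1.0)

def mirl1(X, y, tau=0.5, gamma=0.5, delta=1.0, mu=1e-3, p=0.5,
          M=10, eps=1e-6, max_inner=500):
    m, n = X.shape
    beta = np.zeros(n)          # beta^0
    w = np.ones(n)              # w^1
    for t in range(M):
        # stepsize bound; uses the simpler lambda_max(X X^T) term of Table 1
        L = np.linalg.norm(X, 2) ** 2 / (2.0 * delta) + eps
        for _ in range(max_inner):
            grad = -X.T @ smoothed_check_grad(y - X @ beta, tau, delta)
            beta_tilde = beta - grad / L
            # weighted soft-thresholding: prox of the weighted ell_1 penalty
            beta_new = np.sign(beta_tilde) * np.maximum(
                np.abs(beta_tilde) - mu * w / L, 0.0)
            done = (np.linalg.norm(beta_new - beta)
                    < mu * max(1.0, np.linalg.norm(beta)))
            beta = beta_new
            if done:
                break
        delta *= gamma                                # delta^{t+1} = gamma * delta^t
        w = p / (np.abs(beta) + eps) ** (1.0 - p)     # hypothetical reweighting
    return beta
```

The values of gamma, mu and M above are placeholders rather than the paper's tuned settings; any of the quantities reported in Tables 2-5 can be computed from the returned beta.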
Table 2.  $n=512, m=128$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0063 | 0.0078 | 0 | 1.0000 | 2.7474
0.1 | $LN(0, \sigma^2)$ | 0.0146 | 0.0177 | 0 | 0.9429 | 3.0230
0.3 | $N(0, \sigma^2)$ | 0.0067 | 0.0091 | 0 | 1.0000 | 2.4950
0.3 | $LN(0, \sigma^2)$ | 0.0167 | 0.0209 | 0 | 0.8762 | 2.2424
0.5 | $N(0, \sigma^2)$ | 0.0082 | 0.0097 | 0 | 1.0000 | 2.2793
0.5 | $LN(0, \sigma^2)$ | 0.0175 | 0.0201 | 0 | 0.8563 | 2.1709
0.7 | $N(0, \sigma^2)$ | 0.0069 | 0.0093 | 0 | 1.0000 | 2.4489
0.7 | $LN(0, \sigma^2)$ | 0.0208 | 0.0258 | 0 | 0.8028 | 2.4478
0.9 | $N(0, \sigma^2)$ | 0.0062 | 0.0077 | 0 | 1.0000 | 2.6616
0.9 | $LN(0, \sigma^2)$ | 0.0243 | 0.0299 | 0 | 0.6254 | 2.7624
Table 3.  $n=1024, m=256$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0069 | 0.0091 | 0 | 1.0000 | 13.3403
0.1 | $LN(0, \sigma^2)$ | 0.0194 | 0.0257 | 0 | 0.9818 | 15.4523
0.3 | $N(0, \sigma^2)$ | 0.0070 | 0.0093 | 0 | 1.0000 | 10.1645
0.3 | $LN(0, \sigma^2)$ | 0.0193 | 0.0244 | 0 | 1.0000 | 11.2862
0.5 | $N(0, \sigma^2)$ | 0.0076 | 0.0096 | 0 | 1.0000 | 11.4844
0.5 | $LN(0, \sigma^2)$ | 0.0201 | 0.0252 | 0 | 1.0000 | 11.0627
0.7 | $N(0, \sigma^2)$ | 0.0074 | 0.0097 | 0 | 1.0000 | 12.2611
0.7 | $LN(0, \sigma^2)$ | 0.0206 | 0.0264 | 0 | 0.9818 | 12.3306
0.9 | $N(0, \sigma^2)$ | 0.0070 | 0.0093 | 0 | 1.0000 | 13.6169
0.9 | $LN(0, \sigma^2)$ | 0.0226 | 0.0286 | 0 | 0.9538 | 14.1217
Table 4.  $n=512, m=128$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0071 | 0.0096 | 0 | 1.0000 | 3.3895
0.1 | $LN(0, \sigma^2)$ | 0.0185 | 0.0242 | 0 | 1.0000 | 2.5908
0.3 | $N(0, \sigma^2)$ | 0.0071 | 0.0098 | 0 | 1.0000 | 2.6341
0.3 | $LN(0, \sigma^2)$ | 0.0189 | 0.0246 | 0 | 1.0000 | 2.1094
0.5 | $N(0, \sigma^2)$ | 0.0071 | 0.0094 | 0 | 1.0000 | 1.7425
0.5 | $LN(0, \sigma^2)$ | 0.0184 | 0.0224 | 0 | 1.0000 | 1.3525
0.7 | $N(0, \sigma^2)$ | 0.0070 | 0.0095 | 0 | 1.0000 | 1.6148
0.7 | $LN(0, \sigma^2)$ | 0.0196 | 0.0249 | 0 | 0.9667 | 1.3640
0.9 | $N(0, \sigma^2)$ | 0.0072 | 0.0095 | 0 | 1.0000 | 2.3751
0.9 | $LN(0, \sigma^2)$ | 0.0186 | 0.0227 | 0 | 1.0000 | 2.8742
Table 5.  $n=1024, m=256$
$\tau$ | Noise | $\|\widehat{\beta}-\beta^*\|_2$ | $\frac{1}{\sqrt{m}}\|X\widehat{\beta}-X\beta^*\|_2$ | FPR | TPR | Time(s)
0.1 | $N(0, \sigma^2)$ | 0.0071 | 0.0096 | 0 | 1.0000 | 17.5535
0.1 | $LN(0, \sigma^2)$ | 0.0186 | 0.0246 | 0 | 1.0000 | 13.5658
0.3 | $N(0, \sigma^2)$ | 0.0074 | 0.0093 | 0 | 1.0000 | 13.3480
0.3 | $LN(0, \sigma^2)$ | 0.0196 | 0.0261 | 0 | 1.0000 | 8.8331
0.5 | $N(0, \sigma^2)$ | 0.0076 | 0.0094 | 0 | 1.0000 | 9.8347
0.5 | $LN(0, \sigma^2)$ | 0.0183 | 0.0241 | 0 | 1.0000 | 7.6007
0.7 | $N(0, \sigma^2)$ | 0.0069 | 0.0093 | 0 | 1.0000 | 13.6823
0.7 | $LN(0, \sigma^2)$ | 0.0200 | 0.0272 | 0 | 1.0000 | 7.9791
0.9 | $N(0, \sigma^2)$ | 0.0072 | 0.0094 | 0 | 1.0000 | 18.5440
0.9 | $LN(0, \sigma^2)$ | 0.0184 | 0.0249 | 0 | 1.0000 | 14.3975
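The experimental protocol behind Tables 2-5 (Examples 1 and 2) is not spelled out on this page. The sketch below shows one way the reported columns could be computed, assuming a generic sparse-recovery setup: Gaussian design, a $k$-sparse $\beta^*$, and $N(0, \sigma^2)$ or lognormal noise. It reuses the hypothetical mirl1() sketch given after Table 1; all constants are illustrative placeholders.

```python
# Hypothetical reconstruction of the metrics in Tables 2-5; the paper's
# Examples 1-2 may differ. Assumes mirl1() from the sketch after Table 1.
import numpy as np

def run_trial(n=512, m=128, k=10, sigma=0.01, tau=0.5,
              lognormal=False, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian design
    beta_star = np.zeros(n)
    support = rng.choice(n, k, replace=False)
    beta_star[support] = rng.standard_normal(k)     # k-sparse truth
    noise = (rng.lognormal(0.0, sigma, m) if lognormal
             else rng.normal(0.0, sigma, m))
    y = X @ beta_star + noise
    beta_hat = mirl1(X, y, tau=tau)
    est = np.abs(beta_hat) > 1e-4                   # estimated support
    true = beta_star != 0
    fpr = np.sum(est & ~true) / max(np.sum(~true), 1)   # false positive rate
    tpr = np.sum(est & true) / max(np.sum(true), 1)     # true positive rate
    return (np.linalg.norm(beta_hat - beta_star),
            np.linalg.norm(X @ (beta_hat - beta_star)) / np.sqrt(m),
            fpr, tpr)
```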