
A new hybrid $ l_p $-$ l_2 $ model for sparse solutions with applications to image processing
1. Department of Mathematics, Shanghai University, Shanghai 200444, China
2. Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27695-7906, USA
3. School of Management Science and Engineering, Dongbei University of Finance and Economics, Dalian 116025, China
4. School of Mathematics, Physics and Statistics, Shanghai University of Engineering Science, Shanghai 201620, China
Finding sparse solutions to a linear system has many real-world applications. In this paper, we study a new hybrid of the $l_p$ quasi-norm ($0 < p < 1$) and the $l_2$ norm as an approximation of the $l_0$ norm, and we propose a corresponding model for sparse optimization. The optimality conditions of the proposed model are carefully analyzed and used to construct a partial linear approximation fixed-point algorithm, together with a proof of its convergence. Computational experiments on image recovery and deblurring problems confirm that the proposed model outperforms several state-of-the-art models in terms of signal-to-noise ratio and computational time.
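The abstract names the model but does not spell out its functional form. As a rough illustration only, the Python sketch below evaluates one generic way of combining an $l_p$ quasi-norm term with an $l_2$ term in a regularized least-squares objective. The particular combination, the weights `lam` and `beta`, and the name `hybrid_objective` are illustrative assumptions, not the formulation proposed in the paper.

```python
import numpy as np

def hybrid_objective(A, b, x, p=0.5, lam=0.1, beta=0.1):
    """Evaluate an illustrative hybrid l_p-l_2 regularized least-squares objective.

    Generic form assumed here: 0.5*||Ax - b||_2^2 + lam*(||x||_p^p + beta*||x||_2^2),
    with 0 < p < 1 so the l_p term promotes sparsity. This is NOT the paper's
    exact model, only a sketch of the kind of objective involved.
    """
    residual = A @ x - b
    data_fit = 0.5 * np.dot(residual, residual)
    lp_term = np.sum(np.abs(x) ** p)   # l_p quasi-norm raised to the p-th power
    l2_term = np.dot(x, x)             # squared l_2 norm
    return data_fit + lam * (lp_term + beta * l2_term)

# Tiny usage example on a random under-determined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = 1.0                       # sparse ground truth
b = A @ x_true
print(hybrid_objective(A, b, x_true))  # data-fit term is zero for the true signal
```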






Comparison of methods (SNR, time, and iteration count Iter) on three test instances:

| Method | SNR | time | Iter | SNR | time | Iter | SNR | time | Iter |
| Soft | 136.60 | 77.70 | 813 | 122.18 | 103.09 | 1000 | 89.72 | 94.73 | 1000 |
| Half | 173.20 | 10.20 | 94 | 169.09 | 15.87 | 139 | 161.68 | 33.04 | 306 |
| PQA | 160.46 | 18.36 | 183 | 155.84 | 27.23 | 285 | 52.80 | 42.77 | 438 |
| H | 173.80 | 10.06 | 94 | 169.43 | 13.63 | 135 | 162.50 | 28.59 | 298 |
|   | 173.53 | 11.10 | 94 | 169.25 | 13.95 | 137 | 161.76 | 29.70 | 301 |
|   | 173.92 | 9.66 | 97 | 169.64 | 13.91 | 143 | 162.10 | 31.96 | 322 |
| PLAFPA | 175.08 | 6.19 | 76 | 170.14 | 8.63 | 113 | 163.09 | 21.10 | 256 |
|   | 175.53 | 6.36 | 77 | 171.70 | 9.57 | 113 | 163.14 | 20.31 | 251 |
|   | 175.35 | 6.30 | 80 | 171.58 | 9.86 | 121 | 163.99 | 21.98 | 279 |
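The SNR columns can be read with the usual reconstruction signal-to-noise ratio. The sketch below uses a common definition, SNR = 20 log10(||x||_2 / ||x - x_hat||_2) in dB; the exact convention used in the paper is not stated in this excerpt, so treat the definition as an assumption.

```python
import numpy as np

def snr(x_true, x_rec):
    """Reconstruction SNR in dB under a common convention:
    SNR = 20 * log10(||x_true||_2 / ||x_true - x_rec||_2).
    The paper's exact SNR definition is not given here."""
    err = np.linalg.norm(x_true - x_rec)
    if err == 0.0:
        return np.inf                 # perfect reconstruction
    return 20.0 * np.log10(np.linalg.norm(x_true) / err)

# Example: a reconstruction with 1% relative error gives 40 dB.
x = np.ones(100)
x_hat = 1.01 * x
print(snr(x, x_hat))                  # 40.0
```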
SNR and time for the "Cameraman" and "Lena" images as the parameter t varies:

| t | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 |
| "Cameraman" SNR | 12.10 | 12.80 | 13.05 | 13.15 | 13.18 | 13.19 | 13.17 | 13.15 | 13.12 | 13.09 |
| "Cameraman" time | 10.47 | 12.53 | 14.09 | 15.46 | 15.92 | 17.11 | 17.68 | 18.31 | 19.26 | 20.62 |
| "Lena" SNR | 13.96 | 14.69 | 15.03 | 15.17 | 15.21 | 15.19 | 15.15 | 15.09 | 15.03 | 14.97 |
| "Lena" time | 49.38 | 54.88 | 60.33 | 64.82 | 68.99 | 72.83 | 76.09 | 78.31 | 80.59 | 83.25 |
SNR and time at different iteration counts for each method:

| Iter | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 |
| Soft SNR | 11.84 | 12.30 | 12.46 | 12.51 | 12.52 | 12.52 | 12.50 | 12.49 | 12.47 | 12.45 |
| Soft time | 1.68 | 2.75 | 3.98 | 4.95 | 6.12 | 7.29 | 8.43 | 9.51 | 10.57 | 11.84 |
| Half SNR | 11.37 | 11.55 | 11.55 | 11.51 | 11.46 | 11.41 | 11.36 | 11.31 | 11.27 | 11.23 |
| Half time | 3.67 | 5.24 | 7.68 | 10.30 | 13.50 | 15.88 | 18.47 | 20.42 | 23.21 | 25.10 |
| PQA SNR | 11.98 | 12.55 | 12.74 | 12.77 | 12.72 | 12.64 | 12.54 | 12.42 | 12.30 | 12.18 |
| PQA time | 1.92 | 2.67 | 3.91 | 5.13 | 6.20 | 7.46 | 9.76 | 9.96 | 11.27 | 13.58 |
| H SNR | 11.99 | 12.60 | 12.81 | 12.88 | 12.87 | 12.82 | 12.76 | 12.68 | 12.59 | 12.50 |
| H time | 6.25 | 11.89 | 16.77 | 21.83 | 26.35 | 29.46 | 34.17 | 38.54 | 42.34 | 46.18 |
| PLAFPA SNR | 11.87 | 12.56 | 12.88 | 13.05 | 13.15 | 13.20 | 13.22 | 13.28 | 13.22 | 13.22 |
| PLAFPA time | 5.30 | 8.71 | 11.65 | 14.87 | 15.92 | 19.57 | 22.27 | 24.01 | 28.26 | 29.52 |
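As context for the "Soft" baseline, which presumably denotes iterative soft-thresholding for $l_1$-regularized least squares, here is a minimal ISTA-style sketch. The function names, the fixed step size 1/L, and the flat iteration count are standard textbook choices and assumptions, not the implementation used in these experiments.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding (shrinkage) operator, the proximal map of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=1000):
    """Minimal ISTA sketch for 0.5*||Ax - b||_2^2 + lam*||x||_1.

    Assumed here to correspond to the 'Soft' baseline in the tables above;
    the step-size rule and stopping criterion of the actual experiments are
    not given in this excerpt.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```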
SNR and time at different iteration counts for each method (second set of results):

| Iter | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 |
| Soft SNR | 14.86 | 14.81 | 14.67 | 14.53 | 14.43 | 14.35 | 14.29 | 14.23 | 14.18 | 14.13 |
| Soft time | 7.68 | 15.21 | 21.94 | 27.80 | 37.62 | 42.08 | 48.75 | 56.46 | 63.33 | 69.26 |
| Half SNR | 14.02 | 13.85 | 13.70 | 13.55 | 13.43 | 13.34 | 13.26 | 13.19 | 13.13 | 13.07 |
| Half time | 12.50 | 24.50 | 35.50 | 47.05 | 59.63 | 71.80 | 83.47 | 94.40 | 107.80 | 118.27 |
| PQA SNR | 15.13 | 15.15 | 14.82 | 14.41 | 14.00 | 13.62 | 13.27 | 12.94 | 12.65 | 12.36 |
| PQA time | 7.63 | 14.91 | 22.60 | 29.37 | 33.68 | 43.74 | 52.27 | 57.88 | 65.31 | 73.68 |
| H SNR | 15.18 | 15.28 | 15.02 | 14.69 | 14.35 | 14.02 | 13.72 | 13.44 | 13.18 | 12.94 |
| H time | 27.11 | 47.28 | 66.64 | 85.65 | 95.97 | 123.68 | 143.52 | 159.69 | 176.60 | 192.09 |
| PLAFPA SNR | 15.13 | 15.45 | 15.43 | 15.33 | 15.21 | 15.09 | 14.96 | 14.84 | 14.73 | 14.62 |
| PLAFPA time | 24.76 | 38.04 | 50.43 | 62.75 | 68.99 | 85.64 | 95.82 | 105.12 | 116.96 | 120.00 |