August 2012, 6(3): 447-464. doi: 10.3934/ipi.2012.6.447

Optimal estimation of $\ell_1$-regularization prior from a regularized empirical Bayesian risk standpoint

1. Shenzhen Key Lab of Visual Computing and Visual Analytics, Shenzhen Institute of Advanced Technology, Shenzhen, Guangdong, 518055, China
2. Department of Mathematics and Earth and Ocean Science, The University of British Columbia, Vancouver, BC, V6T 1Z2, Canada
3. Business Analytics and Mathematical Sciences, IBM T. J. Watson Research Center, Yorktown Heights, NY, 10598, United States

Received: March 2011. Revised: April 2012. Published: September 2012.

We address the problem of estimating the prior matrix for the solution of $\ell_1$-regularized ill-posed inverse problems. From a Bayesian viewpoint, we show that such a matrix can be regarded as an influence matrix in a multivariate $\ell_1$-Laplace density function. Assuming a training set is given, the prior matrix design problem is cast as a maximum-likelihood problem with an additional sparsity-inducing term. This formulation results in an unconstrained yet nonconvex optimization problem. Memory requirements, as well as computation of the nonlinear, nonsmooth sub-gradient equations, are prohibitive for large-scale problems. We therefore introduce an iterative algorithm to design efficient priors for such large problems. We further demonstrate that solutions of ill-posed inverse problems obtained by $\ell_1$-regularization with the learned prior matrix generally outperform those obtained with commonly used regularization techniques in which the prior matrix is chosen a priori.
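As a rough illustration of the class of problems addressed (this is not the authors' learning algorithm), the sketch below solves an $\ell_1$-regularized inverse problem $\min_x \|Ax-b\|^2 + \alpha\|Wx\|_1$, where $W$ plays the role of the prior (influence) matrix. The $\ell_1$ term is smoothed with a small parameter $\varepsilon$ so plain gradient descent applies; all matrices, parameter values, and the function name are hypothetical choices for the example.

```python
# Sketch: l1-regularized inversion  min_x ||A x - b||^2 + alpha * ||W x||_1,
# with the l1 term smoothed as sum_i sqrt((Wx)_i^2 + eps). Illustrative only;
# the paper's contribution is *learning* W, which is not shown here.
import numpy as np

def solve_l1_prior(A, b, W, alpha=0.1, eps=1e-3, iters=500):
    """Gradient descent on the smoothed objective with a Lipschitz-based step."""
    n = A.shape[1]
    x = np.zeros(n)
    # Conservative step size from a Lipschitz bound of the smoothed objective.
    L = 2 * np.linalg.norm(A, 2) ** 2 + alpha * np.linalg.norm(W, 2) ** 2 / np.sqrt(eps)
    step = 1.0 / L
    for _ in range(iters):
        r = A @ x - b                      # data residual
        z = W @ x                          # prior-transformed variable
        grad = 2 * A.T @ r + alpha * W.T @ (z / np.sqrt(z * z + eps))
        x -= step * grad
    return x

# Toy usage: recover a piecewise-constant signal from noisy underdetermined
# measurements, with W a first-difference operator promoting sparse gradients.
rng = np.random.default_rng(0)
n = 50
x_true = np.concatenate([np.zeros(25), np.ones(25)])
A = rng.standard_normal((40, n)) / np.sqrt(40)
b = A @ x_true + 0.01 * rng.standard_normal(40)
W = np.diff(np.eye(n), axis=0)             # (n-1) x n difference matrix
x_hat = solve_l1_prior(A, b, W, alpha=0.05)
```

A fixed difference operator $W$ corresponds to choosing the prior a priori; the paper's point is that replacing such a hand-picked $W$ with one estimated from training data generally improves reconstructions.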
Citation: Hui Huang, Eldad Haber, Lior Horesh. Optimal estimation of $\ell_1$-regularization prior from a regularized empirical Bayesian risk standpoint. Inverse Problems & Imaging, 2012, 6(3): 447-464. doi: 10.3934/ipi.2012.6.447


