Sparse regularized learning in reproducing kernel Banach spaces with the $ \ell^1 $ norm

  • * Corresponding author: Rongrong Lin
Abstract
  • We present a sparse representer theorem for regularization networks in a reproducing kernel Banach space with the $ \ell^1 $ norm, using the theory of convex analysis. The theorem states that the extreme points of the solution set of regularization networks in such a sparsity-promoting space lie in the span of kernel functions centered at no more than $ n $ adaptive points of the input space, where $ n $ is the number of training samples. Under Lebesgue constant assumptions on the reproducing kernels, we recover the relaxed representer theorem and the exact representer theorem for that space known in the literature. Finally, we perform numerical experiments on synthetic and real-world benchmark data in reproducing kernel Banach spaces with the $ \ell^1 $ norm and in reproducing kernel Hilbert spaces, both with Laplacian kernels. The numerical results demonstrate the advantages of sparse regularized learning.

    Mathematics Subject Classification: Primary: 46E22, 62G08; Secondary: 68Q32.
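The coefficient-based $ \ell^1 $-regularized learning scheme the abstract describes can be sketched in code. The following is a minimal illustration only, not the paper's models (12) and (13), which are defined in the full text: it solves $ \min_c \frac{1}{2n}\|Kc-y\|_2^2+\lambda\|c\|_1 $ for a Laplacian-kernel Gram matrix $ K $ by iterative soft thresholding (ISTA); the regularization weight, step count, and synthetic data are arbitrary choices for the demonstration. The $ \ell^1 $ penalty typically drives most coefficients to zero, so the learned function is supported on few kernel centers.

```python
import numpy as np

def laplacian_gram(A, B):
    # Gram matrix of the Laplacian kernel e^{-||x - x'||_2} for all pairs
    return np.exp(-np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2))

def l1_kernel_regression(X, y, lam=0.05, iters=2000):
    # ISTA for min_c 0.5/n * ||K c - y||_2^2 + lam * ||c||_1
    n = len(y)
    K = laplacian_gram(X, X)
    step = n / np.linalg.norm(K, 2) ** 2      # 1 / Lipschitz constant of the gradient
    c = np.zeros(n)
    for _ in range(iters):
        grad = K.T @ (K @ c - y) / n
        z = c - step * grad
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 2))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(60)
c = l1_kernel_regression(X, y)
print(f"{np.count_nonzero(np.abs(c) > 1e-8)} of {len(c)} coefficients active")
```

Because ISTA is monotone, the fitted residual never exceeds that of the zero solution, while the soft-thresholding step is what produces the sparsity in $ c $.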

  • Figure 1.  Numerical results of models (12) and (13) for the Tai Chi data set are illustrated in Figure 1(a) and Figure 1(b), respectively.

    Figure 2.  Numerical results of models (12) and (13) for the second data set are illustrated in Figure 2(a) and Figure 2(b), respectively.

    Table 1.  Lebesgue constants of the Laplacian kernel $ e^{-\|x-x'\|_2} $ on a set of $ n $ grid points of $ [-1, 1]^2 $

    n                  100       400       900       1600      2500
    Lebesgue constant  1.237204  1.244770  1.249653  1.246516  1.246808

    Table 2.  Lebesgue constants of the Laplacian kernel $ e^{-\|x-x'\|_2} $, $ x, x'\in{\mathbb R}^3 $ on a set of $ n $ grid points of $ [-1, 1]^3 $

    n                  125       1000      1728      2197      3375
    Lebesgue constant  1.624012  1.705158  1.709775  1.711243  1.712867
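The Lebesgue constants reported in Tables 1 and 2 can be estimated with a short script. The sketch below is our illustration, not the authors' code: it forms the Laplacian-kernel Gram matrix $ K $ on a grid of nodes, computes the cardinal functions $ u(x) = K^{-1}k(x) $, and maximizes $ \sum_j |u_j(x)| $ over a finite evaluation set. A coarse $ 5\times 5 $ grid ($ n = 25 $, smaller than any column of Table 1) keeps the run cheap; the evaluation set includes the nodes themselves, where the sum equals 1 exactly.

```python
import numpy as np

def laplacian_gram(A, B):
    # Gram matrix of the Laplacian kernel e^{-||x - x'||_2} for all pairs
    return np.exp(-np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2))

def lebesgue_constant(nodes, eval_pts):
    # Cardinal functions u(x) = K^{-1} k(x); the Lebesgue constant is
    # max_x sum_j |u_j(x)|, estimated over the finite set eval_pts.
    K = laplacian_gram(nodes, nodes)
    U = np.linalg.solve(K, laplacian_gram(eval_pts, nodes).T).T
    return np.abs(U).sum(axis=1).max()

# n = 5**2 = 25 grid points on [-1, 1]^2, coarser than the tables' grids
g = np.linspace(-1.0, 1.0, 5)
X1, X2 = np.meshgrid(g, g)
nodes = np.column_stack([X1.ravel(), X2.ravel()])

# Evaluate on the nodes plus random points of [-1, 1]^2
rng = np.random.default_rng(0)
eval_pts = np.vstack([nodes, rng.uniform(-1.0, 1.0, size=(2000, 2))])
L = lebesgue_constant(nodes, eval_pts)
print(f"estimated Lebesgue constant: {L:.4f}")
```

Sampling only finitely many evaluation points gives a lower estimate of the true maximum over $ [-1, 1]^2 $; a denser evaluation set tightens it.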