A new semi-supervised classifier based on maximum vector-angular margin

  • Corresponding author: Liming Yang
Abstract
  • Semi-supervised learning is an attractive approach to classification problems when labeled training information is insufficient. In this investigation, a new semi-supervised classifier based on the concept of maximum vector-angular margin, called S$^3$MAMC, is proposed. Its main goal is to find an optimal vector $c$ as close as possible to the center of the dataset consisting of both labeled and unlabeled samples, which gives S$^3$MAMC better generalization through a smaller VC (Vapnik-Chervonenkis) dimension. However, the S$^3$MAMC formulation is a non-convex model and is therefore difficult to solve. We then present two optimization algorithms, a mixed integer quadratic program (MIQP) and a DC (difference of convex functions) programming algorithm, to solve the S$^3$MAMC. Numerical experiments on real and synthetic databases demonstrate that S$^3$MAMC improves generalization over supervised learning methods when the labeled samples are relatively few. In addition, S$^3$MAMC achieves competitive generalization results compared to traditional semi-supervised classification methods.

    Mathematics Subject Classification: Primary: 65K05, 90C26; Secondary: 90C11, 78M50.

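    The abstract states that the non-convex S$^3$MAMC model is solved either as an MIQP or by a DC programming (DCA) scheme. As a purely illustrative sketch of the generic DCA iteration (not the paper's actual S$^3$MAMC subproblems), the following Python snippet minimizes a toy DC objective $f(x) = g(x) - h(x)$ by repeatedly linearizing $h$ at the current iterate and solving the resulting convex subproblem; the particular choices of $g$, $h$ and the starting point are assumptions made for illustration only.

        import numpy as np

        # Toy DC decomposition f(x) = g(x) - h(x), both parts convex (illustrative choice):
        #   g(x) = ||x - a||^2   and   h(x) = ||x||_2
        a = np.array([2.0, -1.0])

        def solve_convex_subproblem(y):
            # argmin_x g(x) - <y, x>: setting the gradient 2(x - a) - y to zero gives x = a + y/2
            return a + y / 2.0

        def subgrad_h(x):
            # A subgradient of h(x) = ||x||_2 (take 0 at the origin)
            n = np.linalg.norm(x)
            return x / n if n > 0 else np.zeros_like(x)

        x = np.zeros(2)                          # starting point x_0
        for k in range(100):                     # DCA outer loop
            y = subgrad_h(x)                     # y_k in the subdifferential of h at x_k
            x_next = solve_convex_subproblem(y)  # x_{k+1} = argmin_x g(x) - <y_k, x>
            if np.linalg.norm(x_next - x) < 1e-8:
                x = x_next
                break
            x = x_next

        print("DCA converged to", x, "after", k + 1, "iterations")

    Each DCA step replaces the concave part $-h$ by its affine majorization at $x_k$, so every subproblem is convex and the objective values are non-increasing; the same outer loop structure applies whatever convex subproblem solver is used.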
  • Figure 1.  ACC versus $\mu$ for $\nu = 1$ and $\nu = 10$ on the Thyroid data

    Figure 2.  Training samples of the synthetic data

    Figure 3.  Comparison of DCA-S$^3$MAMC and MIQP-S$^3$MAMC in terms of CPU time

    Figure 4.  Comparison of DCA-S$^3$MAMC and MIQP-S$^3$MAMC in terms of ACC

    Table 1.  Comparison of S$^3$MAMC, MAMC and $\nu$-SVM with the ratio of labeled to unlabeled samples being 2:8 in terms of generalization performance

    Dataset (size)                  Classification model   G-ACC (%)   ACC (%)   MCC (%)   $F_1$-measure (%)
    Wine $(107 \times 13)$          DCA-S$^3$MAMC          100         100       100       100
                                    MIQP-S$^3$MAMC         100         100       100       100
                                    MAMC                   94.31       94.39     89.06     94.16
                                    $\nu$-SVM              99.30       99.30     98.62     99.31
    Thyroid $(65 \times 5)$         DCA-S$^3$MAMC          99.81       99.81     99.63     99.81
                                    MIQP-S$^3$MAMC         95.55       95.65     91.64     95.45
                                    MAMC                   94.87       95.00     90.45     95.24
                                    $\nu$-SVM              87.94       88.34     77.76     89.23
    Cancer $(569 \times 30)$        DCA-S$^3$MAMC          92.69       93.76     86.73     93.49
                                    MIQP-S$^3$MAMC         96.35       96.35     91.20     96.16
                                    MAMC                   92.26       93.36     86.03     93.02
                                    $\nu$-SVM              91.27       91.46     85.30     90.93
    Sonar $(208 \times 60)$         DCA-S$^3$MAMC          62.40       62.40     24.81     62.65
                                    MIQP-S$^3$MAMC         62.42       63.02     26.44     65.98
                                    MAMC                   60.11       60.41     20.97     62.67
                                    $\nu$-SVM              60.11       60.19     20.40     58.99
    Ionosphere $(350 \times 34)$    DCA-S$^3$MAMC          86.65       87.61     75.00     87.98
                                    MIQP-S$^3$MAMC         85.34       86.69     75.91     88.20
                                    MAMC                   80.94       81.25     63.14     82.49
                                    $\nu$-SVM              86.34       86.74     74.51     87.76
    Hepatitis $(155 \times 19)$     DCA-S$^3$MAMC          72.82       73.66     48.51     76.28
                                    MIQP-S$^3$MAMC         78.26       78.65     58.00     80.19
                                    MAMC                   70.60       71.76     45.04     74.98
                                    $\nu$-SVM              70.00       71.21     43.93     74.53
    Heart $(155 \times 19)$         DCA-S$^3$MAMC          86.39       86.40     72.80     86.54
                                    MIQP-S$^3$MAMC         84.72       84.79     69.76     85.31
                                    MAMC                   81.82       81.86     63.83     82.35
                                    $\nu$-SVM              84.70       84.72     69.46     84.95
    Vote $(432 \times 16)$          DCA-S$^3$MAMC          94.78       94.81     89.69     94.91
                                    MIQP-S$^3$MAMC         95.61       95.62     91.29     95.69
                                    MAMC                   90.11       90.29     81.07     90.80
                                    $\nu$-SVM              94.51       94.54     89.09     94.60
    Synthesis $(200 \times 2)$      DCA-S$^3$MAMC          92.50       92.50     85.00     92.54
                                    MIQP-S$^3$MAMC         93.25       93.25     86.50     93.23
                                    MAMC                   86.45       86.49     73.07     86.82
                                    $\nu$-SVM              82.50       82.50     65.00     82.41
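    The ACC, MCC and $F_1$-measure columns in Tables 1-3 are the standard binary classification metrics computed from the confusion matrix. As a quick reference, the following minimal Python sketch computes them from true and predicted $\pm 1$ labels; the function and variable names are illustrative, and the paper's G-ACC definition is not reproduced here.

        import numpy as np

        def binary_metrics(y_true, y_pred):
            """Compute ACC, MCC and F1 for labels in {-1, +1} (illustrative helper)."""
            y_true = np.asarray(y_true)
            y_pred = np.asarray(y_pred)
            tp = np.sum((y_true == 1) & (y_pred == 1))
            tn = np.sum((y_true == -1) & (y_pred == -1))
            fp = np.sum((y_true == -1) & (y_pred == 1))
            fn = np.sum((y_true == 1) & (y_pred == -1))

            acc = (tp + tn) / len(y_true)
            denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
            f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) > 0 else 0.0
            return acc, mcc, f1

        # Example on a toy prediction vector
        acc, mcc, f1 = binary_metrics([1, 1, -1, -1, 1], [1, -1, -1, -1, 1])
        print(f"ACC={acc:.2%}  MCC={mcc:.2%}  F1={f1:.2%}")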

    Table 2.  Comparison of S$^3$MAMC, MAMC and $\nu$-SVM with the ratio of labeled to unlabeled samples being 1:9 in terms of accuracy (ACC)

    Dataset       DCA-S$^3$MAMC (%)   MAMC (%)   $\nu$-SVM (%)
    Thyroid       92.59               85.19      86.29
    Ionosphere    83.37               71.43      71.74
    Sonar         60.45               55.56      53.89
    Cancer        93.15               89.88      84.52
    Heart         85.19               74.31      75.49
    Hepatitis     73.33               64.44      71.11
    Vote          93.65               86.95      89.68
    Synthesis     91.56               70.39      63.66
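    Tables 1 and 2 report results for labeled-to-unlabeled ratios of 2:8 and 1:9. The paper's exact splitting protocol is not described on this page, so the snippet below is only an assumed illustration of how such a random partition can be produced; the helper name and the fixed seed are hypothetical.

        import numpy as np

        def split_labeled_unlabeled(X, y, labeled_fraction=0.2, seed=0):
            """Randomly mark a fraction of samples as labeled; the rest become unlabeled.

            Illustrative helper only; the paper's actual splitting protocol may differ.
            """
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(y))
            n_labeled = int(round(labeled_fraction * len(y)))
            labeled_idx, unlabeled_idx = idx[:n_labeled], idx[n_labeled:]
            return (X[labeled_idx], y[labeled_idx]), X[unlabeled_idx]

        # Example with random data: a 2:8 labeled-to-unlabeled split
        X = np.random.randn(100, 5)
        y = np.sign(np.random.randn(100))
        (X_l, y_l), X_u = split_labeled_unlabeled(X, y, labeled_fraction=0.2)
        print(X_l.shape, X_u.shape)   # (20, 5) (80, 5)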

    Table 3.  Comparison of S$^3$MAMC with other semi-supervised learning methods in terms of accuracy (ACC)

    Dataset       MIQP-S$^3$MAMC (%)   DCA-S$^3$MAMC (%)   MILP-S$^3$VM (%)   VS$^3$VM (%)
    Ionosphere    86.69                87.61               89.40              87.36
    Sonar         63.02                62.40               78.10              66.12
    Cancer        96.35                93.76               96.60              97.46
    Heart         84.79                86.40               84.00              84.70
    Hepatitis     78.65                73.66               70.36              65.13
    Synthesis     93.25                92.50               81.11              85.67
