
A Max-Min clustering method for $k$-means algorithm of data clustering

Abstract
  • It is well known that the performance of the $k$-means algorithm for data clustering depends largely on the choice of the starting centers, which the algorithm generally selects by a random procedure. To improve the efficiency of the $k$-means algorithm, this paper proposes a method for selecting good starting centers: the algorithm determines a max-min scale for each cluster of patterns and computes the max-min clustering centers from the norms of the points (see the illustrative sketch below). Experimental results show that the proposed algorithm yields good clustering performance.
    Mathematics Subject Classification: Primary: 68T10; Secondary: 90C27.
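
    Only the abstract is reproduced on this page, so the exact construction of the max-min scale is not spelled out. As a rough illustration only, the sketch below implements a generic max-min (farthest-first) choice of starting centers followed by standard $k$-means iterations; it is written in Python with NumPy, and the function names max_min_init and kmeans, the max-norm starting point, and all parameters are assumptions made for illustration rather than the authors' procedure.

```python
import numpy as np

def max_min_init(X, k):
    """Choose k starting centers by a generic max-min (farthest-first) rule.

    Assumption for illustration: the first center is the point with the
    largest norm, and each further center is the point whose minimum
    distance to the already-chosen centers is maximal.  This stands in for
    the paper's max-min scale, whose exact definition is not given here.
    """
    X = np.asarray(X, dtype=float)
    chosen = [int(np.argmax(np.linalg.norm(X, axis=1)))]
    # squared distance of every point to its nearest chosen center
    d2 = np.sum((X - X[chosen[0]]) ** 2, axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(d2))                     # farthest point from current centers
        chosen.append(nxt)
        d2 = np.minimum(d2, np.sum((X - X[nxt]) ** 2, axis=1))
    return X[chosen]

def kmeans(X, k, iters=100):
    """Standard Lloyd iterations started from the max-min centers."""
    X = np.asarray(X, dtype=float)
    C = max_min_init(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center, then move centers to the cluster means
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1), axis=1)
        new_C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                          for j in range(k)])
        if np.allclose(new_C, C):
            break
        C = new_C
    return C, labels

if __name__ == "__main__":
    # toy data: three Gaussian blobs in the plane
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ([0, 0], [3, 3], [0, 3])])
    centers, labels = kmeans(X, k=3)
    print(centers)
```

    A deterministic rule of this kind removes the run-to-run variance of random seeding, which is the efficiency issue the abstract points to.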

  • [1]

    V. S. Ananthanarayana, M. Narasimha Murty and D. K. Subramanian, Rapid and brief communication efficient clustering of large data sets, Pattern Recognition, 34 (2001), 2561-2563.

    [2]

    Sanghamitra Bandyopadhyay and Ujjwal Maulik, An evolutionary technique based on K-means algorithm for optimal clustering in $R^N$, Information Sciences, 146 (2002), 221-237.

    [3]

    Bjarni Bodvarsson, M. Morkebjerg, L. K. Hansen, G. M. Knudsen and C. Svarer, Extraction of time activity curves from positron emission tomography: K-means clustering or non-negative matrix factorization, NeuroImage, 31 (2006), 185-186.

    [4]

    Paul S. Bradley and Usama M. Fayyad, Refining max-min points for K-means clustering, in "Proc. 15th International Conf. on Machine Learning," Morgan Kaufmann, San Francisco, CA, (1998), 91-99.

    [5]

    P. S. Bradley, O. L. Mangasarian and W. N. Street, Clustering via concave minimization, in "Advances in Neural Information Systems," (eds. M. C. Mozer, M. I. Jordan and T. Petsche), MIT Press, Cambridge, MA, (1996), 368-374.

    [6]

    R. O. Duda, P. E. Hart and D. G. Stork, "Pattern Classification," second edition, Wiley-Interscience, New York, 2001.

    [7]

    David J. Hand and Wojtek J. Krzanowski, Optimising k-means clustering results with standard software packages, Computational Statistics & Data Analysis, 49 (2005), 969-973.

    [8]

    A. K. Jain and R. C. Dubes, "Algorithms for Clustering Data," Prentice Hall Advanced Reference Series, Prentice-Hall, Englewood Cliffs, NJ, 1988.

    [9]

    Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman and Angela Y. Wu, A local search approximation algorithm for k-means clustering, Computational Geometry, 28 (2004), 89-112.

    [10]

    Shehroz S. Khan and Amir Ahmad, Cluster center max-minization algorithm for K-means clustering, Pattern Recognition Letters, 25 (2004), 1293-1302.doi: 10.1016/j.patrec.2004.04.007.

    [11]

    R. J. Kuo, H. S. Wang, Tung-Lai Hu and S. H. Chou, Application of ant K-means on clustering analysis, Computers & Mathematics with Applications, 50 (2005), 1709-1724.

    [12]

    Youssef M. Marzouk and Ahmed F. Ghoniem, K-means clustering for optimal partitioning and dynamic load balancing of parallel hierarchical N-body simulations, Journal of Computational Physics, 207 (2005), 493-528.

    [13]

    Boris Mirkin, Clustering algorithms: A review, in "Mathematical Classification and Clustering," Chapter 3, Kluwer Academic Publishers, (1996), 109-169.

    [14]

    Boris Mirkin, K-means clustering, in "Clustering for Data Mining," Chapter 3, Taylor & Francis Group, (2005), 75-110.

    [15]

    Boris Mirkin, Concept learning and feature selection based on square-error clustering, Machine Learning, 35 (1999), 25-39.

    [16]

    D. J. Newman, S. Hettich, C. L. Blake and C. J. Merz, "UCI Repository of Machine Learning Databases," University of California, Department of Information and Computer Science, Irvine, CA, 1998. Available from: http://www.ics.uci.edu/~mlearn/MLRepository.html.

    [17]

    Makoto Otsubo, Katsushi Sato and Atsushi Yamaji, Computerized identification of stress tensors determined from heterogeneous fault-slip data by combining the multiple inverse method and k-means clustering, Journal of Structural Geology, 28 (2006), 991-997.

    [18]

    Georg Peters, Some refinements of rough k-means clustering, Pattern Recognition, 39 (2006), 1481-1491.

    [19]

    S. Z. Selim and M. A. Ismail, K-means type algorithms: A generalized convergence theorem and characterization of local optimality, IEEE Trans. Pattern Anal. Mach. Inteli, 6 (1984), 81-87.

    [20]

    Y. Yuan, J. Yan and C. Xu, Polynomial Smooth Support Vector Machine(PSSVM), Chinese Journal Of Computers, 28 (2005), 9-17.

    [21]

    Y. Yuan and T. Huang, A Polynomial Smooth Support Vector Machine for Classification, Lecture Note in Artificial Intelligence, 3584 (2005), 157-164.

    [22]

    Y. Yuan, W. G. Fan and D. M. Pu, Spline function smooth support vector machine for classification, Journal of Industrial Management and Optimization, 3 (2007), 529-542.

  • 加载中