A SOFT SUBSPACE CLUSTERING ALGORITHM WITH LOG-TRANSFORMED DISTANCES

Entropy weighting used in some soft subspace clustering algorithms is sensitive to the scaling parameter. In this paper, we propose a novel soft subspace clustering algorithm that uses log-transformed distances in the objective function. The proposed algorithm allows users to choose a value of the scaling parameter easily because the entropy weighting in the proposed algorithm is less sensitive to the scaling parameter. In addition, the proposed algorithm is less sensitive to noise because a point far away from its cluster center is given a small weight in the cluster center calculation. Experiments on both synthetic datasets and real datasets demonstrate the performance of the proposed algorithm.

1. Introduction. In data clustering or cluster analysis, the goal is to divide a set of objects into homogeneous groups called clusters [10,18,20,26,12,1]. For high-dimensional data, clusters are usually formed in subspaces of the original data space, and different clusters may relate to different subspaces. To recover clusters embedded in subspaces, subspace clustering algorithms have been developed; see, for example, [2,15,19,17,9,21,16,22,3,25,7,11,13]. Subspace clustering algorithms can be classified into two categories: hard subspace clustering algorithms and soft subspace clustering algorithms.
In hard subspace clustering algorithms, the subspaces in which clusters embed are determined exactly. In other words, each attribute of the data is either associated with a cluster or not associated with the cluster. For example, the subspace clustering algorithms developed in [2] and [15] are hard subspace clustering algorithms. In soft subspace clustering algorithms, the subspaces of clusters are not determined exactly. Each attribute is associated with a cluster with some probability. If an attribute is important to the formation of a cluster, then the attribute is associated with the cluster with high probability. Examples of soft subspace clustering algorithms include [19], [9], [21], [16], and [13].
In soft subspace clustering algorithms, the attribute weights associated with clusters are automatically determined. In general, the weight of an attribute for a cluster is inversely proportional to the dispersion of the attribute in the cluster.

If the values of an attribute in a cluster are relatively compact, then the attribute will be assigned a relatively high weight. In the FSC algorithm [16], for example, the attribute weights are calculated as

$$ w_{lj} = \frac{\left(V_{lj} + \epsilon\right)^{-\frac{1}{\alpha-1}}}{\sum_{s=1}^{d}\left(V_{ls} + \epsilon\right)^{-\frac{1}{\alpha-1}}}, $$

where $\epsilon$ is a small positive number used to prevent division by zero, $\alpha > 1$ is a parameter used to control the smoothness of the attribute weights, and

$$ V_{lj} = \sum_{\mathbf{x}\in C_l}\left(x_j - z_{lj}\right)^2. $$

Here k is the number of clusters, d is the number of attributes, and $z_l$ is the center of the lth cluster $C_l$. In the EWKM algorithm [21], the attribute weights are calculated as

$$ w_{lj} = \frac{\exp\left(-V_{lj}/\gamma\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\gamma\right)}, $$

where $\gamma > 0$ is a parameter used to control the smoothness of the attribute weights. One drawback of the FSC algorithm is that a positive value of $\epsilon$ is required in order to prevent division by zero when an attribute has identical values in a cluster. Using the entropy weighting, the EWKM algorithm does not suffer from the problem of division by zero. However, the attribute weights calculated in the EWKM algorithm are sensitive to the parameter $\gamma$ when the range of the attribute dispersions (e.g., $V_{lj}$) in a cluster is large. For example, suppose that a dataset has two attributes, whose dispersions in a cluster are 10 and 30, respectively. If we use a small value of $\gamma$ such as $\gamma = 1$, the attribute weights will be $1/(1+e^{-20}) \approx 1$ and $1/(1+e^{20}) \approx 0$; that is, the weights are completely dominated by the first attribute. Even with a relatively large value such as $\gamma = 10$, the attribute weights will be $1/(1+e^{-2}) \approx 0.88$ and $1/(1+e^{2}) \approx 0.12$. From the above example we see that choosing an appropriate value for the parameter $\gamma$ is a difficult task when the range of the attribute dispersions in a cluster is large. Feature group weighting has been introduced to address this issue [7,14].
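To make the sensitivity concrete, the following minimal Java sketch (an illustration only, not the implementation used in our experiments; the class and method names are ours) computes the entropy weights for the two dispersions above under several values of γ:

```java
// Illustration: sensitivity of entropy weights to the scaling parameter gamma.
// The dispersions (10 and 30) follow the two-attribute example in the text.
public class EntropyWeightDemo {
    // Exponential normalization used by EWKM: w_j = exp(-V_j/gamma) / sum_s exp(-V_s/gamma).
    static double[] entropyWeights(double[] dispersions, double gamma) {
        double[] w = new double[dispersions.length];
        double sum = 0.0;
        for (int j = 0; j < dispersions.length; j++) {
            w[j] = Math.exp(-dispersions[j] / gamma);
            sum += w[j];
        }
        for (int j = 0; j < w.length; j++) {
            w[j] /= sum;
        }
        return w;
    }

    public static void main(String[] args) {
        double[] dispersions = {10.0, 30.0};
        for (double gamma : new double[] {1.0, 10.0, 100.0}) {
            double[] w = entropyWeights(dispersions, gamma);
            System.out.printf("gamma = %6.1f: w1 = %.6f, w2 = %.6f%n", gamma, w[0], w[1]);
        }
    }
}
```

Running the sketch gives weights of roughly (1, 2×10⁻⁹) for γ = 1, (0.88, 0.12) for γ = 10, and (0.55, 0.45) for γ = 100, which shows how slowly the weights equalize as γ grows.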
In this paper, we address the issue from a different perspective. Unlike the feature group weighting approach, the approach we employ in this paper uses the log transformation to transform the distances so that the attribute weights are not dominated by a single attribute with the smallest dispersion. In particular, we present a soft subspace clustering algorithm called the LEKM algorithm (log-transformed entropy weighting k-means) to address the aforementioned problem. The LEKM algorithm extends the EWKM algorithm by using log-transformed distances in its objective function. The resulting attribute dispersions in a cluster are more compact than those from the EWKM algorithm. Due to the small differences among the attribute dispersions, the LEKM algorithm is less sensitive to the scaling parameter than other soft subspace clustering algorithms are.
The remaining part of this paper is structured as follows. In Section 2, we give a brief review of the LAC algorithm [9] and the EWKM algorithm [21]. In Section 3, we present the LEKM algorithm in detail. In Section 4, we present numerical experiments to demonstrate the performance of the LEKM algorithm. Section 5 concludes the paper with some remarks.
2. Related work. In this section, we introduce the EWKM algorithm [21] and the LAC algorithm [9], which are soft subspace clustering algorithms using the entropy weighting.
2.1. The EWKM algorithm. Let $x_1, x_2, \ldots, x_n$ be n data points, each of which is described by d attributes. Let k be the desired number of clusters. Then the objective function of the EWKM algorithm is defined as follows [21]:

$$ P(U, W, Z) = \sum_{l=1}^{k}\left[\sum_{i=1}^{n}\sum_{j=1}^{d} u_{il} w_{lj}\left(x_{ij} - z_{lj}\right)^2 + \gamma\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right], \tag{4} $$

where $\gamma > 0$ is a parameter, $U = (u_{il})_{n\times k}$ is an $n \times k$ partition matrix, and $W = (w_{lj})_{k\times d}$ is a $k \times d$ weight matrix. In addition, the partition matrix U and the weight matrix W satisfy the following conditions:

$$ u_{il} \in \{0, 1\}, \quad i = 1, 2, \ldots, n, \; l = 1, 2, \ldots, k, \tag{5a} $$
$$ \sum_{l=1}^{k} u_{il} = 1, \quad i = 1, 2, \ldots, n, \tag{5b} $$
$$ \sum_{j=1}^{d} w_{lj} = 1, \quad l = 1, 2, \ldots, k, \tag{5c} $$
$$ w_{lj} > 0, \quad l = 1, 2, \ldots, k, \; j = 1, 2, \ldots, d. \tag{5d} $$

Like the k-means algorithm [23,4], the EWKM algorithm tries to minimize the objective function using an iterative process. At the beginning, the EWKM algorithm initializes the cluster centers by selecting k points from the dataset randomly and initializes the attribute weights with equal values. Then the EWKM algorithm keeps updating U, W, and Z one at a time by fixing the other two. Given W and Z, the partition matrix U is updated as

$$ u_{il} = \begin{cases} 1, & \text{if } \sum_{j=1}^{d} w_{lj}\left(x_{ij} - z_{lj}\right)^2 \le \sum_{j=1}^{d} w_{tj}\left(x_{ij} - z_{tj}\right)^2 \text{ for all } t = 1, 2, \ldots, k, \\ 0, & \text{otherwise}, \end{cases} $$

for $i = 1, 2, \ldots, n$ and $l = 1, 2, \ldots, k$. Given U and Z, the weight matrix W is updated as

$$ w_{lj} = \frac{\exp\left(-V_{lj}/\gamma\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\gamma\right)} $$

for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$, where

$$ V_{lj} = \sum_{i=1}^{n} u_{il}\left(x_{ij} - z_{lj}\right)^2. $$

Given U and W, the cluster centers are updated as

$$ z_{lj} = \frac{\sum_{i=1}^{n} u_{il} x_{ij}}{\sum_{i=1}^{n} u_{il}}. $$

The parameter $\gamma$ in the EWKM algorithm is used to control the smoothness of the attribute weights. If $\gamma$ approaches infinity, then all attributes have the same weights. In such cases, the EWKM algorithm becomes the standard k-means algorithm. Since the attribute weights are based on exponential normalization, the weights are sensitive to the parameter $\gamma$ when the attribute dispersions (e.g., $V_{lj}$) have a wide range.
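For concreteness, the following Java sketch (our illustration under the notation above, not the reference implementation of [21]) carries out one iteration of the three EWKM updates; the partition matrix U is stored as one cluster label per point, which encodes the same information:

```java
// Sketch of one EWKM iteration: update U, then W, then Z.
public class EwkmStep {
    static void iterate(double[][] x, int[] label, double[][] z, double[][] w, double gamma) {
        int n = x.length, d = x[0].length, k = z.length;
        // Update U given W and Z: assign each point to the center that
        // minimizes the weighted squared distance.
        for (int i = 0; i < n; i++) {
            int best = 0;
            double bestDist = Double.POSITIVE_INFINITY;
            for (int l = 0; l < k; l++) {
                double dist = 0.0;
                for (int j = 0; j < d; j++) {
                    double diff = x[i][j] - z[l][j];
                    dist += w[l][j] * diff * diff;
                }
                if (dist < bestDist) { bestDist = dist; best = l; }
            }
            label[i] = best;
        }
        // Update W given U and Z: exponential normalization of -V_lj / gamma.
        for (int l = 0; l < k; l++) {
            double[] v = new double[d];
            for (int i = 0; i < n; i++) {
                if (label[i] != l) continue;
                for (int j = 0; j < d; j++) {
                    double diff = x[i][j] - z[l][j];
                    v[j] += diff * diff;
                }
            }
            double sum = 0.0;
            for (int j = 0; j < d; j++) { w[l][j] = Math.exp(-v[j] / gamma); sum += w[l][j]; }
            for (int j = 0; j < d; j++) w[l][j] /= sum;
        }
        // Update Z given U and W: plain per-cluster means.
        for (int l = 0; l < k; l++) {
            double[] mean = new double[d];
            int size = 0;
            for (int i = 0; i < n; i++) {
                if (label[i] != l) continue;
                size++;
                for (int j = 0; j < d; j++) mean[j] += x[i][j];
            }
            if (size > 0) {
                for (int j = 0; j < d; j++) z[l][j] = mean[j] / size;
            }
        }
    }
}
```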
2.2. The LAC algorithm. The LAC algorithm (Locally Adaptive Clustering) [9] and the EWKM algorithm are similar soft subspace clustering algorithms in that both algorithms discover subspace clusters via exponential weighting of attributes. However, the LAC algorithm differs from the EWKM algorithm in the definition of the objective function. Clusters found by the LAC algorithm are referred to as weighted clusters. The objective function of the LAC algorithm is defined as

$$ P(C, W, Z) = \sum_{l=1}^{k}\sum_{j=1}^{d}\left(w_{lj} V_{lj} + h\, w_{lj}\ln w_{lj}\right), \tag{6} $$

where k is the number of clusters, d is the number of attributes, $Z = \{z_1, z_2, \ldots, z_k\}$ is a set of cluster centers, $W = (w_{lj})_{k\times d}$ is a weight matrix, $C = \{C_1, C_2, \ldots, C_k\}$ is a set of clusters, $h > 0$ is a parameter, and

$$ V_{lj} = \frac{1}{|C_l|}\sum_{\mathbf{x}\in C_l}\left(x_j - z_{lj}\right)^2. $$

The weight matrix also satisfies the conditions given in Equations (5c) and (5d).
Like the k-means algorithm and the EWKM algorithm, the LAC algorithm also employs an iterative process to optimize the objective function. Similar to the EWKM algorithm, the LAC algorithm initializes the cluster centers by selecting k points from the dataset randomly and initializes the attribute weights with equal values. Given the set of cluster centers Z and the weight matrix W, the clusters are determined as follows:

$$ C_l = \left\{\mathbf{x}\in X : \sum_{j=1}^{d} w_{lj}\left(x_j - z_{lj}\right)^2 \le \sum_{j=1}^{d} w_{tj}\left(x_j - z_{tj}\right)^2 \text{ for all } t = 1, 2, \ldots, k\right\} $$

for $l = 1, 2, \ldots, k$. Given the set of cluster centers Z and the set of clusters $\{C_1, C_2, \ldots, C_k\}$, the weight matrix is determined as follows:

$$ w_{lj} = \frac{\exp\left(-V_{lj}/h\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/h\right)} $$

for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$, where

$$ V_{lj} = \frac{1}{|C_l|}\sum_{\mathbf{x}\in C_l}\left(x_j - z_{lj}\right)^2. $$

Given the set of clusters $\{C_1, C_2, \ldots, C_k\}$, the cluster centers are updated as follows:

$$ z_{lj} = \frac{1}{|C_l|}\sum_{\mathbf{x}\in C_l} x_j $$

for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$. The runtime complexity of one iteration of the LAC algorithm is O(nkd).
Comparing Equation (6) with Equation (4), we see that the distances in the objective function of the LAC algorithm are normalized by the sizes of the corresponding clusters. As a result, the dispersions (i.e., $V_{lj}$) calculated in the LAC algorithm are smaller than those calculated in the EWKM algorithm. However, the dispersions calculated in the LAC algorithm can still have a wide range for small-sample, high-dimensional data such as gene expression data [8].
3. The LEKM algorithm. In this section, we present the LEKM algorithm. The LEKM algorithm is similar to the EWKM algorithm [21] and the LAC algorithm [9] in that the entropy weighting is used to determine the attribute weights.
Let $X = \{x_1, x_2, \ldots, x_n\}$ be a dataset containing n points, each of which is described by d numerical features or attributes. Let $Z = \{z_1, z_2, \ldots, z_k\}$ be a set of cluster centers, where k is the number of clusters. Then the objective function of the LEKM algorithm is defined as

$$ P(U, W, Z) = \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\left[\sum_{j=1}^{d} w_{lj}\ln\left(1 + \left(x_{ij} - z_{lj}\right)^2\right) + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right], \tag{10} $$

where $U = (u_{il})_{n\times k}$ is an $n \times k$ binary matrix satisfying Equations (5a) and (5b), $W = (w_{lj})_{k\times d}$ is a $k \times d$ weight matrix satisfying Equations (5c) and (5d), and $\lambda > 0$ is a parameter.
In the above equation, $x_{ij}$ and $z_{lj}$ denote the values of $x_i$ and $z_l$ in the jth attribute, respectively. The matrix U is the partition matrix in the following sense: if $u_{il} = 1$, then the point $x_i$ belongs to the lth cluster. The matrix W is the weight matrix containing the attribute weights. If $w_{lj}$ is relatively large, then the jth attribute is important for the formation of the lth cluster. Similar to the EWKM algorithm, the LEKM algorithm tries to minimize the objective function given in Equation (10) iteratively by finding the optimal values of U, W, and Z according to the following theorems.
Theorem 3.1. Let W and Z be fixed. Then the partition matrix U that minimizes the objective function P(U, W, Z) is given by

$$ u_{il} = \begin{cases} 1, & \text{if } D(i, l) \le D(i, t) \text{ for all } t = 1, 2, \ldots, k, \\ 0, & \text{otherwise}, \end{cases} \tag{11} $$

for $i = 1, 2, \ldots, n$ and $l = 1, 2, \ldots, k$, where

$$ D(i, l) = \sum_{j=1}^{d} w_{lj}\ln\left(1 + \left(x_{ij} - z_{lj}\right)^2\right) + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}. $$
Proof. Since W and Z are fixed and the rows of the partition matrix U are independent of each other, the objective function is minimized if, for each $i = 1, 2, \ldots, n$, the following function is minimized:

$$ f_i\left(u_{i1}, u_{i2}, \ldots, u_{ik}\right) = \sum_{l=1}^{k} u_{il}\, D(i, l). \tag{12} $$

Note that $u_{il} \in \{0, 1\}$ and $\sum_{l=1}^{k} u_{il} = 1$. The function defined in Equation (12) is minimized if Equation (11) holds. This completes the proof.
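In an implementation, the assignment cost D(i, l) of Theorem 3.1 can be computed directly. The following Java sketch (our helper under the notation above; the class and method names are ours) shows the computation; unlike the EWKM assignment step, the entropy term depends on l through $w_l$ and therefore must be included in the comparison:

```java
// Sketch of the assignment cost D(i, l) of Theorem 3.1.
public class LekmAssignment {
    static double cost(double[] xi, double[] zl, double[] wl, double lambda) {
        double cost = 0.0;
        for (int j = 0; j < xi.length; j++) {
            double diff = xi[j] - zl[j];
            cost += wl[j] * Math.log1p(diff * diff);   // w_lj * ln(1 + (x_ij - z_lj)^2)
            cost += lambda * wl[j] * Math.log(wl[j]);  // lambda * w_lj * ln(w_lj)
        }
        return cost;
    }
}
```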
Theorem 3.2. Let U and Z be fixed. Then the weight matrix W that minimizes the objective function P(U, W, Z) is given by

$$ w_{lj} = \frac{\exp\left(-V_{lj}/\lambda\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\lambda\right)} \tag{13} $$

for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$, where

$$ V_{lj} = \frac{\sum_{i=1}^{n} u_{il}\ln\left(1 + \left(x_{ij} - z_{lj}\right)^2\right)}{\sum_{i=1}^{n} u_{il}}. $$

Proof. The weight matrix W that minimizes the objective function P(U, W, Z) subject to the constraints (5c) and (5d) is the matrix W that minimizes the following function:

$$ L(W, \mu) = \sum_{l=1}^{k} n_l\left[\sum_{j=1}^{d} w_{lj} V_{lj} + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right] + \sum_{l=1}^{k}\mu_l\left(\sum_{j=1}^{d} w_{lj} - 1\right), \tag{14} $$

where $n_l = \sum_{i=1}^{n} u_{il}$ is the size of the lth cluster and $\mu_1, \mu_2, \ldots, \mu_k$ are Lagrange multipliers. The weight matrix W that minimizes Equation (14) satisfies the following equations:

$$ \frac{\partial L}{\partial w_{lj}} = n_l\left(V_{lj} + \lambda\ln w_{lj} + \lambda\right) + \mu_l = 0, \qquad \frac{\partial L}{\partial \mu_l} = \sum_{j=1}^{d} w_{lj} - 1 = 0. $$

Solving the above equations for $w_{lj}$ leads to Equation (13). This completes the proof.

From Equation (13) we see that the attribute weights of the lth cluster are the exponential normalizations of $V_{l1}, V_{l2}, \ldots, V_{ld}$. Since $V_{lj}$ is an average of log-transformed distances, the range of the magnitudes of $V_{l1}, V_{l2}, \ldots, V_{ld}$ is small. Hence the weights are less sensitive to the parameter $\lambda$.

Theorem 3.3. Let U and W be fixed. Then the set of cluster centers Z that minimizes the objective function P(U, W, Z) satisfies the following nonlinear equations:

$$ z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\, x_{ij}\big/\left(1 + \left(x_{ij} - z_{lj}\right)^2\right)}{\sum_{i=1}^{n} u_{il}\big/\left(1 + \left(x_{ij} - z_{lj}\right)^2\right)} \tag{15} $$

for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$.
Proof. If the set of cluster centers Z minimizes the objective function P(U, W, Z), then for all $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$, the derivative of P(U, W, Z) with respect to $z_{lj}$ is equal to zero. In other words, we have

$$ \frac{\partial P(U, W, Z)}{\partial z_{lj}} = -2 w_{lj}\sum_{i=1}^{n} u_{il}\,\frac{x_{ij} - z_{lj}}{1 + \left(x_{ij} - z_{lj}\right)^2} = 0. $$

Since $w_{lj} > 0$, we have

$$ \sum_{i=1}^{n} u_{il}\,\frac{x_{ij} - z_{lj}}{1 + \left(x_{ij} - z_{lj}\right)^2} = 0, $$

from which Equation (15) follows.
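As a concrete illustration of the remark following Theorem 3.2, the following Java sketch (with made-up numbers, not data from our experiments) compares the dispersions and the resulting entropy weights with and without the log transformation:

```java
// Illustration: log-transformed dispersions have a much narrower range than
// squared-distance dispersions, so the weights react mildly to lambda.
public class LogDispersionDemo {
    // Average of squared distances (a dispersion without the log transformation).
    static double sqDispersion(double[] xs, double z) {
        double v = 0.0;
        for (double x : xs) { double t = x - z; v += t * t; }
        return v / xs.length;
    }

    // Average of log-transformed squared distances (the V_lj of Theorem 3.2).
    static double logDispersion(double[] xs, double z) {
        double v = 0.0;
        for (double x : xs) { double t = x - z; v += Math.log1p(t * t); }
        return v / xs.length;
    }

    // Exponential normalization of Equation (13).
    static double[] weights(double[] v, double lambda) {
        double[] w = new double[v.length];
        double sum = 0.0;
        for (int j = 0; j < v.length; j++) { w[j] = Math.exp(-v[j] / lambda); sum += w[j]; }
        for (int j = 0; j < v.length; j++) w[j] /= sum;
        return w;
    }

    public static void main(String[] args) {
        // One cluster, two attributes; the second attribute is far more dispersed.
        double[] a1 = {0.9, 1.0, 1.1, 1.2};
        double[] a2 = {-3.0, 1.0, 4.0, 6.0};
        double[] vSq = {sqDispersion(a1, 1.05), sqDispersion(a2, 2.0)};
        double[] vLog = {logDispersion(a1, 1.05), logDispersion(a2, 2.0)};
        System.out.printf("dispersions (squared): %.4f, %.4f%n", vSq[0], vSq[1]);
        System.out.printf("dispersions (log):     %.4f, %.4f%n", vLog[0], vLog[1]);
        double[] wSq = weights(vSq, 1.0);
        double[] wLog = weights(vLog, 1.0);
        System.out.printf("weights (squared, lambda = 1): %.4f, %.4f%n", wSq[0], wSq[1]);
        System.out.printf("weights (log,     lambda = 1): %.4f, %.4f%n", wLog[0], wLog[1]);
    }
}
```

With these numbers, the squared-distance weights are roughly (1.00, 0.00), while the log-transformed weights are roughly (0.89, 0.11): the log transformation compresses the dispersion range and keeps both attributes in play.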
In the standard k-means algorithm, the EWKM algorithm, and the LAC algorithm, the center of a cluster is calculated as the average of the points in the cluster. In the LEKM algorithm, however, the center of a cluster is governed by a nonlinear equation in such a way that the center is a weighted average of the points in the cluster. In addition, if a point is far away from its center, then the point is given a low weight in the center calculation. As a result, the LEKM algorithm is less sensitive to outliers than the EWKM algorithm and the LAC algorithm. Since the LEKM algorithm is an iterative algorithm, we can in practice update the cluster centers as follows:

$$ z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\, x_{ij}\big/\left(1 + \left(x_{ij} - z^{*}_{lj}\right)^2\right)}{\sum_{i=1}^{n} u_{il}\big/\left(1 + \left(x_{ij} - z^{*}_{lj}\right)^2\right)} \tag{16} $$

for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, d$, where $Z^* = \{z^*_1, z^*_2, \ldots, z^*_k\}$ is the set of cluster centers from the previous iteration. When the algorithm converges, the cluster centers in the current iteration are the same as those from the previous iteration, and Equation (16) is the same as Equation (15).
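The update in Equation (16) can be implemented as a weighted mean, as the following Java sketch (our illustration; the class and method names are ours) shows:

```java
// Sketch of the practical center update of Equation (16): a weighted mean in
// which a point far from the previous center zPrev contributes with weight
// 1 / (1 + distance^2), which down-weights outliers.
public class LekmCenters {
    static void update(double[][] x, int[] label, double[][] zPrev, double[][] zNew) {
        int n = x.length, d = x[0].length, k = zPrev.length;
        for (int l = 0; l < k; l++) {
            for (int j = 0; j < d; j++) {
                double num = 0.0, den = 0.0;
                for (int i = 0; i < n; i++) {
                    if (label[i] != l) continue;
                    double diff = x[i][j] - zPrev[l][j];
                    double weight = 1.0 / (1.0 + diff * diff);
                    num += weight * x[i][j];
                    den += weight;
                }
                if (den > 0.0) zNew[l][j] = num / den;
            }
        }
    }
}
```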
To find the optimal values of U, W, and Z that minimize the objective function given in Equation (10), the LEKM algorithm proceeds iteratively by updating one of U, W, and Z at a time with the other two fixed. The pseudo-code of the LEKM algorithm is shown in Algorithm 1. The computational complexity of one iteration of the LEKM algorithm is O(nkd). Although the runtime complexity of the LEKM algorithm is the same as those of the EWKM algorithm and the LAC algorithm, we expect the LEKM algorithm to be slower than the EWKM algorithm and the LAC algorithm because more operations are involved in the LEKM algorithm.
The LEKM algorithm requires four parameters: k, λ, δ, and N_max. The parameter k is the desired number of clusters. The parameter λ controls the smoothness of the attribute weights: the larger the value of λ, the more uniform the attribute weights. The last two parameters are used to terminate the algorithm: the algorithm stops when the change of the objective function value between two consecutive iterations is less than δ or when the number of iterations reaches N_max. Table 1 gives default values of these parameters.
4. Numerical experiments. In this section, we present numerical experiments based on both synthetic data and real data to demonstrate the performance of the LEKM algorithm. We also compare the LEKM algorithm with the EWKM algorithm and the LAC algorithm in terms of accuracy and runtime. We implemented all three algorithms in Java and used the same convergence criterion, as shown in Algorithm 1.
Algorithm 1: Pseudo-code of the LEKM Algorithm.
Input: X, k, λ, δ, N_max
Output: Optimal values of U, W, and Z
1  Initialize W^(0) with equal values (i.e., set w_lj = 1/d);
2  Initialize Z^(0) by selecting k points from X randomly;
3  Update U^(0) according to Theorem 3.1;
4  s ← 0;
5  P^(0) ← 0;
6  repeat
7      s ← s + 1;
8      Update Z^(s) according to Equation (16);
9      Update W^(s) according to Theorem 3.2;
10     Update U^(s) according to Theorem 3.1;
11     P^(s) ← P(U^(s), W^(s), Z^(s));
12 until |P^(s) − P^(s−1)| < δ or s ≥ N_max;

In our experiments, we use the corrected Rand index [8,13] to measure the accuracy of clustering results. The corrected Rand index is calculated from two partitions of the same dataset, and its value ranges from −1 to 1, with 1 indicating perfect agreement between the two partitions and 0 indicating agreement by chance. In general, the higher the corrected Rand index, the better the clustering result.
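The corrected Rand index can be computed from the contingency table of the two partitions. The following Java sketch (our helper based on the standard Hubert-Arabie formula, not code from [8,13]) illustrates the computation:

```java
// Sketch of the corrected Rand index via the Hubert-Arabie formula.
// Labels are 0-based; kA and kB are the numbers of clusters in the two partitions.
public class CorrectedRand {
    static double index(int[] labelA, int[] labelB, int kA, int kB) {
        int n = labelA.length;
        long[][] table = new long[kA][kB];  // contingency table of the two partitions
        for (int i = 0; i < n; i++) table[labelA[i]][labelB[i]]++;
        long[] rowSum = new long[kA];
        long[] colSum = new long[kB];
        long sumCells = 0;
        for (int a = 0; a < kA; a++) {
            for (int b = 0; b < kB; b++) {
                rowSum[a] += table[a][b];
                colSum[b] += table[a][b];
                sumCells += choose2(table[a][b]);
            }
        }
        long sumA = 0, sumB = 0;
        for (long r : rowSum) sumA += choose2(r);
        for (long c : colSum) sumB += choose2(c);
        double expected = (double) sumA * sumB / choose2(n);
        double maxIndex = 0.5 * (sumA + sumB);
        return (sumCells - expected) / (maxIndex - expected);
    }

    static long choose2(long m) { return m * (m - 1) / 2; }
}
```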
Since all three algorithms are k-means-type algorithms, they are sensitive to initial cluster centers [6,13]. To compare the performance of the three algorithms, we run each algorithm 100 times and calculate the average accuracy and runtime. In each run, we use a different seed to select random initial cluster centers. To compare the three algorithms in a consistent way, we use the same 100 seeds for all three algorithms. To test the impact of the parameters (i.e., γ in EWKM, h in LAC, and λ in LEKM), we use five different values for the parameter: 1, 2, 4, 8, and 16.

4.1. Experiments on synthetic data. To test the performance of the LEKM algorithm, we generated two synthetic datasets. The first synthetic dataset is a 2-dimensional dataset with two clusters and is shown in Figure 1. From the figure we see that the cluster at the top is compact, but the cluster at the bottom contains several points that are far away from the cluster center. We can consider this dataset as a dataset containing noise. Table 2 shows the average corrected Rand index of 100 runs of the three algorithms on the first synthetic dataset. From the table we see that the LEKM algorithm produced more accurate results than the LAC algorithm and the EWKM algorithm. The EWKM algorithm produced the least accurate results. Since the dispersion of an attribute in a cluster is normalized by the size of the cluster in the LAC and LEKM algorithms, the LAC and LEKM algorithms are less sensitive to the parameter.

Table 3 shows the confusion matrices produced by the best run of the three algorithms on the first synthetic dataset. We ran the EWKM algorithm, the LAC algorithm, and the LEKM algorithm 100 times on the first synthetic dataset with parameter 2 (i.e., γ = 2 in EWKM, h = 2 in LAC, and λ = 2 in LEKM) and chose the best run to be the run with the lowest objective function value. From Table 3 we see that the LEKM algorithm was able to recover the two clusters of the first synthetic dataset correctly. The LAC algorithm clustered one point incorrectly. The EWKM algorithm is sensitive to noise and clustered many points incorrectly. Table 4 shows the attribute weights of the two clusters produced by the best runs of the three algorithms. From the table we can see that the attribute weights produced by the EWKM algorithm are dominated by one attribute. The attribute weights of one cluster produced by the LAC algorithm are also affected by the noise in the cluster. The attribute weights of the clusters produced by the LEKM algorithm seem reasonable, as the two clusters are formed in the full space and approximately equal attribute weights are expected.
Table 5 shows the average runtime of the 100 runs of the three algorithms on the first synthetic dataset. From the table we see that the EWKM algorithm converged the fastest. The LAC algorithm and the LEKM algorithm converged in about the same time.
The second synthetic dataset is a 100-dimensional dataset with four clusters. Table 6 shows the sizes and dimensions of the four clusters. This dataset was also used to test the SAP algorithm developed in [13]. Table 7 summarizes the clustering results of the three algorithms. From the table we see that the LEKM algorithm produced the most accurate results when the parameter is small. When the parameter is large, the attribute weights calculated by the LEKM algorithm become approximately the same. Since the clusters are embedded in subspaces, assigning approximately the same weight to the attributes prevents the LEKM algorithm from recovering these clusters.
Table 8 shows the confusion matrices produced by the runs of the three algorithms with the lowest objective function value. From the table we see that only three points were clustered incorrectly by the LEKM algorithm. Many points were clustered incorrectly by the EWKM algorithm and the LAC algorithm. Figures 2, 3, and 4 plot the attribute weights of the four clusters corresponding to the confusion matrices given in Table 8. From Figures 2 and 3 we can see that the attribute weights were dominated by a single attribute. Figure 4 shows that the LEKM algorithm was able to recover all the subspace dimensions correctly.
Table 9 shows the average runtime of 100 runs of the three algorithms on the second synthetic dataset. From the table we see that the LEKM algorithm is slower than the other two algorithms. Since the center calculation of the LEKM algorithm is more complicated than that of the EWKM algorithm and the LAC algorithm, it is expected that the LEKM algorithm is slower than the other two algorithms. In summary, the test results on synthetic datasets have shown that the LEKM algorithm is able to recover clusters from noisy data and to recover clusters embedded in subspaces. The test results also show that the LEKM algorithm is less sensitive to noise and parameter values than the EWKM algorithm and the LAC algorithm. However, the LEKM algorithm is in general slower than the other two algorithms due to its more complex center calculation.

4.2. Experiments on real data. To test the algorithms on real data, we obtained two cancer gene expression datasets from [8]. The first dataset contains gene expression data of human liver cancers, and the second dataset contains gene expression data of breast tumors and colon tumors. Table 10 shows the information of the two real datasets. The two datasets have known labels, which tell the type of sample of each data point. The two datasets were also used to test the SAP algorithm in [13].

Table 11 and Table 12 summarize the average accuracy and the average runtime of 100 runs of the three algorithms on the Chen-2002 dataset, respectively. From the average corrected Rand index shown in Table 11, we see that the LEKM algorithm produced more accurate results than the EWKM algorithm and the LAC algorithm did. However, the LEKM algorithm was slower than the other two algorithms.

The average accuracy and runtime of 100 runs of the three algorithms on the Chowdary-2006 dataset are shown in Table 13 and Table 14, respectively. From Table 13 we see that the LEKM algorithm again produced more accurate clustering results than the other two algorithms did. When the parameter was set to 1, the LAC algorithm produced better results than the EWKM algorithm did. For other cases, however, the EWKM algorithm produced better results than the LAC algorithm did. The LAC algorithm and the EWKM algorithm are much faster than the LEKM algorithm, as shown in Table 14.
In summary, the test results on real datasets show that the LEKM algorithm produced more accurate clustering results on average than the EWKM algorithm and the LAC algorithm did. However, the LEKM algorithm was slower than the other two algorithms.
5. Concluding remarks. The EWKM algorithm [21] and the LAC algorithm [9] are two soft subspace clustering algorithms that are similar to each other. In both algorithms, the attribute weights of a cluster are calculated as exponential normalizations of the negative attribute dispersions in the cluster scaled by a parameter. Setting the parameter is a challenge when the attribute dispersions in a cluster have a large range. In this paper, we proposed the LEKM (log-transformed entropy weighting k-means) algorithm, which uses log-transformed distances in the objective function so that the attribute dispersions in a cluster are smaller than those in the EWKM algorithm and the LAC algorithm. The proposed LEKM algorithm has the following two properties: first, the LEKM algorithm allows users to choose a value for the parameter easily because the attribute dispersions in a cluster have a small range; second, the LEKM algorithm is less sensitive to noise because data points far away from their corresponding cluster centers are given small weights in the cluster center calculation.
We tested the performance of the LEKM algorithm and compared it with the EWKM algorithm and the LAC algorithm. The test results on both synthetic datasets and real datasets have shown that the LEKM algorithm is able to outperform the EWKM algorithm and the LAC algorithm in terms of accuracy. However, one limitation of the LEKM algorithm is that it is slower than the other two algorithms, because updating the cluster centers in each iteration of the LEKM algorithm is more complicated than in the other two algorithms.
Another limitation of the LEKM algorithm is that it is sensitive to initial cluster centers. This limitation is common to most k-means-type algorithms, which include the EWKM algorithm and the LAC algorithm. Efficient cluster center initialization methods [24,5,6] can be used to improve the performance of k-means-type algorithms, including the LEKM algorithm.

Table 3. The confusion matrices of the first synthetic dataset correspond to the runs with the lowest objective function values. The parameter used in these runs is 2. The labels "1" and "2" in the first row indicate the given clusters. The labels "C1" and "C2" in the first column indicate the found clusters. (a) EWKM. (b) LAC. (c) LEKM.

Table 4. The attribute weights of the two clusters correspond to the runs with the lowest objective function values. The parameter used in these runs is 2. The labels "C1" and "C2" in the first column indicate the found clusters. (a) EWKM. (b) LAC. (c) LEKM.

Figure 2. Attribute weights of the four clusters produced by the EWKM algorithm.

Figure 3. Attribute weights of the four clusters produced by the LAC algorithm.

Figure 4. Attribute weights of the four clusters produced by the LEKM algorithm.

Table 2. The average accuracy of 100 runs of the three algorithms on the first synthetic dataset. The numbers in parentheses are the corresponding standard deviations over the 100 runs. The parameter refers to γ, h, and λ in EWKM, LAC, and LEKM, respectively.
Figure 1. A 2-dimensional dataset with two clusters.

Table 5. The average runtime of the three algorithms on the first synthetic dataset. The numbers in parentheses are the corresponding standard deviations over the 100 runs. The numbers are in seconds.

Table 6. A 100-dimensional dataset with 4 subspace clusters.

Table 7. The average accuracy of 100 runs of the three algorithms on the second synthetic dataset.

Table 8. Confusion matrices of the second synthetic dataset produced by the runs with the lowest objective function values. In these runs, the parameter was set to 2. (a) EWKM. (b) LAC. (c) LEKM.

Table 9. The average runtime of 100 runs of the three algorithms on the second synthetic dataset.

Table 10. Two real gene expression datasets.


Table 11. The average accuracy of 100 runs of the three algorithms on the Chen-2002 dataset.

Table 12. The average runtime of 100 runs of the three algorithms on the Chen-2002 dataset.

Table 13. The average accuracy of 100 runs of the three algorithms on the Chowdary-2006 dataset.

Table 14. The average runtime of 100 runs of the three algorithms on the Chowdary-2006 dataset.