Authors:
Mishra B. K., Mohanty Sachi Nandan, Baidyanath R. R., Ali Shahid, Abduvalieva D., Awwad Fuad A., Ismail Emad A. A., Gupta Manish
Abstract
Clustering is an important tool for data mining since it can uncover key patterns without any prior supervisory information. The initial selection of cluster centers plays a key role in the final quality of the clustering. Researchers often adopt a random approach for this step so that centers are obtained quickly and the model runs faster; by doing so, however, they sacrifice the true essence of subgroup formation and on numerous occasions end up with poor clustering. For this reason we propose a qualitative approach for obtaining the initial cluster centers, with the aim of attaining well-separated clusters. Our initial contributions were alterations to the classical K-means algorithm intended to obtain near-optimal cluster centers. We earlier suggested several approaches, namely far efficient K-means (FEKM), modified center K-means (MCKM) and modified FEKM using Quickhull (MFQ), which produce near-true centers and lead to excellent cluster formation. K-means, which selects its centers randomly, tends to converge slightly earlier than these methods, which is the latter's only weakness. Continuing this line of study with the goal of reducing the computational cost of our methods, we developed farthest leap center selection (FLCS). All of these methods were analyzed thoroughly with respect to clustering effectiveness, correctness, homogeneity, completeness, complexity and the actual execution time until convergence. Clustering effectiveness was assessed with performance indices such as Dunn's index, the Davies–Bouldin index and the silhouette coefficient; correctness with the Rand measure; and homogeneity and completeness with the V-measure. Experimental results on versatile real-world datasets taken from the UCI repository suggest that both FEKM and FLCS obtain well-separated centers, while the latter converges earlier.
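The sketch below is not the authors' FEKM/FLCS code; it only illustrates, under assumed tooling, the evaluation protocol the abstract describes: cluster a UCI dataset with random center selection and with a distance-based seeding scheme, then score both partitions with the stated indices. Here scikit-learn's k-means++ seeding stands in for a farthest-point style initialization, the Iris data serves as the UCI example set, and Dunn's index is omitted because scikit-learn does not provide it.

```python
# Minimal sketch of the comparison described in the abstract (assumptions:
# scikit-learn available, Iris as the UCI dataset, k-means++ as a stand-in
# for a farthest-point style center selection; not the authors' method).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import (davies_bouldin_score, rand_score,
                             silhouette_score, v_measure_score)

X, y_true = load_iris(return_X_y=True)

for init in ("random", "k-means++"):  # random vs. distance-based seeding
    labels = KMeans(n_clusters=3, init=init, n_init=10,
                    random_state=0).fit_predict(X)
    print(f"{init:10s}"
          f"  silhouette={silhouette_score(X, labels):.3f}"
          f"  Davies-Bouldin={davies_bouldin_score(X, labels):.3f}"
          f"  Rand={rand_score(y_true, labels):.3f}"
          f"  V-measure={v_measure_score(y_true, labels):.3f}")
```

Higher silhouette, Rand and V-measure values and a lower Davies–Bouldin value indicate better-separated, more correct and more homogeneous clusters, which is how the abstract's comparison between initialization schemes is read.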
Publisher
Springer Science and Business Media LLC