Addressing limitations of the K-means clustering algorithm: outliers, non-spherical data, and optimal cluster selection
Published: 2024
Volume: 9, Issue: 9
Pages: 25070-25097
ISSN: 2473-6988
Container-title: AIMS Mathematics
Short-container-title: MATH
Authors:
Khan Iliyas Karim 1, Daud Hanita Binti 1, Zainuddin Nooraini Binti 1, Sokkalingam Rajalingam 1, Abdussamad 1, Museeb Abdul 1, Inayat Agha 2
Affiliations:
1. Fundamental and Applied Science Department, Universiti Teknologi PETRONAS, Perak 32610, Malaysia
2. Department of Statistics, University of Malakand, Chakdara, Lower Dir, Khyber Pakhtunkhwa, Pakistan
Abstract
<p>Clustering is essential in data analysis, with K-means clustering being widely used for its simplicity and efficiency. However, several challenges can affect its performance, including the handling of outliers, the transformation of non-spherical data into a spherical form, and the selection of the optimal number of clusters. This paper addressed these challenges by developing and enhancing specific models. The primary objective was to improve the robustness and accuracy of K-means clustering in the presence of these issues. To handle outliers, this research employed the winsorization method, which uses threshold values to minimize the influence of extreme data points. For the transformation of non-spherical data into a spherical form, the KROMD method was introduced, which combines Manhattan distance with a Gaussian kernel. This approach ensured a more accurate representation of the data, facilitating better clustering performance. The third objective focused on enhancing the gap statistic for selecting the optimal number of clusters. This was achieved by standardizing the expected value of the reference data using an exponential distribution, providing a more reliable criterion for determining the appropriate number of clusters. Experimental results demonstrated that the winsorization method effectively handled outliers, leading to improved clustering stability. The KROMD method significantly enhanced the accuracy of converting non-spherical data into spherical form, achieving an accuracy of 0.83 with an execution time of 0.14 seconds. Furthermore, the enhanced gap statistic method outperformed other techniques in selecting the optimal number of clusters, achieving an accuracy of 93.35 percent with an execution time of 0.1433 seconds. These advancements collectively enhance the performance of K-means clustering, making it more robust and effective for complex data analysis tasks.</p>
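The abstract's first idea, winsorization, caps extreme values at threshold quantiles so that outliers cannot drag K-means centroids. A minimal sketch of percentile-based winsorization follows; the 5th/95th percentile cutoffs are illustrative assumptions, not the thresholds used in the paper.

```python
import numpy as np

def winsorize(x, lower_pct=5, upper_pct=95):
    """Clip each feature of x to its [lower_pct, upper_pct] percentile range.

    Extreme points are replaced by the threshold values rather than removed,
    which limits their influence on the K-means centroid updates.
    """
    lo = np.percentile(x, lower_pct, axis=0)   # per-feature lower threshold
    hi = np.percentile(x, upper_pct, axis=0)   # per-feature upper threshold
    return np.clip(x, lo, hi)

# Example: a single extreme point is pulled back toward the bulk of the data.
data = np.array([[1.0], [2.0], [3.0], [100.0]])
capped = winsorize(data)
```

After capping, the winsorized data can be passed to any standard K-means routine unchanged, since only the values (not the sample count) are altered.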
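The KROMD transformation combines Manhattan distance with a Gaussian kernel. The abstract does not give the exact formula, so the sketch below assumes the common form exp(-d1 / (2·sigma^2)) applied to pairwise L1 distances; the function name, bandwidth, and this composition are assumptions for illustration only.

```python
import numpy as np

def gaussian_manhattan_kernel(X, sigma=1.0):
    """Pairwise similarity matrix: Gaussian kernel over Manhattan (L1) distances.

    Mapping points into this kernel space is one way a non-spherical cloud can
    be represented so that spherical-cluster methods like K-means apply better.
    """
    # L1 (Manhattan) distance between every pair of rows of X
    d1 = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1)
    # Gaussian kernel of the L1 distances (assumed KROMD-style combination)
    return np.exp(-d1 / (2.0 * sigma ** 2))

# Example: two points at L1 distance 2 get similarity exp(-1) with sigma = 1.
K = gaussian_manhattan_kernel(np.array([[0.0, 0.0], [1.0, 1.0]]), sigma=1.0)
```

Each diagonal entry is 1 (a point is maximally similar to itself), and similarity decays smoothly with L1 distance, controlled by the bandwidth sigma.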
Publisher
American Institute of Mathematical Sciences (AIMS)
References (49 articles)
1. X. Du, Y. He, J. Z. Huang, Random sample partition-based clustering ensemble algorithm for big data, 2021 IEEE International Conference on Big Data (Big Data), 2021, 5885–5887. https://doi.org/10.1109/BigData52589.2021.9671297
2. B. Huang, Z. Liu, J. Chen, A. Liu, Q. Liu, Q. He, Behavior pattern clustering in blockchain networks, Multimed. Tools Appl., 76 (2017), 20099–20110. https://doi.org/10.1007/s11042-017-4396-4
3. Y. Djenouri, A. Belhadi, D. Djenouri, J. C. W. Lin, Cluster-based information retrieval using pattern mining, Appl. Intell., 51 (2021), 1888–1903. https://doi.org/10.1007/s10489-020-01922-x
4. C. Ouyang, C. Liao, D. Zhu, Y. Zheng, C. Zhou, C. Zou, Compound improved Harris hawks optimization for global and engineering optimization, Cluster Comput., 2024. https://doi.org/10.1007/s10586-024-04348-z
5. J. Xu, T. Li, D. Zhang, J. Wu, Ensemble clustering via fusing global and local structure information, Expert Syst. Appl., 237 (2024), 121557. https://doi.org/10.1016/j.eswa.2023.121557