Affiliation:
1. National University of Singapore, Singapore, Singapore
Abstract
Large, sparse categorical data is a natural way to represent complex data such as sequences, trees, and graphs. Such data is prevalent in many applications; e.g., Criteo released a terabyte-size click log of 4 billion records with millions of dimensions. While most existing clustering algorithms, such as k-Means, work well on dense, numerical data, relatively few algorithms can cluster sets of sparse categorical features.
In this paper, we propose a new method called k-FreqItems that performs scalable clustering over high-dimensional, sparse data. To make clustering results easily interpretable, k-FreqItems is built upon a novel sparse center representation called FreqItem, which chooses a set of high-frequency, non-zero dimensions to represent each cluster. Unlike most existing clustering algorithms, which adopt Euclidean distance as the similarity measure, k-FreqItems uses the popular Jaccard distance for comparing sets.
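As a rough illustration of the two notions above, here is a minimal Python sketch of Jaccard distance over sets and a FreqItem-style sparse center that keeps dimensions occurring in at least a threshold fraction of a cluster's sets. The 50% default threshold and the helper names are assumptions for illustration, not the paper's exact formulation.

```python
from collections import Counter

def jaccard_dist(a: set, b: set) -> float:
    """Jaccard distance: 1 - |A ∩ B| / |A ∪ B| (0 for two empty sets)."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def freqitem_center(cluster: list, threshold: float = 0.5) -> set:
    """Sparse center: keep the dimensions that appear in at least
    `threshold` fraction of the cluster's sets."""
    counts = Counter(d for s in cluster for d in s)
    return {d for d, c in counts.items() if c >= threshold * len(cluster)}
```

For example, the cluster `[{1, 2}, {1, 3}, {1, 2, 4}]` yields the center `{1, 2}`: dimension 1 appears in all three sets and dimension 2 in two of three, while dimensions 3 and 4 fall below the 50% threshold.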
Since the efficiency and effectiveness of k-FreqItems are highly dependent on an initial set of representative seeds, we introduce a new randomized initialization method, SILK, to deal with the seeding problem of k-FreqItems. SILK uses locality-sensitive hash (LSH) functions for oversampling and identifies frequently co-occurring data in LSH buckets to determine a set of promising seeds, allowing k-FreqItems to converge swiftly in an iterative process. Experimental results over seven real-world sparse data sets show that SILK seeding is around 1.1–3.2× faster yet more effective than state-of-the-art seeding methods. Notably, SILK scales up well to a billion data objects on a commodity machine with 4 GPUs. The code is available at https://github.com/HuangQiang/k-FreqItems.
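The bucketing step behind such LSH-based seeding can be hinted at with a MinHash sketch: sets whose MinHash signatures collide fall into the same bucket, and collisions are more likely when their Jaccard distance is small. The hash construction and parameters below are illustrative assumptions, not SILK's actual procedure, which additionally mines frequently co-occurring data across buckets to pick seeds.

```python
import random
from collections import defaultdict

def minhash_signature(s, hash_funcs):
    """One MinHash value per hash function: the minimum hash over elements."""
    return tuple(min(h(x) for x in s) for h in hash_funcs)

def lsh_buckets(data, num_hashes=4, seed=42):
    """Group the indices of sets whose MinHash signatures collide."""
    rng = random.Random(seed)
    p = 2_147_483_647  # large prime for universal-style hashing
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    hash_funcs = [lambda x, a=a, b=b: (a * x + b) % p for a, b in params]
    buckets = defaultdict(list)
    for i, s in enumerate(data):
        buckets[minhash_signature(s, hash_funcs)].append(i)
    return buckets
```

Identical sets always receive identical signatures and thus share a bucket; similar but non-identical sets collide with probability governed by their Jaccard similarity and the number of hash functions.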
Funder
National University of Singapore Centre for Trusted Internet and Community Industry Collaborative Projects
National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative
Publisher
Association for Computing Machinery (ACM)
Cited by
1 article.
1. The Effect of K-Means Clustering on Collaborative Filtering in Book Recommendation. In 2024 12th International Conference on Information and Education Technology (ICIET), 2024-03-18.