Parameterized Complexity of Feature Selection for Categorical Data Clustering

Authors:

Sayan Bandyapadhyay (1), Fedor V. Fomin (2), Petr A. Golovach (2), Kirill Simonov (3)

Affiliation:

1. Department of Computer Science, Portland State University, USA

2. Department of Informatics, University of Bergen, Norway

3. Hasso Plattner Institute, University of Potsdam, Germany

Abstract

We develop new algorithmic methods with provable guarantees for feature selection in categorical data clustering. While feature selection is one of the most common approaches to reducing dimensionality in practice, most of the known feature selection methods are heuristics. We study the following mathematical model. We assume that there are some inadvertent (or undesirable) features of the input data that unnecessarily increase the cost of clustering. Consequently, we want to select a subset of the original features such that there is a small-cost clustering on the selected features. More precisely, for given integers ℓ (the number of irrelevant features) and k (the number of clusters), a budget B, and a set of n categorical data points (represented by m-dimensional vectors whose elements belong to a finite set of values Σ), we want to select m − ℓ relevant features such that the cost of an optimal k-clustering on these features does not exceed B. Here the cost of a cluster is the sum of Hamming distances (ℓ₀-distances) between the selected features of the elements of the cluster and its center. The clustering cost is the total sum of the costs of the clusters. We use the framework of parameterized complexity to identify how the complexity of the problem depends on the parameters k, B, and |Σ|. Our main result is an algorithm that solves the Feature Selection problem in time f(k, B, |Σ|) · m^{g(k, |Σ|)} · n² for some functions f and g. In other words, the problem is fixed-parameter tractable parameterized by B when |Σ| and k are constants. Our algorithm for Feature Selection is based on a solution to a more general problem, Constrained Clustering with Outliers. In this problem, we want to delete a certain number of outliers so that the remaining points can be clustered around centers satisfying specific constraints. One interesting fact about Constrained Clustering with Outliers is that, besides Feature Selection, it encompasses many other fundamental problems on categorical data, such as Robust Clustering and Binary and Boolean Low-rank Matrix Approximation with Outliers. Thus, as a byproduct of our theorem, we obtain algorithms for all these problems. We also complement our algorithmic findings with complexity lower bounds.
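For intuition about the objective, the following is a minimal brute-force sketch in Python. It is not the paper's f(k, B, |Σ|) · m^{g(k, |Σ|)} · n² algorithm, and the function names (`feature_selection_cost`, `cluster_cost`) are our own; the sketch only relies on the standard fact that, for a fixed cluster, the optimal center under Hamming cost is the coordinate-wise mode.

```python
from itertools import combinations, product
from collections import Counter

def cluster_cost(members, features):
    """Cost of one cluster restricted to `features`: for each selected
    feature the best center value is the column mode, so the cost is the
    number of points disagreeing with that mode, summed over features."""
    cost = 0
    for f in features:
        column = [p[f] for p in members]
        cost += len(column) - Counter(column).most_common(1)[0][1]
    return cost

def feature_selection_cost(points, num_irrelevant, k):
    """Minimum k-clustering cost over all choices of m - num_irrelevant
    features, by exhaustive search. Exponential in n and m; only meant
    to illustrate the objective on tiny instances."""
    n, m = len(points), len(points[0])
    best = float("inf")
    for features in combinations(range(m), m - num_irrelevant):
        for assignment in product(range(k), repeat=n):  # all k^n partitions
            cost = 0
            for c in range(k):
                members = [points[i] for i in range(n) if assignment[i] == c]
                if members:
                    cost += cluster_cost(members, features)
            best = min(best, cost)
    return best

# Four binary points in 3 dimensions: dropping feature 2 (i.e., l = 1)
# admits a perfect 2-clustering, so the cost fits the budget B = 0.
pts = [(0, 0, 1), (0, 0, 0), (1, 1, 0), (1, 1, 1)]
print(feature_selection_cost(pts, num_irrelevant=1, k=2) <= 0)  # True
```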

Funder

Research Council of Norway (RCN)

Deutsche Forschungsgemeinschaft (DFG)

Publisher

Association for Computing Machinery (ACM)

Subject

Computational Theory and Mathematics, Theoretical Computer Science

