Affiliation:
1. Computer Science Department, University of Guelma, Guelma, Algeria
2. Computer Science Department, University of Setif 1, Setif, Algeria
Abstract
The curse of high dimensionality in text classification is a pressing problem that calls for efficient feature selection (FS) methods to improve classification accuracy and reduce learning time. Existing filter-based FS methods evaluate each feature independently of related ones, which can lead to selecting a large number of redundant features, especially in high-dimensional datasets, increasing learning time and degrading classification performance. Information theory-based methods, in contrast, aim to maximize a feature's dependency on the class variable while minimizing its redundancy with all previously selected features, which becomes impractical as the feature space grows. To overcome the time complexity of information theory-based methods while still addressing redundancy, we propose in this article a new feature selection method for text classification, termed correlation-based redundancy removal, which minimizes redundancy within subsets of features having close mutual information scores, without sequentially scanning all already-selected features. The underlying idea is that it is unnecessary to assess the redundancy between a dominant feature carrying high classification information and an irrelevant feature carrying low classification information (and vice versa), since such features are implicitly weakly correlated. Our method, tested on seven datasets using both traditional classifiers (Naive Bayes and support vector machines) and deep learning models (long short-term memory and convolutional neural networks), demonstrated strong performance, reducing redundancy and improving classification compared with ten competitive metrics.
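The following is a minimal, hypothetical sketch of the core idea as described in the abstract: redundancy is checked only between features whose mutual information (MI) scores with the class are close, rather than against every already-selected feature. The function name, the `mi_gap` grouping threshold, and the `corr_threshold` cutoff are illustrative assumptions, not the paper's exact algorithm or parameter values.

```python
# Sketch of correlation-based redundancy removal (assumptions noted above).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def correlation_based_redundancy_removal(X, y, mi_gap=0.01, corr_threshold=0.9):
    """Return indices of selected (non-redundant) features.

    X : (n_samples, n_features) dense array of feature values
    y : (n_samples,) array of class labels
    mi_gap : two features are compared for redundancy only if their
             MI-with-class scores differ by less than this gap (assumed)
    corr_threshold : absolute correlation above which the lower-MI
                     feature is discarded as redundant (assumed)
    """
    mi = mutual_info_classif(X, y)
    order = np.argsort(mi)[::-1]  # features by decreasing MI with the class
    selected = []
    for idx in order:
        redundant = False
        # selected is kept in decreasing-MI order, so the features with
        # MI scores closest to mi[idx] sit at the end of the list.
        for kept in reversed(selected):
            if mi[kept] - mi[idx] >= mi_gap:
                # All remaining kept features have even larger MI gaps:
                # per the abstract's rationale, a dominant and a weak
                # feature are implicitly weakly correlated, so stop here.
                break
            corr = np.corrcoef(X[:, idx], X[:, kept])[0, 1]
            if abs(corr) > corr_threshold:
                redundant = True
                break
        if not redundant:
            selected.append(idx)
    return selected
```

Restricting the correlation test to close-MI neighbors is what avoids the quadratic pairwise scan of information theory-based methods: each candidate is compared only against the small subset of selected features inside its MI window, so the number of correlation computations stays low even as the feature space grows.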
Subject
Artificial Intelligence, Computational Mathematics
Cited by
1 article.