Authors:
Patchanok Srisuradetchai, Korn Suksrikran
Abstract
The k-nearest neighbors (KNN) regression method, known for its nonparametric nature, is valued for its simplicity and its effectiveness in handling complex structured data, particularly in big data contexts. However, the method is susceptible to overfitting and fit discontinuity. This paper introduces random kernel k-nearest neighbors (RK-KNN) regression, a novel approach well suited to big data applications that integrates kernel smoothing with bootstrap sampling to enhance prediction accuracy and model robustness. RK-KNN aggregates the predictions of multiple kernel KNN (K-KNN) models, each fitted to a random sample drawn from the training dataset and a selected subset of input variables. A comprehensive evaluation on 15 diverse datasets, employing various kernel functions including the Gaussian and Epanechnikov kernels, demonstrates its superior performance: compared with standard KNN and random KNN (R-KNN), RK-KNN significantly reduces the root mean square error (RMSE) and mean absolute error (MAE) and improves R-squared values. The RK-KNN variant whose kernel function yields the lowest RMSE is then benchmarked against state-of-the-art methods, including support vector regression, artificial neural networks, and random forests.
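To make the procedure concrete, the sketch below (Python with NumPy) implements the core RK-KNN loop as the abstract describes it: each ensemble member draws a bootstrap sample of training rows and a random subset of input variables, predicts with kernel-weighted k-nearest-neighbor averaging, and the member predictions are averaged. This is a minimal illustrative sketch, not the authors' implementation; in particular, the bandwidth choice (the distance to the k-th neighbor) and the default subset size of sqrt(p) features are assumptions made here for illustration.

import numpy as np

def gaussian_kernel(u):
    # Gaussian kernel weight for scaled distances u.
    return np.exp(-0.5 * u**2)

def epanechnikov_kernel(u):
    # Epanechnikov kernel: nonzero only for |u| <= 1.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kernel_knn_predict(X_train, y_train, x_query, k=5, kernel=gaussian_kernel):
    # Kernel-weighted average of the k nearest neighbors' responses.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    # Assumed bandwidth: distance to the k-th neighbor (guard against zero).
    h = dists[idx[-1]] if dists[idx[-1]] > 0 else 1.0
    w = kernel(dists[idx] / h)
    if w.sum() == 0:  # e.g., Epanechnikov assigns weight 0 to the k-th neighbor
        return y_train[idx].mean()
    return np.average(y_train[idx], weights=w)

def rk_knn_predict(X_train, y_train, X_test, n_estimators=50, k=5,
                   n_features=None, kernel=gaussian_kernel, seed=0):
    # RK-KNN: average K-KNN predictions over bootstrap row samples
    # and random subsets of the input variables.
    rng = np.random.default_rng(seed)
    n, p = X_train.shape
    if n_features is None:
        n_features = max(1, int(np.sqrt(p)))  # assumed default subset size
    preds = np.empty((n_estimators, X_test.shape[0]))
    for b in range(n_estimators):
        rows = rng.choice(n, size=n, replace=True)            # bootstrap sample
        cols = rng.choice(p, size=n_features, replace=False)  # random variables
        Xb, yb = X_train[np.ix_(rows, cols)], y_train[rows]
        for i, x in enumerate(X_test[:, cols]):
            preds[b, i] = kernel_knn_predict(Xb, yb, x, k=k, kernel=kernel)
    return preds.mean(axis=0)

For example, on synthetic data:

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
y_hat = rk_knn_predict(X[:150], y[:150], X[150:], n_estimators=25)

Averaging over many resampled, feature-subsetted K-KNN fits is what smooths the discontinuities of a single KNN fit and curbs overfitting.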
Cited by: 1 article.