Authors:
Sinaga Lotar Mateus, Sawaluddin, Suwilo Saib
Abstract
Naïve Bayes is a simple probabilistic prediction method based on the application of Bayes' theorem (Bayes' rule), together with a strong (naive) assumption of independence between features. K-Nearest Neighbor (K-NN) belongs to the family of instance-based learning methods; it is also a lazy learning technique, classifying a new or testing object by searching the training data for the group of k objects closest (most similar) to it. Classification is a data mining technique for building a model from a predetermined data set, and data mining techniques are a suitable choice for solving this problem. Comparing the results of the two different classification algorithms makes it possible to identify the better and more efficient algorithm for future use. To keep the research directed, the writers formulate the problem of this study as follows: to determine the accuracy of the Naïve Bayes and K-NN algorithms in classifying data. It is recommended that future work use different datasets to compare the Naïve Bayes and K-NN algorithms.
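The comparison described in the abstract can be sketched in plain Python. This is a minimal illustration, not the paper's code: the tiny two-class dataset, the train/test split, and the choice of k = 3 are all arbitrary assumptions for demonstration. It implements Gaussian Naïve Bayes (per-class mean/variance under the feature-independence assumption) and K-NN (majority vote among the k nearest neighbors by Euclidean distance), then reports each algorithm's accuracy on the testing data.

```python
import math
from collections import Counter

# Toy dataset: (features, class label). Two well-separated clusters.
train = [([1.0, 2.1], 0), ([1.2, 1.9], 0), ([0.9, 2.2], 0), ([1.1, 2.0], 0),
         ([3.0, 0.9], 1), ([3.2, 1.1], 1), ([2.9, 1.0], 1), ([3.1, 0.8], 1)]
test  = [([1.05, 2.05], 0), ([3.05, 0.95], 1)]

def gaussian_nb(train, x):
    # Pick the class maximizing log prior + sum of per-feature Gaussian
    # log-likelihoods (the naive independence assumption).
    best, best_lp = None, -math.inf
    for c in {label for _, label in train}:
        feats = [f for f, label in train if label == c]
        lp = math.log(len(feats) / len(train))  # log prior
        for j in range(len(x)):
            vals = [f[j] for f in feats]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9
            lp += -0.5 * math.log(2 * math.pi * var) - (x[j] - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

def knn(train, x, k=3):
    # Majority vote among the k training points nearest to x.
    nearest = sorted(train,
                     key=lambda fc: sum((a - b) ** 2 for a, b in zip(fc[0], x)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

nb_acc = accuracy(lambda x: gaussian_nb(train, x), test)
knn_acc = accuracy(lambda x: knn(train, x), test)
print(f"Naive Bayes accuracy: {nb_acc:.2f}, K-NN accuracy: {knn_acc:.2f}")
```

On real data the same comparison is usually run with library implementations and a proper train/test split; the point here is only that both classifiers produce an accuracy score that can be compared directly, as the study proposes.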
Cited by 9 articles.