Authors:
Alaidi Abdul Hadi M., Al_airaji Roa'a M., Alrikabi Haider Th. Salim, Aljazaery Ibtisam A., Abbood Saif Hameed
Abstract
The dark web is an umbrella term that denotes all kinds of illicit activities carried out by anonymous persons or organizations, which makes them difficult to trace. The illicit content on the dark web is constantly updated and changed, and collecting and classifying such illegal activities is a challenging, time-consuming task. This problem has recently emerged as an issue requiring attention from both industry and academia. To this end, this article proposes a crawler that collects dark web pages, cleans them, and saves them in a document database. The crawler then automatically classifies the gathered web pages into five classes. Classification is performed with Term Frequency-Inverse Document Frequency (TF-IDF) features and two classifiers, a Linear Support Vector Classifier (SVC) and Naïve Bayes (NB). The experimental results show that accuracy rates of 92% and 81% were achieved by SVC and NB, respectively.
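A minimal sketch of the classification step described in the abstract, assuming scikit-learn: TF-IDF feature extraction followed by a Linear SVC and a Naïve Bayes classifier. The documents and the five class labels below are hypothetical placeholders, not the paper's dark web dataset, and this is not the authors' actual implementation.

# Minimal sketch of a TF-IDF + Linear SVC / Naive Bayes text-classification
# pipeline in the spirit of the abstract, using scikit-learn.
# The documents and five class labels are hypothetical placeholders,
# not the paper's actual dark web dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical cleaned page texts and their class labels (five classes).
docs = [
    "example page text about topic a",
    "another page discussing topic b",
    "content related to topic c",
    "a page covering topic d",
    "text describing topic e",
]
labels = ["class_1", "class_2", "class_3", "class_4", "class_5"]

# TF-IDF turns each cleaned page into a weighted term-frequency vector,
# which then feeds either classifier.
svc_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("clf", LinearSVC()),
])
nb_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("clf", MultinomialNB()),
])

svc_pipeline.fit(docs, labels)
nb_pipeline.fit(docs, labels)

# Predict the class of a new, unseen page.
print(svc_pipeline.predict(["new page text mentioning topic a"]))
print(nb_pipeline.predict(["new page text mentioning topic b"]))

In practice, the crawled and cleaned dark web pages would replace the placeholder documents, and the reported accuracies would be measured on a held-out test split rather than the training data.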
Publisher
International Association of Online Engineering (IAOE)
Subject
Computer Networks and Communications, Computer Science Applications
Cited by
29 articles.