Author:
Wang Xiaoning, Liu Jia, Zhang Chunjiong
Abstract
Different types of network traffic can be treated as data originating from different domains that share the same problem-solving objective. Previous work on multi-domain machine learning has primarily assumed that data in different domains follow the same distribution, which fails to effectively address the domain shift problem and may not achieve strong performance in every domain. To address these limitations, this study proposes an attention-based bidirectional long short-term memory (Bi-LSTM) model for detecting coordinated network attacks, such as malware detection, VPN encapsulation recognition, and Trojan horse classification. First, HTTP traffic is modeled as a series of natural language sequences, where each request follows strict structural standards and language logic. The Bi-LSTM model is then designed within a multi-domain machine learning framework to recognize network-attack anomalies across different domains. Experiments on real HTTP traffic data sets demonstrate that the proposed model performs well in detecting abnormal network traffic and exhibits strong generalization ability, enabling it to effectively detect different network attacks simultaneously.
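The abstract describes classifying tokenized HTTP request sequences with an attention-weighted Bi-LSTM. Below is a minimal sketch of such an architecture in PyTorch; the vocabulary size, embedding and hidden dimensions, class count, and the additive attention layer are illustrative assumptions and are not taken from the paper itself.

```python
# Minimal sketch of an attention-based Bi-LSTM classifier for tokenized HTTP
# requests. All hyperparameters (vocab size, dimensions, number of classes)
# are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class AttentionBiLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Additive attention scores over the Bi-LSTM hidden states.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded HTTP request tokens
        embedded = self.embedding(token_ids)        # (B, T, E)
        outputs, _ = self.bilstm(embedded)          # (B, T, 2H)
        scores = self.attn(outputs)                 # (B, T, 1)
        weights = torch.softmax(scores, dim=1)      # attention weights over time
        context = (weights * outputs).sum(dim=1)    # (B, 2H) attention-weighted sum
        return self.classifier(context)             # class logits


if __name__ == "__main__":
    model = AttentionBiLSTM()
    dummy_batch = torch.randint(1, 10000, (4, 50))  # 4 requests, 50 tokens each
    print(model(dummy_batch).shape)                 # torch.Size([4, 2])
```

In a multi-domain setting as described in the abstract, a model like this could be trained on traffic from several domains (e.g., malware, VPN, Trojan traffic), with the attention layer highlighting the request tokens most indicative of an anomaly in each domain.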
Publisher
Springer Science and Business Media LLC
Subject
Computer Science Applications, Signal Processing
Cited by
7 articles.