Abstract
The diversity of services and infrastructure in metropolitan edge-to-cloud networks is rising to unprecedented levels. This increases the threat of a wider range of cyber attacks, coupled with a growing integration of constrained infrastructure, particularly at the network edge. Deep reinforcement learning is an attractive approach to detecting attacks, as it reduces dependency on labeled data while improving the ability to classify different attacks. However, current learning approaches are known to be computationally expensive (cost), and the learning experience can be negatively impacted by the presence of outliers and noise (quality). This work tackles both the cost and quality challenges with a novel service-based federated deep reinforcement learning solution, enabling anomaly detection and attack classification at reduced data cost and with better quality. The federated setting in the proposed approach enables multiple edge units to form clusters that follow a bottom-up learning approach. The proposed solution adapts a deep Q-learning network (DQN) for service-tunable flow classification and introduces a novel federated DQN (FDQN) for federated learning. Through such targeted training and validation, variation in data patterns and noise is reduced, leading to improved performance per service at lower training cost. The performance and cost of the solution, along with its sensitivity to exploration parameters, are evaluated on publicly available datasets (UNSW-NB15 and CIC-IDS2018). Evaluation results show that the proposed solution maintains detection accuracy in the range of ≈75–85% with a lower data supply while improving the classification rate by a factor of ≈2.
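To illustrate the federated DQN idea summarized above, the sketch below shows edge clients training small DQN-style flow classifiers locally and a server aggregating their weights by simple averaging. This is not the authors' implementation: the network sizes, feature dimensions, reward shaping, and the FedAvg-style aggregation rule are all assumptions made for illustration only.

```python
# Illustrative sketch only: local DQN-style flow classifiers at edge clients,
# aggregated by element-wise weight averaging (FedAvg-style). All sizes and
# the aggregation rule are assumptions, not the paper's specification.
import copy
import torch
import torch.nn as nn

class FlowDQN(nn.Module):
    """Small Q-network mapping flow features to per-class Q-values."""
    def __init__(self, n_features: int = 20, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def local_update(model: FlowDQN, flows: torch.Tensor, labels: torch.Tensor,
                 epochs: int = 1, lr: float = 1e-3) -> dict:
    """One client's local round: reward 1 for a correct class (action),
    0 otherwise, with a one-step Q-learning style regression target."""
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        q = model(flows)                        # Q-values per class
        actions = q.argmax(dim=1)               # greedy action = predicted class
        rewards = (actions == labels).float()   # reward signal from labels
        target = q.detach().clone()
        target[torch.arange(len(labels)), actions] = rewards
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_average(states: list) -> dict:
    """Server-side aggregation: element-wise mean of client weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

# Usage: three edge clients with synthetic flow data, one federation round.
global_model = FlowDQN()
clients = [(torch.randn(128, 20), torch.randint(0, 5, (128,))) for _ in range(3)]
client_states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(fed_average(client_states))
```

In this hedged reading, each edge unit keeps its traffic local and only exchanges model weights, which is consistent with the abstract's claim of lower data cost; the per-service tuning described in the paper would correspond to training separate models (or clusters of clients) per service class.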
Funder
Uppsala Multidisciplinary Center for Advanced Computational Science
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering