Affiliation:
1. Melbourne Institute of Technology (MIT), Melbourne, VIC 3000, Australia
2. Institute of Innovation, Science and Sustainability, Federation University Australia, Ballarat, VIC 3350, Australia
Abstract
With the rapid advancement of the Internet of Things (IoT), network traffic is surging globally. Software-Defined Networks (SDNs) provide a holistic view of the network, facilitate software-based traffic analysis, and are better suited than traditional networks to handling dynamic loads. The standard SDN control plane is designed around either a single controller or multiple distributed controllers; however, a logically centralized single controller faces severe bottleneck issues. Most solutions proposed in the literature rely on static deployment of multiple controllers and do not consider flow fluctuations and traffic bursts, which leads to a lack of real-time load balancing among controllers and, in turn, increased network latency. Some methods for dynamic controller mapping in multi-controller SDNs do consider load fluctuation and latency, but they face controller placement problems. Earlier, we proposed a priority scheduling and congestion control algorithm (eSDN) and dynamic mapping of controllers for dynamic SDN (dSDN) to address these issues. However, future IoT growth is unpredictable and potentially exponential; accommodating it requires an intelligent solution that handles the complexity of a growing population of heterogeneous devices while minimizing network latency. This paper therefore continues our previous research and introduces temporal deep Q-learning in the dSDN controller. The Temporal Deep Q-learning Network (tDQN) is a self-learning, reinforcement-based model: its agent learns to improve switch-controller mapping decisions through a reward-punish scheme, with the goal of minimizing network latency over the iterative learning process. Our approach, tDQN, effectively addresses dynamic flow mapping and latency optimization without increasing the number of optimally placed controllers.
A multi-objective optimization problem for flow fluctuation is formulated to dynamically divert traffic to the best-suited controller. Extensive simulation results across varied network scenarios and traffic loads show that tDQN outperforms traditional networks, eSDN, and dSDN in terms of throughput, delay, jitter, packet delivery ratio, and packet loss.
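The abstract describes an agent that learns switch-controller mappings through a reward-punish scheme that penalizes latency. The paper's actual model is a deep Q-network; as a simplified, illustrative stand-in, the sketch below uses tabular Q-learning over a toy state space. All names, sizes, and the latency model are hypothetical and are not taken from the paper.

```python
import random

# Illustrative, simplified stand-in for the paper's tDQN: a tabular
# Q-learning agent that maps a (toy) switch state to one of several
# controllers. The latency model and all constants are hypothetical.

N_CONTROLLERS = 3
N_STATES = 6                       # toy discretization of switch load states
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {}                             # (state, action) -> estimated value

def latency(state, action):
    """Toy latency model: controller (state % N_CONTROLLERS) is best."""
    return 1.0 if action == state % N_CONTROLLERS else 5.0 + random.random()

def choose(state):
    # Epsilon-greedy action selection over controllers.
    if random.random() < EPSILON:
        return random.randrange(N_CONTROLLERS)
    return max(range(N_CONTROLLERS), key=lambda a: Q.get((state, a), 0.0))

random.seed(0)
for episode in range(2000):
    s = random.randrange(N_STATES)
    a = choose(s)
    r = -latency(s, a)             # reward punishes high latency
    s2 = random.randrange(N_STATES)
    best_next = max(Q.get((s2, b), 0.0) for b in range(N_CONTROLLERS))
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (r + GAMMA * best_next
                                              - Q.get((s, a), 0.0))

# After training, the greedy policy should favor the low-latency controller.
policy = [max(range(N_CONTROLLERS), key=lambda a: Q.get((s, a), 0.0))
          for s in range(N_STATES)]
print(policy)
```

In the full tDQN, the Q-table would be replaced by a neural network over a richer state (per-controller load, flow statistics over time), but the reward-driven mapping loop has the same shape.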
Cited by 3 articles.