Affiliation:
1. College of Computer, National University of Defense Technology, P.R. China
2. College of Information System and Management, National University of Defense Technology, P.R. China
Abstract
The graphics processing unit (GPU) can perform large-scale simulations economically. However, harnessing the power of a GPU for discrete event simulation (DES) is difficult because of the mismatch between the GPU's synchronous execution model and DES's asynchronous time-advance mechanism. In this paper, we present a GPU-based simulation kernel (gDES) for DES and propose three algorithms to improve its efficiency. Because both limited parallelism and redundant synchronization degrade the performance of GPU-based DES, we propose a breadth-expansion conservative time window algorithm that increases the degree of parallelism without increasing the number of synchronizations; the expansion method imports as many 'safe' events as possible. The irregular and dynamic storage requirements of events lead to uneven and sparse memory usage, wasting memory and incurring unnecessary overhead. We propose a memory management algorithm that uses a lightweight stochastic method to store events in a balanced and compact way. When the events processed by threads in a warp have different types, the performance of gDES drops sharply because of branch divergence. We propose an event redistribution algorithm that reassigns events of the same type to neighboring threads, reducing the probability of branch divergence. We evaluate the proposed algorithms and gDES with a series of experiments. Compared with a CPU-based simulator on a multicore platform, gDES achieves up to 11×, 5×, and 8× speedup on PHOLD, QUEUING NETWORK, and epidemic simulations, respectively.
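To illustrate the event redistribution idea, the minimal CUDA/Thrust sketch below regroups pending events by type before a processing kernel runs, so that neighboring threads in a warp are likely to take the same branch. All names here (Event, evt_buf, process_events, redistribute_and_process) are hypothetical illustrations and not taken from the paper; the paper's actual implementation may differ.

// Minimal sketch, assuming hypothetical names: regroup events by type so that
// threads in a warp process the same event type and branch divergence is rare.
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/transform.h>

struct Event {
    int   type;       // event type; warps diverge when types differ across lanes
    float timestamp;  // simulation time of the event
    int   target;     // destination logical process (illustrative field)
};

// Functor that extracts the sort key (the event type).
struct TypeKey {
    __host__ __device__ int operator()(const Event& e) const { return e.type; }
};

// Event-processing kernel: after regrouping, lanes of a warp tend to take the
// same switch branch, so the warp executes one path instead of serializing all.
__global__ void process_events(const Event* events, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    switch (events[i].type) {
        case 0: /* handle, e.g., an arrival event  */ break;
        case 1: /* handle, e.g., a departure event */ break;
        default: break;
    }
}

void redistribute_and_process(thrust::device_vector<Event>& evt_buf) {
    // Build a key array holding each event's type.
    thrust::device_vector<int> keys(evt_buf.size());
    thrust::transform(evt_buf.begin(), evt_buf.end(), keys.begin(), TypeKey());

    // A stable sort by type groups same-type events into contiguous ranges,
    // which maps them onto neighboring threads in the kernel launch below.
    thrust::stable_sort_by_key(keys.begin(), keys.end(), evt_buf.begin());

    int n = static_cast<int>(evt_buf.size());
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    process_events<<<blocks, threads>>>(thrust::raw_pointer_cast(evt_buf.data()), n);
    cudaDeviceSynchronize();
}

A stable sort is used so that equal-type events keep their relative order, preserving any earlier timestamp ordering within each type.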
Subject
Computer Graphics and Computer-Aided Design, Modeling and Simulation, Software