Authors:
Abdul Jabbar Saeed Tipu, Pádraig Ó Conbhuí, Enda Howley
Abstract
Super-computing or HPC clusters are built to execute computationally complex applications. These HPC applications generally involve large-scale IO (input/output) over networked parallel file system disks, and they are commonly developed on top of the C/C++ based MPI standard library. The MPI–IO performance of HPC clusters depends significantly on particular configuration parameter values that are not generally considered when writing the algorithms or programs. This leads to poor IO and degraded overall program performance. IO is mostly left to individual practitioners to optimise at code level, which usually has unexpected consequences, since IO bandwidth degradation becomes inevitable as the file data scales to petabytes in size. To overcome this poor IO performance, this research paper presents an approach for auto-tuning the configuration parameters by forecasting the MPI–IO bandwidth via artificial neural networks (ANNs), a machine learning (ML) technique. These parameters relate to the MPI–IO library and the Lustre parallel file system. In addition, we have identified a number of common configurations, out of numerous possibilities, selected during the auto-tuning process for READ/WRITE operations. These configurations produced an overall READ bandwidth improvement of 65.7%, with almost 83% of test cases improved, and an overall WRITE bandwidth improvement of 83%, with almost 93% of test cases improved. This paper demonstrates that auto-tuning parameters via ANN predictions can significantly improve overall IO bandwidth performance.
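The abstract does not enumerate the tuned parameters by name, so the C sketch below only illustrates the general mechanism an auto-tuner of this kind would drive: passing Lustre striping and collective-buffering settings to MPI–IO as hints before a collective file operation. The hint keys used (striping_factor, striping_unit, cb_nodes, romio_cb_write) are standard ROMIO/Lustre hints, but the hard-coded values are purely illustrative stand-ins for whatever configuration the ANN would predict, and are not taken from the paper.

/* Minimal sketch, not the authors' tuner: applying one candidate
 * MPI-IO/Lustre configuration via hints before a collective write.
 * Hint values here are illustrative; an auto-tuner would substitute
 * the ANN's predicted-best configuration. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Info info;
    MPI_Info_create(&info);
    /* Lustre striping hints take effect only when the file is created. */
    MPI_Info_set(info, "striping_factor", "8");      /* stripe count (OSTs) */
    MPI_Info_set(info, "striping_unit", "1048576");  /* 1 MiB stripe size */
    MPI_Info_set(info, "cb_nodes", "4");             /* collective-buffering aggregators */
    MPI_Info_set(info, "romio_cb_write", "enable");  /* force collective buffering */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Each rank writes a 1 MiB block at a rank-based offset. */
    static char buf[1048576];
    MPI_Offset offset = (MPI_Offset)rank * (MPI_Offset)sizeof(buf);
    MPI_File_write_at_all(fh, offset, buf, sizeof(buf), MPI_CHAR,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}

In a tuner such as the one the paper describes, the hint values would be filled in from the configuration the ANN forecasts to yield the highest bandwidth, rather than hard-coded as above.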
Publisher
Springer Science and Business Media LLC
Subject
Computer Networks and Communications, Software
Cited by
1 article.
1. A2FL: Autonomous and Adaptive File Layout in HPC through Real-time Access Pattern Analysis. In: 2024 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2024-05-27