Abstract
In this chapter, the authors present hierarchical matrices, a powerful numerical tool that reduces both storage requirements and computational time to logarithmic order, in exchange for a controlled loss of accuracy, by compressing part of the original data into low-rank blocks. This type of matrix presents certain particularities: a storage layout with different block configurations and a hierarchically nested block partitioning; the presence of dense and low-rank blocks of various dimensions; and the recursive nature of the algorithms that implement the H-arithmetic operations. Thanks to the OmpSs-2 programming model, and specifically to two novel features it incorporates, a fair task-parallel efficiency can be achieved in shared-memory environments.
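The compression mentioned above can be illustrated with a minimal sketch, not taken from the chapter: a matrix block that is numerically low rank (as arises, e.g., from far-field kernel interactions) is replaced by a truncated SVD factorization. The function name `compress_block` and the tolerance parameter `tol` are hypothetical choices for this example only.

```python
import numpy as np

def compress_block(block, tol=1e-8):
    """Approximate a block by a rank-k factorization U @ V, keeping only
    singular values above a relative tolerance (a sketch, not the chapter's code)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))      # numerical rank at tolerance tol
    return U[:, :k] * s[:k], Vt[:k, :]           # shapes (m, k) and (k, n)

# A smooth Cauchy-like kernel block: numerically low rank despite being 64x64 dense.
x = np.linspace(1.0, 2.0, 64)
block = 1.0 / (x[:, None] + x[None, :])
U, V = compress_block(block)

# The factors store 2*64*k entries instead of 64*64, at a controlled accuracy loss.
err = np.linalg.norm(block - U @ V) / np.linalg.norm(block)
print(U.shape[1], err)   # rank well below 64, relative error below the tolerance scale
```

Storing the factors `U` and `V` instead of the dense block is what yields the memory and time savings described in the abstract; the rank `k` trades accuracy for compression.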