Affiliation:
1. Institute for Mathematics, University of Potsdam, Potsdam, Germany
2. Institute for Mathematics, Martin Luther University of Halle‐Wittenberg, Halle (Saale), Germany
Abstract
Neural networks have emerged as powerful and versatile tools in the field of deep learning. As the complexity of the task increases, so do the size and architectural complexity of the network, making compression techniques a focus of current research. Parameter truncation can provide a significant reduction in memory and computational complexity. Originating from a model order reduction framework, the Discrete Empirical Interpolation Method is applied to the gradient descent training of neural networks and analyzed to identify important parameters. The approach is compared to established truncation methods on various state‐of‐the‐art neural networks. Further metrics, such as the L2 and Cross‐Entropy losses, as well as accuracy and compression rate, are reported.
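The abstract refers to the Discrete Empirical Interpolation Method (DEIM) for selecting important parameters. As background, the classical greedy DEIM index selection (Chaturantabut and Sorensen) can be sketched as follows; this is a generic illustration over an orthonormal basis matrix, not the paper's specific training-time variant, and the function name `deim_indices` is our own:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection for a basis matrix U of shape (n, m).

    Returns m interpolation row indices, one per basis vector.
    """
    n, m = U.shape
    # First index: largest-magnitude entry of the first basis vector.
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the l-th basis vector using the rows selected so far ...
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        # ... and pick the row where the interpolation residual is largest.
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

Applied to the left singular vectors of a parameter matrix, the selected rows indicate where the dominant modes are most informative, which is the sense in which DEIM can flag "important" parameters for truncation.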
Funder
Deutsche Forschungsgemeinschaft
Subject
Electrical and Electronic Engineering; Atomic and Molecular Physics, and Optics
Cited by
1 article.
1. Model Order Reduction Using Cuckoo Search Optimization Technique;2023 International Conference on Ambient Intelligence, Knowledge Informatics and Industrial Electronics (AIKIIE);2023-11-02