Variable Scale Pruning for Transformer Model Compression in End-to-End Speech Recognition
Published: 2023-08-23
Volume: 16
Issue: 9
Page: 398
ISSN: 1999-4893
Container-title: Algorithms
Language: en
Author:
Leila Ben Letaifa 1,2, Jean-Luc Rouas 2
Affiliation:
1. LINEACT, UR-EA 7527, CESI Nancy, 54500 Vandœuvre-lès-Nancy, France
2. LaBRI, CNRS UMR 5800, University of Bordeaux, Bordeaux INP, 33405 Talence, France
Abstract
Transformer models are increasingly used in end-to-end speech recognition systems because of their performance. However, their substantial size makes them difficult to deploy in real-world applications. These models rely heavily on attention and feedforward layers, with the latter containing a vast number of parameters that account for much of the model’s memory footprint. Pruning these layers is therefore a natural way to reduce the model’s size. In this article, our primary focus is on the feedforward layers. We conduct a comprehensive analysis of their parameter count and distribution: we examine the weight distribution within each layer and observe how the weight values evolve across the transformer model’s blocks. Our findings show a correlation between the depth of a feedforward layer and the magnitude of its weights; consequently, layers with higher weight values require less pruning. Building on this insight, we propose a novel pruning algorithm based on variable rates, which sets the pruning rate of each feedforward layer according to its significance and position within the network. To evaluate the new method, we conduct experiments on several datasets. The results show that it outperforms conventional pruning techniques such as local pruning and global pruning.
Funder
European Union’s Horizon 2020 Research and Innovation action
Subject
Computational Mathematics, Computational Theory and Mathematics, Numerical Analysis, Theoretical Computer Science