Publisher
Springer Science and Business Media LLC
References (44 articles)
1. Arthur D, Vassilvitskii S (2006) k-means++: the advantages of careful seeding. Technical report, Stanford University
2. Ayinde BO, Inanc T, Zurada JM (2019) Redundant feature pruning for accelerated inference in deep neural networks. Neural Netw 118:148–158
3. Basha SHS, Farazuddin M, Pulabaigari V, Dubey SR, Mukherjee S (2024) Deep model compression based on the training history. Neurocomputing 573:127257
4. Chung FRK (1997) Spectral graph theory, vol 92. American Mathematical Society
5. Chung KL, Chang YL (2023) An effective backward filter pruning algorithm using K1,n bipartite graph-based clustering and the decreasing pruning rate approach. Authorea Preprints