1. Chen J, Ran X (2019) Deep learning with edge computing: a review. Proc IEEE 107(8):1655–1674
2. Dauphin YN, Bengio Y (2013) Big neural networks waste capacity. In: 1st international conference on learning representations, Scottsdale, AZ, USA
3. Frankle J, Carbin M (2019) The lottery ticket hypothesis: finding sparse, trainable neural networks. In: 7th international conference on learning representations, New Orleans, LA, USA
4. Deng L, Li G, Han S et al (2020) Model compression and hardware acceleration for neural networks: a comprehensive survey. Proc IEEE 108(4):485–532
5. Iandola FN, Han S, Moskewicz MW et al (2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv abs/1602.07360