1. Amik, F.R., et al., 2022. Dynamic rectification knowledge distillation.
2. Bhandare, A., et al., 2019. Efficient 8-bit quantization of transformer neural machine language translation model.
3. Chen, J., Chen, S., Pan, S.J., 2020a. Storage efficient and dynamic flexible runtime channel pruning via deep reinforcement learning. In: International Conference on Neural Information Processing Systems. (NIPS), pp. 14747–14758.
4. Chen, H., Wang, Y., Shu, H., et al., 2020b. Distilling portable generative adversarial networks for image translation. In: Conference on Artificial Intelligence. (AAAI), pp. 3585–3592.
5. Cubuk, E.D., et al., 2018. AutoAugment: learning augmentation policies from data.