1. Tom Bannink et al. 2021. Larq compute engine: Design, benchmark, and deploy state-of-the-art binarized neural networks. MLSys.
2. Yoshua Bengio et al. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
3. Tianen Chen et al. 2022. SynthNet: A High-throughput yet Energy-efficient Combinational Logic Neural Network. ASP-DAC.
4. Xizi Chen et al. 2020. Tight compression: compressing CNN model tightly through unstructured pruning and simulated annealing based permutation. DAC.
5. Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse trainable neural networks. arXiv preprint arXiv:1803.03635.