1. Inoue K, Expressive numbers of two or more hidden layer ReLU neural networks, 2019 Seventh International Symposium on Computing and Networking Workshops (CANDARW 2019), 2019.
2. Hooker S, Erhan D, Kindermans P J, et al., A benchmark for interpretability methods in deep neural networks, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 2019.
3. Fan F, Xiong J, and Wang G, On interpretability of artificial neural networks, arXiv preprint, https://arxiv.org/abs/2001.02522, 2020.
4. Wu S, Dimakis A G, and Sanghavi S, Learning distributions generated by one-layer ReLU networks, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 2019.
5. Croce F, Andriushchenko M, and Hein M, Provable robustness of ReLU networks via maximization of linear regions, Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019), Naha, Okinawa, Japan, 2019.