1. Colby Banbury, Vijay Janapa Reddi, Peter Torelli, Jeremy Holleman, Nat Jeffries, Csaba Kiraly, Pietro Montino, David Kanter, Sebastian Ahmed, Danilo Pau, et al. 2021. MLPerf Tiny benchmark. arXiv preprint arXiv:2106.07597 (2021).
2. Zhi Chen, Cody Hao Yu, Trevor Morris, Jorn Tuyls, Yi-Hsiang Lai, Jared Roesch, Elliott Delaye, Vin Sharma, and Yida Wang. 2021. Bring your own codegen to deep learning compiler. arXiv preprint arXiv:2105.03215 (2021).
3. Animesh Jain. [n. d.]. Convert Layout Pass. https://tvm.apache.org/docs/arch/convert_layout.html
4. Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, and Yida Wang. 2020. Efficient execution of quantized deep learning models: A compiler approach. arXiv preprint arXiv:2006.10226 (2020).
5. M. J. Klaiber, P. P. Bernardo, and C. Gerum. 2022. Making your Hardware Accelerator TVM-ready with UMA. https://tvm.apache.org/docs/tutorial/uma.html