1. Anders, C.J., Weber, L., Neumann, D., Samek, W., Müller, K.R., Lapuschkin, S.: Finding and removing Clever Hans: using explanation methods to debug and improve deep models. Information Fusion 77, 261–295 (2022)
2. Bao, H., Dong, L., Wei, F.: BEiT: BERT pre-training of image transformers. arXiv preprint arXiv:2106.08254 (2021)
3. Bykov, K., Deb, M., Grinwald, D., Müller, K.R., Höhne, M.M.: DORA: exploring outlier representations in deep neural networks. Trans. Mach. Learn. Res. (2023). https://openreview.net/forum?id=nfYwRIezvg
4. Da, J.: A corpus-based study of character and bigram frequencies in Chinese e-texts and its implications for Chinese language instruction. In: Proceedings of the Fourth International Conference on New Technologies in Teaching and Learning Chinese, pp. 501–511. Citeseer (2004)
5. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE (2009)