Publisher
Springer Nature Switzerland
Reference27 articles.
1. Arik, S.Ö., Pfister, T.: Tabnet: attentive interpretable tabular learning. ArXiv arxiv:1908.07442 (2019)
2. Baldi, P., Sadowski, P., Whiteson, D.: Searching for exotic particles in high-energy physics with deep learning. Nat. Commun. 5, 4308 (2014)
3. Borisov, V., Leemann, T., Seßler, K., Haug, J., Pawelczyk, M., Kasneci, G.: Deep neural networks and tabular data: a survey. IEEE Trans. Neural Netw. Learn. Syst. 35, 7499–7519 (2022). https://doi.org/10.1109/TNNLS.2022.3229161
4. Chefer, H., Gur, S., Wolf, L.: Transformer interpretability beyond attention visualization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 782–791 (2021)
5. Chen, J., Liao, K.Y., Wan, Y., Chen, D.Z., Wu, J.: Danets: deep abstract networks for tabular data classification and regression. ArXiv arxiv:2112.02962 (2021)