Affiliation:
1. Institute of Computing Technology, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China
Funders:
1. National Natural Science Foundation of China under Grant
2. Project of Youth Innovation Promotion Association CAS
3. Beijing Nova Program
Cited by: 9 articles.