Publisher
Association for Natural Language Processing
References (9 articles)
1. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of NAACL, pp. 4171–4186.
2. Gururangan, S., Swayamdipta, S., Levy, O., Schwartz, R., Bowman, S., and Smith, N. A. (2018). “Annotation Artifacts in Natural Language Inference Data.” In Proceedings of NAACL, pp. 107–112.
3. Inoue, N., Stenetorp, P., and Inui, K. (2020). “R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason.” In Proceedings of ACL, to appear.
4. Kavumba, P., Inoue, N., Heinzerling, B., Singh, K., Reisert, P., and Inui, K. (2019). “When Choosing Plausible Alternatives, Clever Hans can be Clever.” In Proceedings of COIN, pp. 33–42.
5. Mudrakarta, P. K., Taly, A., Sundararajan, M., and Dhamdhere, K. (2018). “Did the Model Understand the Question?” In Proceedings of ACL, pp. 1896–1906.
Cited by 1 article.