Author:
Laskar, Md Tahmid Rahman; Hoque, Enamul; Huang, Jimmy
Publisher:
Springer International Publishing
References: 13 articles.
1. Baumel, T., et al.: Query focused abstractive summarization: incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704 (2018)
2. Devlin, J., et al.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT, pp. 4171–4186 (2019)
3. Lewis, M., et al.: BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019)
4. Liu, Y., Lapata, M.: Text summarization with pretrained encoders. In: Proceedings of EMNLP-IJCNLP, pp. 3721–3731 (2019)
5. Liu, Y., et al.: ARSA: a sentiment-aware model for predicting sales performance using blogs. In: Proceedings of SIGIR, pp. 607–614 (2007)
Cited by: 21 articles.