1. T. Shi, Y. Keneshloo, N. Ramakrishnan, C.K. Reddy, Neural abstractive text summarization with sequence-to-sequence models. arXiv preprint arXiv:1812.02303 (2018)
2. D.K. Gaikwad, C. Namrata Mahender, A review paper on text summarization. Int. J. Adv. Res. Comput. Commun. Eng. 5(3), 154–160 (2016)
3. M.-T. Luong, Q.V. Le, I. Sutskever, O. Vinyals, L. Kaiser, Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 (2015)
4. J. Pennington, R. Socher, C.D. Manning, GloVe: global vectors for word representation, in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2014), pp. 1532–1543
5. T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)