1. R. Mihalcea, P. Tarau, in Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. TextRank: Bringing order into text (Association for Computational Linguistics, Barcelona, Spain, 2004), pp. 404–411. https://www.aclweb.org/anthology/W04-3252.
2. M. Yasunaga, R. Zhang, K. Meelu, A. Pareek, D. Radev, Graph-based neural multi-document summarization. CoRR abs/1706.06681 (2017). http://arxiv.org/abs/1706.06681.
3. A. M. Rush, S. Chopra, J. Weston, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. A neural attention model for abstractive sentence summarization (Association for Computational Linguistics, Lisbon, Portugal, 2015), pp. 379–389. https://www.aclweb.org/anthology/D15-1044. https://doi.org/10.18653/v1/D15-1044.
4. R. Nallapati, B. Xiang, B. Zhou, Sequence-to-sequence RNNs for text summarization. CoRR abs/1602.06023 (2016). http://arxiv.org/abs/1602.06023.
5. I. Sutskever, O. Vinyals, Q. V. Le, in Advances in Neural Information Processing Systems 27. Sequence to sequence learning with neural networks (Curran Associates, Inc., 2014), pp. 3104–3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf.