Abstract
The high demand for information generated by the information age has led to recent breakthroughs in both extractive and abstractive text summarisation. This work explores the algorithms produced by these advances, focusing on the domain of sports news summarisation. By creating a new hybrid evaluation system that combines automatic evaluation metrics, such as ROUGE and BLEU scores, with human evaluation, we observe that abstractive techniques return the best results in the sports domain. This finding also generalises to the domain of political articles, although there the metrics report lower scores across most algorithms. We also find that the algorithms considered perform independently of the dialect of English used.
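To make the hybrid evaluation concrete, the sketch below illustrates how automatic metrics might be blended with a human judgement. It is not the authors' implementation: the equal weighting, the function name, and the use of the rouge-score and nltk packages are assumptions for illustration only.

```python
# A minimal sketch of a hybrid summary score combining ROUGE-L, BLEU,
# and a normalised human rating. Assumes `pip install rouge-score nltk`.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def hybrid_score(reference: str, candidate: str, human_rating: float) -> float:
    """Blend ROUGE-L F1, sentence-level BLEU, and a human rating in [0, 1]."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure
    bleu = sentence_bleu(
        [reference.split()], candidate.split(),
        smoothing_function=SmoothingFunction().method1,  # smooth short texts
    )
    # Equal weighting is an illustrative assumption, not the paper's scheme.
    return (rouge_l + bleu + human_rating) / 3.0

print(hybrid_score(
    "the striker scored twice in the second half",
    "the striker scored two goals after half time",
    human_rating=0.8,
))
```

In practice the weights given to each component would be tuned to reflect how strongly human judgement should dominate the automatic metrics.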
Publisher: Springer Nature Switzerland
References (17 articles)
1. Saggion, H., Poibeau, T.: Multi-source, Multilingual Information Extraction and Summarization, pp. 3–13. Springer (2012). https://doi.org/10.1007/978-3-642-28569-1
2. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the ACL (2002)
3. Lin, C.-Y.: ROUGE: a package for automatic evaluation of summaries. In: Proceedings of ACL Workshop on Text Summarization Branches Out, vol. 10 (2004)
4. Luhn, H.P.: The automatic creation of literature abstracts. IBM J. Res. Dev. 2, 159–165 (1958)
5. Huang, D., et al.: What have we achieved on text summarization? In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)