Authors:
David Beauchemin, Horacio Saggion, Richard Khoury
Abstract
In the field of automatic text simplification, assessing whether the meaning of the original text has been preserved during simplification is of paramount importance. Metrics relying on n-gram overlap may struggle with simplifications that replace complex phrases with simpler paraphrases. Evaluation metrics for meaning preservation based on large language models (LLMs), such as BERTScore in machine translation and QuestEval in summarization, have been proposed; however, none correlates strongly with human judgments of meaning preservation, and none has been assessed in the context of text simplification research. In this study, we present a meta-evaluation of several metrics applied to measure content similarity in text simplification. We also show that these metrics fail two trivial, inexpensive tests of content preservation. Our second contribution is MeaningBERT (https://github.com/GRAAL-Research/MeaningBERT), a new trainable metric designed to assess meaning preservation between two sentences in text simplification, and we show how it correlates with human judgment. To demonstrate its quality and versatility, we also present a compilation of datasets used to assess meaning preservation and benchmark our metric against a large selection of popular metrics.
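As a concrete illustration, the following is a minimal sketch of how MeaningBERT could be used to score a sentence pair with the Hugging Face transformers library. The checkpoint name "davebulaval/MeaningBERT", the example sentences, and the single-output regression-head usage are assumptions made for illustration; consult the repository README linked above for the documented interface.

    # Minimal sketch: scoring meaning preservation between a source sentence
    # and its simplification. Checkpoint name is an assumption; see the
    # MeaningBERT repository README for the documented usage.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    checkpoint = "davebulaval/MeaningBERT"  # assumed Hugging Face Hub name
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

    source = "The committee deliberated at length before reaching a verdict."
    simplified = "The committee talked for a long time before deciding."

    # Encode the sentence pair; the model predicts a single
    # meaning-preservation score from its regression head.
    inputs = tokenizer(source, simplified, return_tensors="pt", truncation=True)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().item()
    print(f"Meaning preservation score: {score:.1f}")
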
Funders
Natural Sciences and Engineering Research Council of Canada
Fonds de recherche du Québec - Nature et Technologies