Large language models "ad referendum": How good are they at machine translation in the legal domain?
Published: 2024-05-31
Issue: 16
Volume:
Pages: 75-107
ISSN: 1989-9335
Container-title: MonTI. Monografías de Traducción e Interpretación
Language:
Short-container-title: MonTI
Authors: Vicent Briva-Iglesias, Gokhan Dogru, João Lucas Cavalheiro Camargo
Abstract
This study evaluates the machine translation (MT) quality of two state-of-the-art large language models (LLMs) against a traditional neural machine translation (NMT) system across four language pairs in the legal domain. It combines automatic evaluation metrics (AEMs) and human evaluation (HE) by professional translators to assess translation ranking, fluency, and adequacy. The results indicate that while Google Translate generally outperforms LLMs on AEMs, human evaluators rate LLMs, especially GPT-4, as comparable to or slightly better than the NMT system at producing contextually adequate and fluent translations. This discrepancy suggests LLMs' potential in handling specialized legal terminology and context, and highlights the importance of human evaluation methods in assessing MT quality. The study underscores the evolving capabilities of LLMs in specialized domains and calls for a reevaluation of traditional AEMs to better capture the nuances of LLM-generated translations.
Publisher
Universitat Jaume I