Abstract
Transformer-based language models (TLMs) have been widely recognized as a cutting-edge technology for developing deep-learning-based solutions to problems and applications that require natural language processing and understanding. As in other textual domains, TLMs have pushed the state of the art of AI approaches for many tasks of interest in the legal domain. Although the first Transformer model was proposed only about six years ago, this technology has progressed at an unprecedented rate, with BERT and related models serving as a major reference, also in the legal domain. This article provides the first systematic overview of TLM-based methods for AI-driven problems and tasks in the legal sphere. A major goal is to highlight research advances in this field so as to understand, on the one hand, how Transformers have contributed to the success of AI in supporting legal processes, and on the other hand, what the current limitations and opportunities for further research development are.
Funder
Università della Calabria
Publisher
Springer Science and Business Media LLC
Subject
Law, Artificial Intelligence
Cited by
4 articles.
1. Classifying proportionality - identification of a legal argument;Artificial Intelligence and Law;2024-08-03
2. Advancing Faithfulness of Large Language Models in Goal-Oriented Dialogue Question Answering;ACM Conversational User Interfaces 2024;2024-07-08
3. FORMATION OF HIGHLY SPECIALIZED CHATBOTS FOR ADVANCED SEARCH;Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska;2024-03-31
4. Optimization of Natural Language Processing Models for Multilingual Legal Document Analysis;2024 Third International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS);2024-03-14