Authors:
Gerhard Paaß, Sven Giesselbach
Abstract
This chapter presents the main architecture types of attention-based language models, which describe the distribution of tokens in texts. Autoencoders, such as BERT, receive an input text and produce a contextual embedding for each token. Autoregressive language models, such as GPT, receive a subsequence of tokens as input; they produce a contextual embedding for each token and predict the next token, so that all tokens of a text can be generated successively. Transformer encoder-decoders translate an input sequence into another sequence, e.g. for language translation: they first generate a contextual embedding for each input token with an autoencoder, and these embeddings then serve as input to an autoregressive language model, which sequentially generates the output tokens. These models are usually pre-trained on a large general training set and often fine-tuned for a specific task; they are therefore collectively called Pre-trained Language Models (PLMs). When the number of parameters of such models grows large, they can often be instructed by prompts and are then called Foundation Models. Further sections describe the optimization and regularization methods used for training. Finally, we analyze the uncertainty of model predictions and how predictions may be explained.
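All three architecture types described above are built on self-attention, which turns static token embeddings into contextual ones. The following is a minimal pure-Python sketch of single-head scaled dot-product self-attention; it is an illustrative simplification, not the full mechanism of BERT or GPT (the learned query/key/value projections, multiple heads, feed-forward layers, and layer normalization are all omitted). The `causal` flag hints at the difference between the two cases: without it, every token attends to the whole input (BERT-style autoencoder); with it, token *i* may only attend to positions ≤ *i*, as required for autoregressive, GPT-style next-token prediction.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(embeddings, causal=False):
    """Single-head scaled dot-product self-attention (toy version).

    Each output vector is a softmax-weighted mixture of the input
    vectors, i.e. a 'contextual' embedding of the corresponding token.
    With causal=True, position i attends only to positions <= i.
    """
    d = len(embeddings[0])
    out = []
    for i, q in enumerate(embeddings):
        # Attention scores: scaled dot products of query i with all keys.
        scores = []
        for j, k in enumerate(embeddings):
            if causal and j > i:
                scores.append(float("-inf"))  # mask out future tokens
            else:
                scores.append(sum(a * b for a, b in zip(q, k)) / math.sqrt(d))
        weights = softmax(scores)
        # Contextual embedding: weighted sum of all (visible) value vectors.
        out.append([sum(w * v[t] for w, v in zip(weights, embeddings))
                    for t in range(d)])
    return out
```

With `causal=True`, the first token can only attend to itself, so its contextual embedding equals its input embedding; later tokens mix in progressively more context. This is what allows an autoregressive model to be trained on all positions of a text in parallel while still predicting each next token from its left context only.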
Publisher
Springer International Publishing
References: 166 articles.
Cited by: 1 article.