Affiliation:
1. Tecnologico Nacional de Mexico, Culiacan, Mexico
Abstract
Natural language processing (NLP) is one of the oldest fields of study in artificial intelligence. NLP has made remarkable advances in recent years thanks to new machine learning techniques, particularly deep learning methods such as long short-term memory (LSTM) networks and transformers. This chapter presents an overview of how deep learning techniques have been applied to NLP in the area of affective computing. It examines traditional and novel deep learning architectures developed for NLP tasks, including recurrent neural networks (RNNs), LSTM networks, and transformers. A methodology for training and fine-tuning NLP models is also presented. The chapter includes Python code demonstrating two NLP case studies in the educational domain: text classification and sentiment analysis. In both cases, a transformer-based model (BERT) produced the best results.