Affiliation:
1. Data Science & Artificial Intelligence, University of Hassan II Casablanca, Casablanca 20000, Morocco
2. Teaching, Languages and Cultures Laboratory Mohammedia, University of Hassan II Casablanca, Casablanca 20000, Morocco
Abstract
Word embeddings/representations are an important component of Natural Language Processing (NLP) tasks. Most Neural Machine Translation (NMT) systems that use such representations disregard word morphology by assigning a unique vector to each word in the vocabulary and thus cannot handle Out-Of-Vocabulary (OOV) words. In some languages, such as Arabic, the meaning of a word is tied to the meanings of the individual characters that constitute it, as these characters carry internal information. In this study, a combination of character- and word-level models is used to determine the most effective approaches to semantically and morphologically representing affective Arabic words. Furthermore, this work examines the strategy of combining static, character-level, and contextual word embeddings to obtain richer representations for the Arabic Machine Translation (MT) task. To the best of our knowledge, this is the first work to investigate the combination of static word embeddings, contextual embeddings, and character-level representations in Arabic MT. In addition, a Deep Learning (DL) architecture is employed on data preprocessed with several prominent preprocessing techniques. Various experiments were conducted, and the findings indicate that integrating multiple word-embedding and character-level representation models is feasible and outperforms state-of-the-art Arabic MT systems.
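The combination strategy described in the abstract can be illustrated with a minimal sketch: a word's final representation is built by joining a static embedding, a character-level representation, and a contextual embedding into one vector. Concatenation, the function name, and all dimensions below are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def combine_embeddings(static_vec, char_vec, contextual_vec):
    """Join three representations of the same word into one richer vector.

    Concatenation is one plausible combination operator; the paper may
    use a different fusion scheme (e.g., weighted sum or gating).
    """
    return np.concatenate([static_vec, char_vec, contextual_vec])

# Toy vectors standing in for, e.g., a static word2vec embedding, a
# char-CNN output, and a contextual encoder state (sizes illustrative).
static_vec = np.zeros(300)      # static word embedding
char_vec = np.zeros(50)         # character-level representation
contextual_vec = np.zeros(768)  # contextual embedding

combined = combine_embeddings(static_vec, char_vec, contextual_vec)
print(combined.shape)  # (1118,)
```

A character-level component of this kind is what lets such a system produce a representation even for OOV words, since every word can still be decomposed into known characters.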
Publisher
World Scientific Pub Co Pte Ltd
Cited by
1 article.