Affiliation:
1. University of Groningen
2. The Alan Turing Institute
3. King’s College London
Abstract
Computational methods have produced meaningful and usable results for the study of word semantics, including semantic change. These methods, belonging to the field of Natural Language Processing, have recently been applied to ancient languages; in particular, language modelling has been applied to Ancient Greek, the language on which we focus. In this contribution we explain how vector representations can be computed from word co-occurrences in a corpus and used to locate words in a semantic space, and what kind of semantic information can be extracted from language models. We compare three kinds of language models that can be used to study Ancient Greek semantics: a count-based model, a word embedding model and a syntactic embedding model, and we show examples of how the quality of their representations can be assessed. We highlight the advantages and potential of these methods, especially for the study of semantic change, together with their limitations.
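The abstract mentions computing vector representations from word co-occurrences in a corpus and using them to locate words in a semantic space. The sketch below illustrates the general idea behind such a count-based model; the toy corpus, window size, and function names are illustrative assumptions, not the authors' data, settings, or code.

```python
# A minimal sketch of a count-based distributional model (assumed example,
# not the paper's implementation): word vectors are co-occurrence counts
# collected within a fixed context window, and words are compared by
# cosine similarity in the resulting semantic space.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    ["the", "ship", "sails", "the", "sea"],
    ["the", "ship", "crosses", "the", "sea"],
    ["the", "scholar", "reads", "the", "text"],
]
window = 2  # symmetric context window size (assumed value)

# Count how often each target word co-occurs with each context word.
cooc = defaultdict(Counter)
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                cooc[word][sentence[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words that share contexts end up close together in the semantic space.
print(cosine(cooc["sails"], cooc["crosses"]))  # high: nearly identical contexts
print(cosine(cooc["sails"], cooc["reads"]))    # lower: different contexts
```

In a realistic setting the raw counts would be weighted (e.g. with PPMI) and possibly reduced in dimensionality, but the underlying principle of locating words by their co-occurrence profiles is the same.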
Publisher
John Benjamins Publishing Company