Affiliations:
1. Department of Chinese Language and Literature, Tsinghua University, Beijing 100084, People's Republic of China
2. Research Center for Language and Language Education, Central China Normal University, Wuhan 430079, People's Republic of China
Abstract
Ancient Chinese is a splendid treasure of Chinese culture. To facilitate its compilation, pre-trained language models for ancient Chinese have been developed, and researchers are now actively exploring the factors that contribute to their success. However, previous work has not studied, from a holistic perspective, how these language models organize the elements of ancient Chinese. Hence, we adopt complex networks to explore how language models organize the elements of the ancient Chinese language system. Specifically, we first analyse the characters’ and words’ co-occurrence networks in ancient Chinese. Then, we study the characters’ and words’ attention networks generated by attention heads within SikuBERT from two aspects: static and dynamic network analysis. In the static network analysis, we find that (i) most attention networks exhibit small-world properties and scale-free behaviour, (ii) over 80% of the attention networks show high similarity with the corresponding co-occurrence networks, (iii) there is a noticeable gap between the characters’ and words’ attention networks across layers, while their fluctuations remain relatively consistent, and (iv) the attention networks generated by SikuBERT tend to be sparser than those from Chinese BERT. In the dynamic network analysis, we find that the sentence segmentation task does not significantly affect network metrics, whereas the part-of-speech tagging task makes the attention networks sparser.
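To give a concrete sense of the kind of analysis described above, the sketch below shows one plausible way to turn a SikuBERT attention head into a network and compute basic small-world diagnostics with networkx. It is an illustrative sketch, not the authors' exact pipeline: the Hugging Face model identifier SIKU-BERT/sikubert, the example sentence, the chosen layer, head and edge threshold are all assumptions made for demonstration.

```python
# Illustrative sketch: build an attention network for one SikuBERT head
# and probe small-world-style metrics. Model id, sentence, layer/head
# indices and the 0.05 edge threshold are placeholder assumptions.
import torch
import networkx as nx
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "SIKU-BERT/sikubert"  # assumed model identifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, output_attentions=True)
model.eval()

text = "天命之謂性，率性之謂道，修道之謂教。"  # example classical Chinese sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: tuple of num_layers tensors, each (batch, heads, seq, seq)
layer, head, threshold = 6, 3, 0.05
attn = outputs.attentions[layer][0, head]  # (seq, seq) attention weights
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Turn the attention matrix into an undirected graph: keep an edge between
# two tokens if attention in either direction exceeds the threshold.
G = nx.Graph()
G.add_nodes_from(range(len(tokens)))
for i in range(len(tokens)):
    for j in range(i + 1, len(tokens)):
        w = float(torch.max(attn[i, j], attn[j, i]))
        if w > threshold:
            G.add_edge(i, j, weight=w)

# Small-world-style diagnostics on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("clustering coefficient:", nx.average_clustering(giant))
print("average path length:", nx.average_shortest_path_length(giant))
print("degree sequence:", sorted((d for _, d in giant.degree()), reverse=True))
```

Symmetrizing the attention matrix by taking the larger of the two directed weights is only one of several possible conventions; the paper's own construction of attention and co-occurrence networks may differ in thresholding, weighting and directionality.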