Affiliation:
1. Fudan University, Shanghai, China
Abstract
We propose an unsupervised word segmentation model in which, for each unlabelled sentence, the learning objective is to maximize the generation probability of the sentence over all of its possible segmentations. This generation probability can be factorized recursively into the likelihood of each possible segment given its context. To capture both long- and short-range dependencies, we use a bi-directional neural language model to better extract features of a segment's context. We also develop two decoding algorithms that combine the context features from both directions at inference time to produce the final segmentation, which helps reconcile word-boundary ambiguities. Experimental results show that our context-sensitive unsupervised segmentation model achieves state-of-the-art performance under different evaluation settings on various Chinese datasets, and comparable results for Thai.
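As a reading aid, the following is a minimal sketch of the kind of factorization the abstract describes, assuming a maximum segment length K and left-to-right context only; the notation (forward score alpha_t, segment length bound K) is our own and not taken from the paper:

```latex
% Marginal generation probability of a sentence x_{1:T} over all
% segmentations S(x), each segment w_i scored given its left context:
%   P(x) = \sum_{s \in S(x)} \prod_{i} p(w_i \mid w_{<i})
% Assuming a maximum segment length K, this marginal admits a
% forward recursion (our notation, not the paper's):
\[
  \alpha_t \;=\; \sum_{k=1}^{\min(K,\,t)} \alpha_{t-k}\,
      p\!\left(x_{t-k+1:t} \,\middle|\, x_{1:t-k}\right),
  \qquad \alpha_0 = 1, \qquad P(x_{1:T}) = \alpha_T .
\]
```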
Publisher
Association for Computing Machinery (ACM)
References: 35 articles.
1. Yoshua Bengio. 2003. A Neural Probabilistic Language Model. J. Mach. Learn. Res.
2. A Joint Model for Unsupervised Chinese Word Segmentation
3. Adversarial Multi-Criteria Learning for Chinese Word Segmentation
4. Kyunghyun Cho. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv preprint arXiv:1406.1078.
5. Thomas Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceedings of the 4th SIGHAN Workshop on Chinese Language Processing.
Cited by: 2 articles.