Affiliation:
1. Dpto. Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Cno. de Vera s/n, 46071 Valencia, Spain
Abstract
Stochastic Grammars are the most usual models in Syntactic Pattern Recognition. Both components of a Stochastic Grammar, the characteristic grammar and the probabilities attached to its rules, can be learnt automatically from training samples. In this paper, a review of algorithms to infer the probabilistic component of Stochastic Regular and Context-Free Grammars is first presented, under the framework of Growth Transformations. With Stochastic Grammars, patterns must be represented as strings over a finite set of symbols. However, in many Syntactic Pattern Recognition applications (e.g. speech), the most natural representation is as sequences of vectors from a feature vector space, that is, a continuous representation. Obtaining a discrete representation of such patterns therefore introduces quantization errors. To avoid this drawback, a formal semi-continuous extension of Stochastic Regular and Context-Free Grammars is studied and probabilistic estimation algorithms for it are developed in this paper. In this extension, Stochastic Grammars can process sequences of vectors instead of strings of symbols.
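The probabilistic estimation the abstract refers to reduces, in the fully supervised case (derivations observed), to relative-frequency counting of rule occurrences normalized per left-hand nonterminal; this is the fixed point that Growth-Transformation re-estimation converges to when derivations are visible. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
from collections import Counter, defaultdict

def estimate_rule_probabilities(derivations):
    """Relative-frequency estimation of stochastic grammar rule
    probabilities from fully observed derivations.

    Each derivation is a list of (lhs, rhs) rule applications.
    Returns P(lhs -> rhs) = count(lhs -> rhs) / count(lhs)."""
    rule_counts = Counter()
    lhs_totals = defaultdict(int)
    for derivation in derivations:
        for lhs, rhs in derivation:
            rule_counts[(lhs, rhs)] += 1
            lhs_totals[lhs] += 1
    return {rule: n / lhs_totals[rule[0]] for rule, n in rule_counts.items()}

# Toy treebank for a grammar with rules S -> a S and S -> a
derivs = [
    [("S", ("a", "S")), ("S", ("a",))],  # derives "aa"
    [("S", ("a",))],                      # derives "a"
]
probs = estimate_rule_probabilities(derivs)
# S -> a S used 1 of 3 times; S -> a used 2 of 3 times
```

When derivations are hidden (only the strings are observed), the counts above are replaced by expected counts computed over all derivations of each string, which is where the Growth-Transformation framework of the paper applies.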
Publisher
World Scientific Pub Co Pte Lt
Subject
Artificial Intelligence,Computer Vision and Pattern Recognition,Software
Cited by
14 articles.