Abstract
The paper describes dimensionality reduction methods widely used in artificial intelligence in general, and in computational linguistics in particular, namely Non-negative Matrix Factorization and Singular Value Decomposition, from the point of view of their use in Latent Semantic Analysis and Principal Component Analysis. The advantages and disadvantages of each method are given. Their computational complexity is investigated, and their performance on dense and sparse matrices of different sizes is compared. It is proposed to apply these methods to reduce the dimensionality of multidimensional linguistic data arrays as well.
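A minimal sketch of the kind of comparison the abstract refers to, not the authors' benchmark code: it applies SVD-based reduction (as in LSA) and NMF to a random sparse non-negative matrix standing in for a term-document matrix, assuming scikit-learn and SciPy are available; matrix sizes, density, and the number of components are illustrative.

```python
import time
import scipy.sparse as sp
from sklearn.decomposition import TruncatedSVD, NMF

# Random non-negative sparse matrix standing in for a term-document matrix.
X = sp.random(2000, 1000, density=0.01, format='csr', random_state=0)

k = 50  # target dimensionality

t0 = time.perf_counter()
svd = TruncatedSVD(n_components=k, random_state=0)
X_svd = svd.fit_transform(X)   # SVD-based reduction, as used in LSA
t_svd = time.perf_counter() - t0

t0 = time.perf_counter()
nmf = NMF(n_components=k, init='nndsvd', max_iter=200, random_state=0)
X_nmf = nmf.fit_transform(X)   # non-negative factor W; H is nmf.components_
t_nmf = time.perf_counter() - t0

print(f"TruncatedSVD: reduced shape {X_svd.shape}, {t_svd:.2f} s")
print(f"NMF:          reduced shape {X_nmf.shape}, {t_nmf:.2f} s")
```

Both calls return an n_samples x k representation; repeating the measurement over dense and sparse inputs of varying size gives the sort of performance comparison described above.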
Publisher
National Academy of Sciences of Ukraine (Co. LTD Ukrinformnauka) (Publications)