Authors:
Mari Vállez, Rafael Pedraza-Jiménez, Lluís Codina, Saúl Blanco, Cristòfol Rovira
Abstract
Purpose
– Controlled vocabularies play an important role in information retrieval. Numerous studies have shown that conceptual searches based on vocabularies are more effective than keyword searches, at least in certain contexts. Consequently, new ways must be found to improve controlled vocabularies. The purpose of this paper is to present a semi-automatic model for updating controlled vocabularies through the use of a text corpus and the analysis of query logs.
Design/methodology/approach
– An experimental development is presented in which, first, the suitability of a controlled vocabulary to a text corpus is examined. The keywords entered by users to access the text corpus are then compared with the descriptors used to index it. Finally, both the query logs and text corpus are processed to obtain a set of candidate terms to update the controlled vocabulary.
Findings
– This paper describes a model applicable both to the text corpus of an online academic journal and to repositories and intranets. The model is able to: first, identify the queries that led users from a search engine to a relevant document; and second, process these queries to identify candidate terms for inclusion in a controlled vocabulary.
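The two-step process described above can be illustrated with a minimal sketch. Note that this is an assumption-laden illustration, not the authors' implementation: the query-log representation, the relevance flag, and the frequency threshold are all hypothetical simplifications of the paper's semi-automatic model.

```python
from collections import Counter

def candidate_terms(query_log, descriptors, min_frequency=2):
    """Return query terms that repeatedly led users to relevant documents
    but are missing from the controlled vocabulary (the descriptors).

    query_log: iterable of (query_string, reached_relevant_doc) pairs --
    a hypothetical stand-in for parsed search-engine referrer logs.
    """
    counts = Counter()
    for query, reached_relevant_doc in query_log:
        # Step 1: keep only queries that led users to a relevant document.
        if not reached_relevant_doc:
            continue
        # Step 2: count terms not already covered by the vocabulary.
        for term in query.lower().split():
            if term not in descriptors:
                counts[term] += 1
    # Terms seen often enough become candidates for inclusion.
    return [term for term, n in counts.items() if n >= min_frequency]

# Toy example with invented queries and a tiny vocabulary.
log = [
    ("controlled vocabulary indexing", True),
    ("folksonomy tagging", True),
    ("folksonomy benefits", True),
    ("random noise", False),
]
vocab = {"controlled", "vocabulary", "indexing"}
print(candidate_terms(log, vocab))  # → ['folksonomy']
```

In practice the paper's model also analyses the text corpus itself before proposing candidates; this sketch covers only the query-log side of the comparison.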
Research limitations/implications
– Ideally, the model should be used in controlled web environments, such as repositories, intranets or academic journals.
Social implications
– The proposed model directly improves the indexing process by facilitating the maintenance and updating of controlled vocabularies. In so doing, it helps to optimise access to information.
Originality/value
– The proposed model takes into account the perspective of users by mining queries in order to propose candidate terms for inclusion in a controlled vocabulary.
Subject
Library and Information Sciences, Computer Science Applications, Information Systems
References: 44 articles.
Cited by: 7 articles.