Abstract
Over several decades many ranking techniques have been proposed as aids to journal selection by libraries. We review those closely related to physics and others with novel features. There are three main methods of ranking: citation analysis, use or user judgement, and size or ‘productivity’. Citations offer an ‘unobtrusive’ quantitative measure, but the absolute value of a citation is in question, and there is no consensus on a ‘correct’ way to choose the citing journals or on the ranking parameter. Citations can, however, point out anomalies and show the changing status of journals over the years. Use and user judgement also employ several alternative methods. These are in the main of limited applicability outside the specific user group in question. There is greater ‘parochialism’ in ‘use’ ranking than in ‘judged value’ lists, with citation lists the most international. In some cases, the attempted ‘quantification’ of subjective judgement will be misleading. Size and productivity rankings are normally concerned with one or other formulation of the Bradford distribution. Since the distribution is not universally valid, for library use the librarian must verify that the collection conforms to the distribution, or that its users would be well served by one that did. This may require considerable effort, and the statistics so gained will then render the Bradford distribution redundant.
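The Bradford distribution mentioned above divides the journals relevant to a subject, ranked by productivity, into zones that each contribute roughly equal numbers of articles, with the zone sizes growing in a ratio of approximately 1 : n : n². A minimal sketch of the conformity check the abstract alludes to, assuming a hypothetical helper `bradford_zones` and synthetic article counts (none of this is from the paper):

```python
# Hypothetical sketch: split journals, sorted by productivity, into zones
# that each hold roughly an equal share of total articles. If a collection
# follows Bradford's law, the zone sizes grow geometrically (1 : n : n^2).

def bradford_zones(article_counts, zones=3):
    """Return the number of journals in each of `zones` zones,
    where each zone accounts for ~1/zones of all articles."""
    counts = sorted(article_counts, reverse=True)
    total = sum(counts)
    target = total / zones
    sizes, acc, size = [], 0, 0
    for c in counts:
        acc += c
        size += 1
        # Close a zone once its cumulative share of articles is reached.
        if acc >= target * (len(sizes) + 1) and len(sizes) < zones - 1:
            sizes.append(size)
            size = 0
    sizes.append(size)
    return sizes

# Synthetic example: 1 journal with 9 articles, 3 with 3 each, 9 with 1 each.
# Each zone contributes 9 articles; zone sizes come out 1 : 3 : 9 (n = 3).
print(bradford_zones([9, 3, 3, 3] + [1] * 9))  # → [1, 3, 9]
```

A librarian could run such a check on holdings statistics, but as the abstract notes, gathering those statistics already answers the selection question directly, making the Bradford fit redundant.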
Subject
Library and Information Sciences, Information Systems
Cited by
51 articles.