Information Research, Vol. 6 No. 2, January 2001
Bibliometrics is a complex (and controversial) field that I usually try to avoid, and I certainly could not claim any familiarity with the range of background papers cited in the earlier part of this student's work.
Given that background, you may think it rash of me to leap into debate about the subject of this paper. I am, however, concerned that something so naive as this paper should appear in the public domain, and I feel obliged to draw attention to some constraints on its reliability that are not acknowledged in the text.
It is based on a list of names of institutions' staff drawn from their web sites. As the authors acknowledge, the sample cannot necessarily be assumed to have provided a current list of staff at the date it was taken. More crucially, there is no requirement that all staff in a department be submitted. Tactical decisions will be taken about the selection of individuals as 'active researchers' to be submitted in the RAE in the light of a range of factors over and above their publication record. These might include, for example, their 'fit' with the statement of departmental research strategy that is an important part of the overall submission. Equally significant is the fact that in many institutions in the UK, as elsewhere, LIS is now taught by staff managed in multidisciplinary departments. The research outputs of many of those staff will not be submitted to the Library and Information Management Unit of Assessment, but may be submitted for consideration by the Panels concerned with Computer Science, Communication Studies, etc.
The paper also fails to acknowledge the shortcomings of the Citation Indexes, particularly their somewhat idiosyncratic coverage of journals in the field. It is fair to note that they do cover the long-established prestigious journals in our field. However, they also (for historic reasons) cover some publications which would not be recognised as scholarly journals likely to attract significant research outputs, and a simple count of citations cannot therefore be deemed an appropriate underpinning for a qualitative judgement. Perhaps more significantly, many newer journals covering emerging fields of information work are not analysed by the Indexes, even though the papers they publish may be subject to refereeing and quality criteria just as rigorous as those of other journals.
Holmes and Oppenheim's paper analyses citations published since 1994, and the authors provide no justification for this arbitrary starting date. Whilst the RAE treats Library and Information Management as a "humanities" subject in which researchers are permitted to submit earlier publications, it seems unlikely that any submission in our field will include material published before 1st April 1996, the generally applicable cut-off date. It does seem self-evident that the longer a paper has been in print, the more frequently it has the potential to be cited by other researchers in the same field, but the RAE is surely more concerned with the current standing of a department and its staff and the prospects for continued development. These are more likely to be demonstrated through recent publications and through each department's supporting statements evaluating recent progress and setting out future strategy.
Two final points. The first is the apparent failure to eliminate self-citation by individual researchers and research teams, even though this is a widely recognised form of academic self-justification, and one known to distort citation counts. The second is the failure to give due recognition to the nature of research in our field and its implications for citation counting. Whilst the authors acknowledge that some sub-disciplines are more likely to be cited than others, they fail to acknowledge that some research (e.g., of a historical nature) is unlikely ever to be cited. It is also the case that, in many LIS sub-disciplines, there is no continuity of interest on the part of practitioners, policy makers and research funders. Work that is undertaken and published may thus almost immediately cease to be of interest to other researchers in the field, and this may minimise citation of earlier activity.
It may be that the shortcomings of citation analysis that I have mentioned were edited out of the published version of the paper. As it stands, it demonstrates nothing other than a lack of insight, clarity, and rigour.
I will not try to deny that, when the results of the Research Assessment appear towards the end of 2001, the ranked order may bear some resemblance to that published in Holmes and Oppenheim's paper. If it does, that will be coincidence rather than evidence to support the use of citation analysis as a substitute for contemporary peer review.
Ian M Johnson.
Robert Gordon University, Aberdeen
12 February 2001
Professor Oppenheim replies:
Johnson rehearses well-known criticisms of citation counting, but misses the point that, despite all those criticisms, citation studies have consistently proved to be statistically significantly correlated with other measures of eminence (see, for example, Seng & Willett, 1995). It is also known that self-citations do not affect these correlations (see, for example, Snyder & Bonzi, 1998). The choice of a 1994 or 1996 start date is similarly unlikely to affect the final results we obtained. Johnson's final sentences state that if our results do accurately predict the RAE rankings, it will be coincidence; if they do not, that proves his point. It is up to readers to judge the validity of this argument for themselves, and to decide how many times citation studies must be shown to correlate with measures of eminence before they accept that there is some connection between the two.
Professor Charles Oppenheim
Dept of Information Science
Loughborough University
Loughborough
Leics LE11 3TU