Indian J Physiol Pharmacol 2010; 54 (3): 197-212

INVITED ARTICLE

IMPACT FACTOR AND OTHER INDICES TO ASSESS SCIENCE, SCIENTISTS AND SCIENTIFIC JOURNALS

K. SATYANARAYANA*
Editor, Indian Journal of Medical Research, Indian Council of Medical Research, Ansari Nagar, New Delhi - 110 029

*Corresponding Author: [email protected]

(Received on March 2, 2010)

Abstract: This paper traces the evolution of measures and parameters for the evaluation of science and scientific journals, from the first attempts during the early part of the last century to the development of the most popular and widely used current metrics, viz., citations, the impact factor (IF), etc. The identification of measures for evaluating science and scientific reporting paralleled the post-war increase in research funding in the United States of America. Biomedical and medical sciences continue to garner a major share, estimated at almost two-thirds of total research and development funding of over US$ 350 billion. There has been a concomitant growth in publications in learned journals. About 1.4 million papers are published every year in an estimated 20,000 journals. In India there are an estimated 100 journals in the medical sciences. With a steady increase of about 10% every year, the competition for grants, awards, rewards, etc. is fierce. With this unrelenting increase in the number of scientists and the resultant competition, the limitations of peer review were increasingly felt, and a search began for new quantifiable measures to support informed decision making on funding, awards, rewards, etc. Now virtually all major decisions all over the world are based on some data linked to publications and/or citations. The concept of citations as a tool for 'evaluating' science was first proposed by Eugene Garfield in 1955. The availability of the Science Citation Index (SCI), Journal Citation Reports (JCR), Web of Science, etc., and the relative ease with which they could be used (and abused), has spawned an entirely new area: bibliometrics/scientometrics. As only a limited number of journals could be included in the Thomson Reuters (TR) databases (currently numbering about 10,500), analyses based on such a limited dataset (also selected in a non-transparent way by TR) have been widely and severely criticized in both developed and developing countries. Yet, studies have shown that citation-based data and indicators (warts and all) could still be put to productive use for purposes of evaluation (as scientists just love numbers). There have been simultaneous efforts to find alternative indicators, using the TR databases and through other innovative methods. Some of these include Google Scholar, PageRank, the h-index, the Y-factor, Faculty of 1000, the Eigenfactor, etc. The advantages and limitations of these indices are discussed. There is a need for a more
critical look at these parameters from the Indian perspective, to compute/devise/adapt such measures to suit our needs. There are 205 journals under the category Physiology and 201 under the category Pharmacology listed in the JCR. There are four major Indian journals in the areas of Physiology and Pharmacology, and none of them is listed in the TR databases, reflecting the limitations of these databases. Eventually, and in the long run, the quality of our journals needs to be improved, as the current era of globalization and web access provides both a challenge and an opportunity for the science and scientific journals published from India to gain increased global visibility.
Key words: impact factor; scientific journals
INTRODUCTION

Science is an elite activity: intellect-intensive, curiosity-driven, highly competitive, often done with passion, zeal and enthusiasm bordering on obsession. The enthusiasm of scientists to create new knowledge is perhaps matched only by their desire to be recognized as the best by their peers and by society. It is this constant game of intense competition and one-upmanship, with everyone trying to score over everyone else, that set off the eternal race to get ahead of others. Recognition by peers has always been an integral component of doing science, through various means accepted over a period of time. The most widely accepted method continues to be anonymous review by experts for independent verification, in which competitors often get to judge each other. With multiple reviews done simultaneously, the issue of bias usually gets addressed, as the review is coordinated by editors who ensure objectivity. But as science grew big and competition for grants and other recognitions became fierce, there were increasing murmurs, with some justification, of bias in what should otherwise be an objective evaluation. When such criticism became more widespread, an earnest search began for more objective, quantifiable tools and techniques to measure science and scientists, to supplement and/or substitute peer evaluation and to distinguish the outstanding from the average and the mediocre from the excellent. By their very nature, scientists are most comfortable with numbers and were looking for figures that not only sound objective but also appear less subjective. In addition, policy makers and grant-giving bodies wanted to be doubly sure that the peer evaluation system remained as objective as possible and was not dominated by a select few cartels of senior scientists. The earliest such measures of evaluation looked at the learned journals where research gets published. Journals were considered surrogate indicators of quality, the premise being: i) all research, at least the interesting and useful component, eventually gets reported in learned journals in some form and somewhere; ii) papers in journals can be trusted, as the content is reviewed by anonymous experts and therefore could be expected to be fair; and iii) there is a broad consensus among researchers about the quality of journals in their areas and a general pecking order of the 'best' journals in their areas of interest. Papers in learned journals therefore came to be recognized as an important, perhaps the only readily quantifiable, means of evaluating science and scientists, warts and all. But this was fine
when science was small, scientific journals were few, and almost everyone knew who was doing what and when, and where the important action was occurring in their area(s). The scenario changed drastically beginning with the 1970s. There was a global surge in both doing and reporting science due to: i) a huge influx of taxpayers' money into research and development, especially in the US, supplementing the increasing industry support to universities and other institutions of higher learning (for example, during 2008-09, publicly funded agencies globally invested close to US$ 350 billion, a major portion for research on biomedical science including medicine); ii) the huge output from the money put in - about 1.4 million papers published every year, over 760,000 in 2009 (460,000 in 1988) in the roughly 10,500 journals indexed in the Web of Science database; iii) a steady growth of the scientific literature of about 10% every year; iv) the starting of new specialty and super-specialty journals in virtually every discipline; v) increased interdisciplinary research, etc. Suddenly, even scientists who had been keeping track of new developments with relative ease found that there were just too many papers, and journals. This literature explosion led to more determined efforts to ease the pressure on scientists, readers and librarians and to find means of sifting the relevant from the mountains of papers, to facilitate quick and ready access to relevant information. Efforts were also on simultaneously to find rapid machine-driven systems of analysis and retrieval of documents through better means of archiving, digitization, etc. by the global abstracting and indexing services like the Index Medicus. It was some very enlightened librarians who found a way of choosing the 'best' journals among those available (1). Their purpose was limited - to judiciously use shrinking subscription money to get the best value for money. They conceived a system of ranking journals according to 'quality' based on the use of journals by readers. Their hypothesis was simple: all papers published in journals are read and used by researchers to further their own work, and the more a piece of work is used, the greater its importance. A universally accepted indicator of such use is citation in subsequent papers. A publication cited more frequently implies that the research contained in that paper is 'significant' and relatively more important than other papers. By the same logic, journals that contain a large number of such papers can also be considered important. Gross and Gross (1) were among the first to recognize this and to systematically apply citation analysis to decide which journals to subscribe to, based on their 'usage' as reflected by the frequency with which journals were cited (1). The US Government also simultaneously funded projects to devise systems to automate the rapid retrieval of relevant information from the burgeoning literature. As an offshoot of one such project, an information scientist, Eugene Garfield, conceived the Impact Factor (IF) in 1955 (2). Garfield realized that when scientists write papers, they rely heavily on earlier published research for several reasons: to substantiate a point, to add credibility to statements made, and to acknowledge intellectual debt. The primary purpose of the conception and application of the IF by Garfield was to help select a set of important (significant) journals, based on citation patterns, for what would eventually become the Science Citation Index (SCI) (2, 3). Simultaneously, librarians, scientists and scientific editors started using citation-based indices and were
also on the lookout for new 'performance indicators' besides the journal IF, for their own reasons (4).

Impact factor

The IF is a simple descriptive quantitative measure of a journal's performance, computed as the average number of times articles from the journal published in the past two years have been cited in the current year. It is calculated from this equation:

Journal X's 2009 impact factor =
(citations in 2009, in journals indexed by Thomson Reuters, to all articles published by Journal X in 2007-2008)
divided by
(number of articles deemed to be "citable" by Thomson Reuters that were published in Journal X in 2007-2008).

The journal IF is currently calculated by Thomson Reuters based on citation data from the 6650-plus journals indexed in the Web of Science database, and is then reported in the Journal Citation Reports (JCR), a database that lists journals as per their citation ranking.
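To make the two-year arithmetic concrete, the following minimal sketch in Python computes an impact factor from hypothetical counts; the figures are invented for illustration and are not taken from the JCR.

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Journal impact factor: citations received in the JCR year to items
    published in the two preceding years, divided by the number of
    'citable' items published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical example: a journal whose 2007-2008 articles were cited
# 420 times during 2009, and which published 260 citable items in 2007-2008.
print(round(impact_factor(420, 260), 3))  # -> 1.615
```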
Impact of impact factor

Ever since the appearance of the JCR in 1972, there have been attempts to use IF data for comparisons of science, scientists, groups of scientists, scientific disciplines, countries and, of course, scientific journals (4-6). The IF is primarily meant to be an indicator of the success of a paper in a journal and a surrogate for its direct application in subsequent research. The wide and indiscriminate application of IF and citation data has often resulted in lopsided and unacceptable quality judgements, especially on the science and technology capability and strengths of nations, and has led to severe and serious criticism of the very use of citation-based data for purposes other than journal evaluation. Most of the criticisms were also valid. More importantly, the realization that citation data were being used to seek answers to questions that these data were never intended to answer led to a clearer understanding of the limitations of the tool itself. Despite wide and sustained criticism, citation data and the IF continue to be widely applied by researchers to choose journals for reading and referencing and, more importantly, for tracking rivals' publications and citation profiles to remain competitive (7). Journal editors and publishers just love impact factors, and they use the IF as a major USP (unique selling proposition) for pricing and selling their journals as they will (7-9). Librarians continue to rely on impact factors and other citation data for deciding which journals to subscribe to (7). Potential employers use citation-based parameters to evaluate candidates' bibliographies for hiring decisions. Many institutions and universities all over the world continue to use citation data for assessment of academic excellence, promotions, awards and rewards. Funding agencies also seek citation indices from applicants to evaluate projects for support. Learned societies, national science academies and other such bodies conferring awards and rewards all over the world, including India, use citation data for decision making (4, 8). With such a wide and increasing demand for citation-based indices, a new area called bibliometrics (scientometrics) has emerged that involves quantitative analyses and statistics of publications in learned journals and their citations in the SCI/WoS. There are several journals exclusively publishing papers in these areas, and several universities in India and abroad offer degrees in them. In brief, bibliometrics/scientometrics has become firmly entrenched in our system, and has come to stay.
Limitations and abuse of impact factor

From an academic and editorial perspective, a major limitation of the IF continues to be its inability to distinguish between the popularity and the prestige of scientific and medical journals. For the computation of the IF, all citations are considered equal, be they from The Lancet (IF 18.32) or the Latin American Journal of Pharmacology (IF 0.141). Furthermore, for Thomson Reuters, the IF is the final indicator of 'quality' of a journal, even while there is near-global consensus that the IF cannot be considered the only parameter of quality. Also, there is no established correlation between IF and peer perception of the quality of a journal; not all high-impact journals are considered the best by peers, and vice versa (7, 10). There are several journals, especially from outside North America (which contributes the maximum number of journals to the SCI/JCR), that are considered as good as or better than those in these databases. Thomson Reuters refuses even to acknowledge the problem, much less try to address it. The IF only tells us how frequently a paper is cited, whatever the reason for the citation, leaving scope for biased citation profiles. Papers of dubious quality proven to be fraudulent continue to be cited, as are papers reporting negative or non-credible data. Likewise, the IF says nothing about the 'influence' of a particular piece of work - how it is read and discussed outside the core scientific community, or its impact on national policies. It is also known that clever editors can and have successfully manipulated the IF (see below). Even Garfield opposed the journal IF being used as a surrogate for the 'quality' of the science being reported (3). The ideal system of unbiased evaluation of research work would be based on the actual reading of each article for quality by subject experts. While research over the last three decades has improved our conceptual understanding of the IF and of how it has been (and should be) applied for purposes of evaluation, not many substantive steps were taken until recently. And this is not because Thomson Reuters wants to improve the SCI/JCR, but due to the emergence of a competing citation tool, Scopus, from Elsevier. Some well-known limitations and deficiencies of citation-based evaluation systems have been identified; some are listed below.

1) The overall policy of Thomson Reuters on the way journals are selected for inclusion in their databases continues to be non-transparent. Although a journal accepted for inclusion is supposed to be 'recommended' by experts chosen by the publisher, ostensibly on the basis of bibliographic and bibliometric characteristics, Thomson Reuters refuses to share details of the selection process. While journals from established publishing houses such as Nature are born with an IF, several frustrated editors from India and elsewhere do not have a clue as to why their journals do not find a place in the SCI. No reasons are given by the company. Even the World Association of Medical Editors and several editors from the US have had frustrating experiences with the stonewalling tactics of the company in respect of the inclusion criteria for journals in its databases.

2) The methods used to collect citations for the Thomson Reuters databases are still unclear, for instance which types of articles are counted in the numerator of
the IF calculation. Items considered for inclusion in the numerator are chosen by the staff of Thomson Reuters, ostensibly using various bibliographic criteria such as keywords and the number of references. There are allegations galore that publishers, in consultation with the company, do manage to manipulate the numerator/denominator data, leading to sudden jumps in IF (11). This has been widely reported by journal editors (4, 12).

3) The algorithm used to calculate the IF is still contested: the logic of using two-year data has been questioned repeatedly, and various other options for the calculation have been suggested based on more rational and logical criteria (7). This time constraint of two years is considered sub-optimal. It is well known that citation rates vary widely across scientific disciplines and, within a discipline, across areas and sub-areas. Scientists in the life sciences, especially cutting-edge areas such as immunology, molecular biology, genomics and stem cells, cite extensively, while other classical areas in the life sciences, mathematics, etc. have low citation rates, affecting the impact factors and other indices. There are many important papers that did not receive citations within the two-year period; they miss out on being 'significant' papers according to the Thomson Reuters criteria.

4) Citation behaviour of scientists also varies significantly between disciplines, with basic scientists citing intensively compared with, say, public health researchers, making comparisons
difficult and irrational. Yet comparative analysis is routinely done, often with cumulative IFs, even while Thomson Reuters actively discourages such comparison of apples with oranges.

5) Journals available online tend to be cited more frequently (13). Studies with a data set from the MEDLINE database have shown that journals offering free full-text access tend to have higher IFs. Not surprisingly, many 'open access' journals published by BioMed Central (BMC) and the Public Library of Science (PLoS) have good IFs despite not having hard-copy versions, something that might otherwise have restricted their wide use. As of now there are no means of comparing the IFs of print-only or print-and-pay-to-view journals, which form the majority of journals from private publishers.

6) Invalid, fraudulent and retracted articles continue to be cited and counted in computing the IF, defeating the very purpose of associating citation indices with the quality of the research reported. For example, the retracted stem cell papers by Woo Suk Hwang in Science (2004-2005) were cited as many as 419 times (until November 20, 2007) (11, 14). And although John Darsee's research work in cardiology was long ago proven to be fraudulent and retracted from the PubMed system, his papers continue to be cited in the medical literature. There is no system of linking an indication of retraction to the original publication, and it typically takes years before a retraction is published. With so many papers being published and with severe time constraints, many scientists may
just not have the time to look at retractions. Also, many young people may not be aware of the track records of the likes of Darsee, whose case dates back to the 1980s.

7) Sometimes controversial papers get cited only so that their content can subsequently be negated. Editors may therefore have knowingly published such papers just to boost citations.

8) Some editors, even of very prestigious journals, are known to prefer only potentially highly citable science that enhances the IF of their journal (15). Some editors, like Nature's John Maddox, had a clear preference for citation-intensive papers as compared to less potentially citable high-quality science (15). Many therefore actively manipulate and manage the content of the journal only to boost the IF, a practice indicted as the 'impact factor game' (12).

9) Editorial preference for longer articles is also known to increase a journal's IF (5). It has been shown that the citation rate is proportional to article length, i.e. longer articles are cited more often (7).

10) It is well known that review journals enjoy the highest IFs, and review articles are surely not superior to papers reporting original research. One well-known means of boosting the IF, therefore, is to publish more review articles and similar data-intensive articles, especially in emerging areas such as genomics and stem cells, even while underrepresented areas, which would benefit from more review
papers, are neglected because such reviews would not fetch citations subsequently.

11) There is a known bias of the SCI towards US journals and US-centric science, with a wide representation of journals from North America in the Thomson Reuters databases. Not surprisingly, it has been found that the IFs of US-based journals are about 30% higher than those of other countries, due to a higher frequency of self- and colleague citations (5).

12) By the very nature of its calculation, the IF cannot assess the quality of individual articles. It is also known that a small number of papers, even in the best-known journals like Nature, Science and the New England Journal of Medicine, account for a disproportionate number of citations. This means that the high IF of a journal is high not because of many uniformly better 'quality' papers, but because of a few that have managed to attract very large numbers of citations. Therefore, a typical article in a high-IF journal is unlikely to be cited more frequently than an average article in a journal with a relatively low IF.

13) Subjects and areas that are less 'fast moving' (less fashionable) tend to attract fewer citations, as do journals published less frequently, irrespective of the quality and importance of such areas or their application to human health. For example, the number of references in a typical mathematics paper is about 25% of that in an average biochemistry paper (16). Biochemistry in turn is disadvantaged relative to papers in cutting-edge, fast-moving, citation-intensive disciplines such as genomics and stem cell research.
14) There is no proven correlation between a high citation rate and the subsequent 'impact' of the reported science on development and the public good. In the health sciences at least, there is evidence to show that even papers from low-impact journals can contribute clinical evidence for decision making in public health (17), negating the argument that only highly cited papers contribute to the progress of science and medicine.

15) In addition, there could also be (and there are) so-called citation cartels, in which established scientists cite their own work and that of those within their group, artificially boosting the IF of their papers and of the journals publishing such work. This is fairly common in fast-moving areas of science with intense competition.

16) The IF also does not take into consideration editorial content other than research papers. In medical journals, editorials, case reports, commentaries, etc. often contain significant leads for future work and are often cited by subsequent original papers.

17) Beginning with the 1990s, due to increasing recognition of the trend towards the wide and indiscriminate use of the IF for national assessment and policy-making exercises, especially outside the US, several steps have been initiated. For example, serious criticism by scientists in the United Kingdom led the British government to take substantive measures, among which was a decision not to use the journal IF as a proxy measure for the assessment of research in the UK (18).
Alternatives to impact factor

Right from the early 1970s, there have been serious attempts to study the limitations of the IF and other citation-based indices and to devise alternative metrics that can address the deficiencies and make evaluation exercises more objective. As early as 1976, a recursive impact factor was proposed, computed by analyzing citation data so that citations from high-impact journals are given greater weight than citations from low-impact journals (19). The increasing web-based access to and use of the scholarly literature through powerful search engines such as Google has facilitated the development of innovative methods and tools to rank scholarly journals. Such methods have helped further refine the evaluation of both science and scientists, both within and outside the citation-based systems. Some of these include PageRank, weighted PageRank, the h-index, the g-factor, the Y-factor, the Euro Factor, Faculty of 1000, the Eigenfactor, etc. (see box). There have also been several attempts to apply parameters other than the IF to study the issue of 'popularity' versus 'prestige' of journals, a major limitation of the IF and other citation-based indices. Many studies have also compared citation-based data with the new and improved methodologies. One such comparative analysis has shown that Y-factor ranking helps overcome at least one significant limitation of the IF, namely the higher ranking of review journals as compared to those publishing original research (4). Some newer methods of evaluating a research paper envisage switching over to so-called community-based measurements, like tracking article downloads, for papers from journals whether open access or otherwise. Attempts have also been made
to support and improve the peer review system, to overcome its well-known limitations with newer methods and concepts. A new concept called the 'recommender system' has been mooted to expand the peer review process (20). Unlike conventional peer review, which is done anonymously (and hence is prone to bias), this system envisages posting the content of the reviewers' comments on the journal web site, if possible with the names of the experts. Journals like PLoS ONE from the PLoS stable are already trying out this new model of open review, with varying rates of success (21). Papers accepted for publication in this open access journal must clear an initial review by an editor; significantly, most of the review and rating of a paper is done publicly, post-publication. The journal provides a facility for such ratings based on factors such as insight, reliability and style, with scope for readers to post comments. But this system can only succeed if both authors and reviewers show willingness to participate: authors should be willing to be part of such systems of open criticism, and readers should have the interest and courage to candidly post signed comments on others' published work, with their names disclosed.

Journals in physiology and pharmacology covered by the Journal Citation Reports

There are only four major Indian journals in the areas of Physiology and Pharmacology (Table I).

TABLE I : Indian journals in Physiology/Pharmacology.

1. Indian Journal of Pharmaceutical Sciences
2. Indian Journal of Pharmacology
3. Pharma Times
4. Indian Journal of Physiology and Pharmacology

Source: http://www.indianjournals.com/ijor.aspx?target=indianjournaldirectory
There were 205 journals worldwide under the category Physiology; the top 50 are shown in Table II. The impact factor in 2008 varied from 35.00 (Physiological Reviews) to 0.007 (Japanese Journal of Physical Fitness and Sports). There are no journals from India in the SCI/JCR. In the area of Pharmacology there were 201 journals listed in the 2008 JCR; the top 50 are shown in Table III. The IF varied from 28.69 (Nature Reviews Drug Discovery) to 0.141 (Latin American Journal of Pharmacology). Again, there are no journals from India in the SCI/JCR.

The future

It is obvious that scientists' quest for new tools and technologies for evaluation will not, and perhaps should not, end. But the most worrying part of this endeavour is the attitude of the various major stakeholders of the scholarly communication system - the authors, the journal editors and the readers - towards the citation-based indicators. Many display a clear ostrich-like attitude: knowing the now well-known limitations but choosing to ignore them or to apply them selectively to suit their convenience. The authors are, and will be, the key players, as they are mainly responsible for the rise or fall of the IF of a journal by choosing to patronize only some journals. Authors from developing countries, including India, often put up with the humiliation of multiple revisions and modifications just to publish in such journals, while options are available elsewhere, including in their own countries (22). Developing-country scientists get hooked to the citation nose-thread very early in life and continue to be led by it almost all their lives. While the anxiety of youngsters is
TABLE II : Top 50 journals in physiology with impact factor (2008).

    Journal name              IF 2008
 1. PHYSIOL REV               35.00
 2. ANNU REV PHYSIOL          19.74
 3. PHYSIOLOGY                 7.159
 4. REV PHYSIOL BIOCH P        5.867
 5. J PINEAL RES               5.056
 6. J PHYSIOL-LONDON           4.605
 7. J CELL PHYSIOL             4.313
 8. AM J PHYSIOL-CELL PH       4.23
 9. J BIOL RHYTHM              4.211
10. J MAMMARY GLAND BIOL       4.167
11. AM J PHYSIOL-LUNG C        3.924
12. AM J PHYSIOL-RENAL         3.89
13. AM J PHYSIOL-ENDOC M       3.855
14. J APPL PHYSIOL             3.658
15. J NEUROPHYSIOL             3.648
16. AM J PHYSIOL-HEART C       3.643
17. AM J PHYSIOL-GASTR L       3.587
18. PFLUG ARCH EUR J PHY       3.526
19. ACTA PHYSIOL SCAND         3.525
20. CHRONOBIOL INT             3.495
21. PHYSIOL GENOMICS           3.436
22. PSYCHOPHYSIOLOGY           3.318
23. AM J PHYSIOL-REG I         3.272
24. CELL PHYSIOL BIOCHEM       3.246
25. CHEM SENSES                3.041
26. J BIOL REG HOMEOS AG       2.957
27. STRESS                     2.952
28. EXP PHYSIOL                2.91
29. J VASC RES                 2.792
30. EXERC SPORT SCI REV        2.649
31. J PHYSIOL PHARMACOL        2.631
32. PHYSIOL BIOCHEM ZOOL       2.532
33. ACTA PHYSIOL               2.455
34. J PHYSIOL-PARIS            2.396
35. J MEMBRANE BIOL            2.32
36. REGUL PEPTIDES             2.276
37. INT J PSYCHOPHYSIOL        2.264
38. CLIN EXP PHARMACOL P       2.196
39. J INSECT PHYSIOL           2.155
40. INT J BIOMETEOROL          2.084
41. RESP PHYSIOL NEUROBI       2.035
42. J COMP PHYSIOL A           2.014
43. EUR J APPL PHYSIOL         1.931
44. J ELECTROMYOGR KINES       1.884
45. CAN J PHYSIOL PHARM        1.763
46. Q J EXP PSYCHOL            1.76
47. CRYOBIOLOGY                1.721
48. COMP BIOCHEM PHYS A        1.709
49. J COMP PHYSIOL B           1.698
50. PHYSIOL MEAS               1.691
TABLE III : Top 50 journals in pharmacology with impact factor (2008).

    Name of journal           IF 2008
 1. NAT REV DRUG DISCOV       28.69
 2. PHARMACOL REV             21.936
 3. ANNU REV PHARMACOL        21.561
 4. ANNU REP MED CHEM         10.985
 5. PHARMACOL THERAPEUT        9.443
 6. TRENDS PHARMACOL SCI       9.34
 7. MED RES REV                8.907
 8. ADV DRUG DELIVER REV       8.287
 9. CLIN PHARMACOL THER        7.586
10. DRUG RESIST UPDATE         7.206
11. NEUROPSYCHOPHARMACOL       6.835
12. DRUG DISCOV TODAY          6.618
13. CURR OPIN PHARMACOL        6.528
14. REV PHYSIOL BIOCH P        5.867
15. J CONTROL RELEASE          5.69
16. DRUG METAB REV             5.622
17. PHARMACOGENOMICS J         5.435
18. BRIT J PHARMACOL           4.902
19. BIOCHEM PHARMACOL          4.838
20. CURR MED CHEM              4.823
21. ANTIMICROB AGENTS CH       4.716
22. MOL PHARMACOL              4.711
23. CNS DRUG REV               4.432
24. PHARMACOGENET GENOM        4.409
25. CURR PHARM DESIGN          4.399
26. INT J NEUROPSYCHOPH        4.378
27. J CLIN PSYCHOPHARM         4.371
28. CURR DRUG METAB            4.35
29. J ANTIMICROB CHEMOTH       4.328
30. J PHARMACOL EXP THER       4.309
31. CLIN PHARMACOKINET         4.206
32. CURR OPIN DRUG DISC        4.205
33. CURR DRUG TARGETS          4.187
34. DRUGS                      4.128
35. CARDIOVASC DRUG REV        4.114
36. ANTIVIR THER               4.105
37. J PSYCHOPHARMACOL          4.063
38. EXPERT OPIN INV DRUG       4.058
39. EXPERT OPIN THER TAR       4.038
40. PHARM RES                  4.024
41. DRUG METAB DISPOS          3.835
42. CNS DRUGS                  3.78
43. PSYCHOPHARMACOLOGY         3.676
44. EUR NEUROPSYCHOPHARM       3.661
45. EUR J PHARM SCI            3.65
46. ANTIVIR RES                3.613
47. CURR VASC PHARMACOL        3.582
48. PHARMACOGENOMICS           3.551
49. DRUG SAFETY                3.537
50. INVEST NEW DRUG            3.396
understandable, as they need to get settled in life, the willingness of very senior authors to subject themselves to the vagaries of editors of high-IF journals, despite their public display of support for national journals (15), is not a good sign for developing countries. There are lessons to be learnt from China, an emerging global player that is actively trying to come out of the shackles of citations (23). India, which has been at the forefront of global discussions of citation-based analyses for over two decades, is yet to initiate any substantive measures to overcome the limitations of citation-based indicators from the developing-country perspective. This, despite a substantial number of experienced researchers in the area of bibliometrics/scientometrics in India, and despite the continued efforts of many scientists and policy makers to highlight such a need.

Editors are equally, if not more, guilty of allowing the continued propagation of the IF and citation-based data as the major barometer of quality. They are always striving to improve the quality of their journals, to seek the best research and, of course, to get new subscriptions and advertisements to ensure financial viability. Controversial as it may be, the IF happens to be the only widely accepted quantifiable tool available to date, despite other indices being available (4, 24). Also, a higher IF can help retain authorship, readership and the subscriber base, which continues to give some respectability to the citation-based indices (4, 15). A recent survey by the editors of the Journal of the Royal Society of Medicine (JRSM) has confirmed the obvious: journal editors do woo and pamper researchers with excellent citation track records, to send in their potentially highly citable research. Even trenchant critics such as the editors of PLoS Medicine candidly admitted that, whatever journal editors may say in private, the IF does matter (15). Not surprisingly, editors eagerly look forward to the spring season every year, when the citation data on journals are released by Thomson Reuters. It is also well known that many editors have been resorting to both fair and not-so-fair means to boost the impact factors of their journals (4, 15). But for journal editors from the developing countries, there is a clear dilemma over the stand they should take on the IF and other citation-based indices, as many of their journals do not even figure in the SCI/JCR lists. And those editors whose journals have an IF constantly try hard to climb from a fraction to at least a single digit.

While we are yet to arrive at the 'best' metric for judging a journal, there have been sustained and serious global efforts to address the issue critically. However, the continuing global debate on citation-based indices by editors, authors, policy makers, etc. is at once both comforting and worrying. Comforting, as editors the world over are seized of the problems associated with the application of citation data for purposes of judgement; they have contributed to the ongoing serious efforts to address the issues through debate in the columns of journals, as also by encouraging the computation of other parameters, such as the Eigenfactor, to overcome the limitations of the IF (4, 25). But it is a constant source of worry for me that, despite over three decades of intense debate in India on the limitations of citation-based indices, the overall awareness about the IF among Indian medical journal editors continues to be limited and has not improved greatly. The World Association of Medical Editors (WAME), for example, exhorts journal editors to actively look at parameters beyond the IF, like circulation, the number of
published articles, the distribution of citations, etc. (26). WAME is also currently engaged in generally sensitizing member editors to the limitations and relevance of the IF and other citation-based parameters, primarily to help them understand issues of quality in learned journals.

Another important question that has been bothering almost everyone doing, reporting and publishing science, in India and elsewhere in the world, is whether the IF is good or bad. It is indeed tough to judge the quality of academic research, or its potential impact on science and its progress, on the basis of the IF and citation-based parameters alone. It may well be a moot question: with the kind of obsession scientists have with quantifiable parameters ('numbers') as compared to good old peer review, they would surely have discovered some other factor. In fact, such an exercise has been on for long; the latest parameter is the Eigenfactor (27), and one can be sure that more are in the pipeline. Like many measures of quality in life, the IF can be used (wisely and judiciously) and abused (often with disastrous consequences). To be fair, Garfield, from the time he proposed the citation-based indices in 1955, has consistently cautioned against the potential misuse/abuse of citation data and stressed that a deeper understanding of the citation process is required, a point reiterated by several scholars over the last four decades. Looking back over the time since the citation-based measures were first discussed, we may have expected too much of these numbers, seeking answers the data were never devised to provide. As Garfield himself said in the classic 1955 paper (2): "The new bibliographic tool, like others that already exist, is just a starting point in literature research. It will help in many ways, but one should not expect it to solve all our problems." So most of the problems lie not with the data, but with the people who have applied them in situations that do not warrant such an application.

At the national level, it would therefore be more appropriate for India to use citation-based parameters most judiciously. That citation-based parameters should be used only to supplement peer evaluation, as the intricacies of citation data are understood by so few in India, has been repeatedly argued by journal editors, among others (4, 8, 12). Often these data are used in the most inappropriate context, leading to seemingly ridiculous conclusions (4). "Not many parameters can match IF for its propensity for sustained misuse/abuse. May be due to the glamour and simplicity of use, IF and other citation-based parameters are the only 'numbers' widely used in seemingly unbiased decision making" (4). Unfortunately, the situation at the ground level does not seem to have changed much in India, despite vehement arguments for the prudent application of citation-based indicators (4, 8, 24). Harsh as it may sound, maybe we have yet to get over the colonial hangover of overdependence on, and ready acceptance of, rules set elsewhere; only the rule setters have changed, Americans replacing the British. Instead of worrying too much about citation-based indicators, which anyway few understand and which are of little relevance to our country, we should perhaps divert our energies to improving the quality and standards of our journals and, more importantly, to ensuring that good Indian research is reported in Indian journals and then providing access to such research to the global biomedical community.
Indices/parameters other than the Impact Factor
Several indices to evaluate science, scientists and scientific journals have been developed, using data with and without citation-based indicators.

Google Scholar - Google Scholar (http://scholar.google.com) is a free-to-use search engine developed in 2004, essentially to locate information from learned journals and other sources on the Web. Google Scholar searches all web-based information - peer-reviewed literature from scholarly journals as well as non-reviewed material like books, abstracts, technical and other reports, etc. Owing to its easy availability, Google Scholar is perhaps one of the most widely used tools by scholars in all disciplines of science and technology. In view of its excellent search-and-locate ability, Google Scholar is reportedly used even more frequently than PubMed, the most popular database in the biomedical sciences, from the National Library of Medicine, Bethesda, USA. Special functions of Google Scholar include the 'cited by' option, which provides links to other articles that have cited a given paper, and more. Because of its broad search capability, Google Scholar can locate and retrieve virtually anything available on the web. The 'hits' are therefore very large, creating a problem of plenty. Also, the display of records is decided by the search engine, not necessarily according to the user's needs, and it is often difficult to obtain relevant information quickly in the absence of sifting according to quality. The major limitations of the search engine are that not all records retrieved are peer reviewed, and therefore quality is difficult to judge. There is also a lack of clarity on how the sources themselves are selected, how content is analyzed, the time span covered and how the listing is done. For a scholar looking for credible information, the absence of clarity on the sources and the order of display remain major deficiencies, and irritants.
PageRank - PageRank is a software system for ranking web pages developed by Google; it has also been applied to rank research publications. The advantage of this tool is that it uses a broad range of open data sources, such as Google Scholar (GS), that can locate and retrieve a large number of records. The coverage is therefore more comprehensive than the Web of Science, a costly pay-to-use database of about 6,500 journals from Thomson Reuters. Through the weighted PageRank, the algorithm addresses the distinction between the 'popularity' and the expert appreciation or 'prestige' of published research, which remains a major limitation of databases like the SCI. Popular journals are those that are cited frequently, possibly by journals with little prestige; such journals can have a very high IF and a very low weighted PageRank. Prestigious journals, on the contrary, may not be cited frequently, but their citations come from highly prestigious journals; these journals may have a very low IF but a very high weighted PageRank. Thus, the weighted PageRank algorithm can be used to obtain a parameter that reflects prestige. Analysis of journals according to their ISI IF and their weighted PageRank shows significant overlaps as well as certain significant and meaningful discrepancies (see 4). PageRank may not get added to the parameters for assessment, but it is expected to change our perception of status and the manner in which scholarly search results will be ranked by Google, Google Scholar and their competitors (28, 29).
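To illustrate the idea behind a citation-weighted PageRank, here is a minimal sketch in Python that runs power iteration over a tiny, invented citation matrix among three hypothetical journals, so that citations arriving from highly ranked journals count for more. The journal names and counts are assumptions made for this example and do not come from any real database.

```python
import numpy as np

# Hypothetical citation counts: cites[i][j] = citations from journal i to journal j.
journals = ["Journal A", "Journal B", "Journal C"]
cites = np.array([[0, 20,  5],
                  [10, 0, 15],
                  [40, 30, 0]], dtype=float)

# Column-normalize so each journal distributes one unit of "influence" among
# the journals it cites; then iterate the PageRank recurrence with damping.
out_totals = cites.sum(axis=1, keepdims=True)
transition = (cites / out_totals).T   # transition[j][i]: share of i's citations going to j
damping = 0.85
rank = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):
    rank = (1 - damping) / len(journals) + damping * transition @ rank

for name, score in zip(journals, rank):
    print(f"{name}: {score:.3f}")
```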
H-index - This index was proposed by the American physicist Jorge E. Hirsch to capture scientific impact by combining scientific productivity and research impact (25). Based on the most-cited papers of a scientist, one can also map the impact of groups and institutions using the Hirsch index (or Hirsch number). The h-index is computed from the distribution of citations received by a scientist's publications. For example, an author with an h-index of 30 has published 30 papers that each received at least 30 citations in subsequent work. If one scholar has published 100 papers and another has published 35, but both have 30 papers that each received at least 30 citations, each would have an h-index of 30. Hirsch argues that his index provides a way to gauge the relative accomplishments of different researchers that is more rigorous than simply toting up their number of publications; it is designed to improve upon simpler measures such as the total number of citations or publications. But the h-index works well only for comparing scientists working in the same field, as citation conventions differ widely among fields, and the citation data have to be obtained from sources like the Web of Science or Google Scholar. The Hirsch index thus measures the quality, sustainability and diversity of scientific output, and so addresses a problem with the SCI, where a single methodological paper can fetch the highest impact. A major limitation is that the index favours seniority and sustained output: early-career scientists, or those with a few exceptionally highly cited papers, tend to have lower h values. Not surprisingly, almost as soon as Hirsch proposed the h-index, researchers began to develop variants to overcome its limitations (27).
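The calculation itself is simple. The short sketch below, in Python and with made-up citation counts, computes an h-index from a list of per-paper citation totals.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            h = position
        else:
            break
    return h

# Hypothetical citation counts for nine papers by one author.
print(h_index([48, 33, 30, 12, 9, 7, 4, 2, 0]))  # -> 6
```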
The Y-factor - The Y-factor is a simple combination of the IF and the weighted PageRank. Significantly, its authors claim that the resulting journal rankings correspond well to a general understanding of journal status. For example, while the IF ranking lists five review journals, the Y-factor column had none, and two primary research journals rated highly by peers, Cell and the Proceedings of the National Academy of Sciences USA, figure in the Y-factor list (see 4).

Faculty of 1000 - Peer ranking of research papers outside the citation number game has also been tried, a prominent example being the Faculty of 1000, a subscription-based literature awareness tool. Faculty of 1000 comprehensively and systematically highlights and reviews the most interesting papers published in disciplines such as biology and medicine, based on the recommendations of thousands of carefully chosen researchers (http://f1000biology.com/about/faq). Faculty of 1000 Biology was launched in 2002 and Faculty of 1000 Medicine in 2006; both are run by scientists for scientists to provide a rapidly updated consensus map of the important papers and trends in a research area. The Faculty members evaluate papers based on their perceived merit rather than where they appear, to evolve a consensus. The content stays on the site and is searchable through the MeSH terms of PubMed. The advantages of F1000 include the selection of papers outside the so-called high-impact journals like Nature, Science and Cell. The limitations: the mode of selection of the Faculty itself, as also the choice of papers considered to be of high quality, as the journal sample is only about 1,000.
Faculty of 1000 Medicine comprises a select set of carefully chosen experts, currently estimated to be over 2,400, in various areas of medical research and clinical practice. The service is divided into 18 specialties and over 200 sub-specialties. Each member is expected to read and comment on one or two interesting articles every month. The final F1000 Factor for an article is consensual, incorporating the ratings it receives and the number of times it is selected by different Faculty members. Outstanding work thus gets its deserved peer recognition, irrespective of and independent of citation counts (30).

Y-factor - In 2006, Johan Bollen and his colleagues at the Research Library of the Los Alamos National Laboratory in New Mexico proposed this measure, which combines Google's PageRank with the impact factor (31). Combining both metrics can help in a more complete evaluation of a journal by comparing and aggregating the data: some journals can have a high IF but a low PageRank (more popularity, less prestige), while the reverse may hold for a high-quality specialty journal. Also, using data from more than one metric makes the tool more dependable and less prone to manipulation. This combined product of PageRank and IF is called the Y-factor by Bollen. Data compiled for PageRank, IF and both show that while the top-ten list by IF includes many journals that publish only review articles, or that serve primarily as data resources, the Y-factor ranking favours journals widely regarded as publishing prestigious original research. Significant differences have been found between the highest impact factor journals and the highest Y-factor journals in physics, computer science and medicine, and less difference in a subspecialty of medicine, dermatology (31).
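Since the Y-factor is described as the product of the weighted PageRank and the IF, a minimal illustration is straightforward; the three journals and their scores below are invented for the purpose of the example.

```python
# Hypothetical (journal, impact factor, weighted PageRank) triples.
journals = [
    ("Review Journal R",   22.0, 0.002),   # popular, publishes only reviews
    ("Primary Journal P",   9.0, 0.015),   # original research, cited by prestigious journals
    ("Specialty Journal S", 2.5, 0.004),
]

# Y-factor as described in the text: the product of the weighted PageRank and the IF.
# Note that Primary Journal P outranks Review Journal R on Y despite its lower IF.
for name, impact_factor, weighted_pagerank in journals:
    y_factor = impact_factor * weighted_pagerank
    print(f"{name}: Y = {y_factor:.4f}")
```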
Eigenfactor - Developed by Carl Bergstrom, the Eigenfactor (32) provides an online suite of tools that "ranks journals much as Google ranks websites". The data are taken from the Thomson Reuters databases. Available at no charge, the Eigenfactor is considered a measure of a journal's total importance to the scientific community. Eigenfactor scores are scaled so that the scores of all journals listed in Thomson Reuters' Journal Citation Reports (JCR) sum to 100. In 2006, for example, the journal Nature had the highest Eigenfactor, with a score of 1.992, and the top thousand journals, as ranked by Eigenfactor score, all had scores above 0.01. The Eigenfactor attempts to account for the prestige of citing journals; incorporates many non-standard items, such as newspapers and Ph.D. dissertations, into the citation network; and evaluates items over a 5-year (rather than 2-year) period. Of particular interest to librarians, the "cost-effectiveness search" relates these data to the going subscription rates for journals as a means of determining value for money. The "Article Influence" metric within the Eigenfactor is comparable to the impact factor, but it is just one aspect of the broader framework.

Other initiatives - Other current initiatives include the MESUR (MEtrics from Scholarly Usage of Resources) project supported by the Andrew W. Mellon Foundation, a two-year effort to enrich "the toolkit used for the assessment of the impact of scholarly communication items, and hence of scholars, with metrics that derive from usage data" (27). Researchers at the Los Alamos National Laboratory, the same group that developed the Y-factor, are leading the MESUR initiative. MESUR is considered the most comprehensive effort to date to study article impact evaluation techniques vis-a-vis modern scholarly communication practices, which have undergone a sea change over the last decade.
REFERENCES
1. Gross PLK, Gross EM. College libraries and chemical education. Science 1927; 66 (1713): 385-389.
2. Garfield E. Citation indexes for science: a new dimension in documentation through association of ideas. Science 1955; 122: 108-111.
3. Garfield E. The agony and the ecstasy - the history and meaning of the journal impact factor. Paper presented at the International Congress on Peer Review and Biomedical Publication, Chicago, September 16, 2005.
4. Satyanarayana K, Sharma A. Impact factor: time to move on. Indian J Med Res 2008; 127: 4-6.
5. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997; 314: 498-502.
6. Editorial. Not-so-deep impact. Research assessment rests too heavily on the inflated status of the impact factor. Nature 2005; 435: 1003-1004. doi:10.1038/4351003a.
7. Kurmis AP. Understanding the limitations of the journal impact factor. J Bone Joint Surg Am 2003; 85: 2449-2454.
8. Balaram P. Metrics of science: loosening the stranglehold. Curr Sci 2009; 96: 1289-1290.
9. Monastersky R. The number that's devouring science. The impact factor, once a simple way to rank scientific journals, has become an unyielding yardstick for hiring, tenure, and grants. Chron High Educ 2005; 52: A12.
10. Neuberger J, Counsell C. Impact factors: uses and abuses. Eur J Gastroenterol Hepatol 2002; 14: 209-211.
11. Rossner M, Van Epps H, Hill E. Show me the data. J Exp Med 2007; 204: 3052-3053. www.jem.org/cgi/doi/10.1084/jem.20072544.
12. The PLoS Medicine Editors. The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med 2006; 3: e291. doi:10.1371/journal.pmed.0030291.
13. Murali NS, Murali HR, Auethavekiat P, Erwin PJ, Mandrekar JN, Manek NJ, Ghosh AK. Impact of FUTON and NAA bias on visibility of research. Mayo Clinic Proceedings 2004; 79: 1001-1006.
14. Liu SV. Hwang's retracted publication still contributes to Science's impact factor. Sci Ethics 2007; 2: 44-45.
15. Satyanarayana K. JAMA, NEJM and beyond - journal editing in the new millennium. Curr Sci 2000; 78: 225-227.
16. Moed HF, Burger WJM, Frankfort JG, Van Raan AFJ. The application of bibliometric indicators: important field- and time-dependent factors to be considered. Scientometrics 1985; 8: 177-203.
17. Nakayama T, Fukui T. Comparison between impact factors and citations in Evidence-Based Practice Guidelines 2005.
18. Research Assessment Exercise Team. RAE2008: panel criteria and working methods. 2006. Available at http://www.rae.ac.uk/pubs/2006/01/.
19. Narin F, Pinski G, Gee HH. Structure of the biomedical literature. J Am Soc Inf Sci 1976; 27: 25-45.
20. Resnick P. "Implications of Emerging Recommender and Reputation Systems," in Committee on Electronic, Scientific, Technical, and Medical Journal Publishing (ed.), Electronic Scientific, Technical and Medical Journal Publishing and its Implications: Report of a Symposium, Washington, 2004; pp. 49­50.
21. Dellavalle RP, Schilling LM, Rodriguez MA, Van de Sompel H, Bollen J. Refining dermatology impact factors using PageRank. J Am Acad Dermatol 2007; 57(1): 116-119.
22. Satyanarayana K. Time for 'Publish in India' movement. Indian J Med Res 2004; 119: vii-ix.
23. Satyanarayana K. India & China: time to catch up. Indian J Med Res 2006; 124: 597-600.
24. Satyanarayana K, Sharma A. The IJMR hits a new high. Indian J Med Res 2008; 128: 97-100.
25. Hirsch JE. An index to quantify an individual's scientific research output. PNAS 2005; 102: 16569­16572.
26. World Association of Medical Editors. Available at http://www.wame.org/resources/policies#impact.
27. Banks MA, Dellavalle R. Emerging alternatives to the impact factor. UC San Francisco, 2008. Retrieved from http://escholarship.org/uc/item/3dp3t4jz.
28. Smith AG. A tale of two web spaces: comparing sites using web impact factors. J Documentation, 1999; 55(5): 577­592.
29. Thelwall M. Results from a web impact factor crawler. J Documentation, 2001; 57(2): 177­191.
30. Meho LI. Which citation database: Web of Science, Scopus, or Google Scholar? Special Libraries Association 2009 Annual Conference. http://units.sla.org/division/dst/Annual%20Conference/2009_Washington/Citation-Databases.pdf.
31. Bollen J, Rodriguez MA, Van de Sompel H. Journal status. Scientometrics December 2006. Available from http://arxiv.org/PS_cache/cs/pdf/0601/0601030v1.pdf.
32. Bergstrom CT, West JD, Wiseman MA. The Eigenfactor Metrics. J Neurosci 2008; 28: 11433-11434.
