The Impact of the Impact Factor
by Wim D’Haeze, Ph.D.
Approximately thirty years ago, the Institute for Scientific Information® (ISI®) introduced a new parameter called the journal impact factor. The factor was intended to measure the influence of a particular journal in a specific discipline or field of study while normalizing differences between established and more recent journal titles. Originally, the impact factor served as a parameter for comparatively evaluating major and minor journals in a particular discipline, and as a criterion for welcoming or rejecting new journals into the ISI® database, or for re-evaluating journals already listed. Thus, the impact factor was introduced primarily to enable the ISI® to rank (scientific) journals that significantly influence the international research community in a particular field of study.
The impact factor is a metric value. It is an arithmetic measure of how much the articles published in a specific journal have influenced subsequent work in their field during the second and third years after their appearance, based on the frequency with which other researchers cite them. The impact factor of a journal for a given year is calculated by dividing the number of citations made in that year to items the journal published during the previous two years by the number of source items the journal published during those two years. In other words, the 2003 impact factor of journal X is the number of 2003 citations to articles that journal X published in 2001 and 2002, divided by the total number of source articles journal X published in 2001 and 2002. Thus, the impact factor is indicative of the overall value of a particular journal, supposing that a publication’s influence on the scholarly community correlates with the frequency with which researchers in a given field cite the publications appearing in that journal.
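The arithmetic described above can be sketched in a few lines of Python. The figures used here are hypothetical and serve only to illustrate the calculation, not to describe any real journal.

```python
def impact_factor(citations_this_year: int, source_items_prev_two_years: int) -> float:
    """Impact factor for year Y: citations made in year Y to items the
    journal published in years Y-1 and Y-2, divided by the number of
    source items the journal published in those two years."""
    return citations_this_year / source_items_prev_two_years

# Hypothetical journal X in 2003: 1,200 citations in 2003 to articles
# published in 2001 and 2002, which together comprised 400 articles.
print(impact_factor(1200, 400))  # 3.0
```

The same figures show why the metric fluctuates for small journals: a handful of extra citations to a journal with few source items shifts the quotient far more than it would for a large journal.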
A careful rereading of the definition of the impact factor easily demonstrates that it is rather variable. The absolute value is influenced by a variety of sociological and statistical factors; some illustrative examples follow. Journals covering fundamental or general subject areas have higher impact factors than specialized and applied ones. This variation may be so significant that the impact factor of the top journal in one subject area is lower than that of the lowest-ranked journal in another. The number of authors on a paper (on average two in the social sciences and over five in the life sciences) also influences the impact factor, because authors tend to cite their own publications. The publication type matters as well: journals that rapidly publish relatively short papers (such as Letters, Reports, and Brief Communications) tend to have higher impact factors than those that publish full-length articles with longer production times. Interestingly, the impact factor of a journal that publishes roughly 35 papers per annum may vary by approximately 40% from one year to the next, whereas this variation decreases to (still) approximately 15% for a journal that publishes up to 150 papers annually. Clearly, the impact factor should be treated as a relative value for comparing journals within a particular field of research, and, roughly speaking, two journals in the same field whose impact factors differ by less than 25% may be considered to belong to the same rank.
The introduction of the journal impact factor, and its subsequent (inappropriate) interpretation by the broad scientific community, has had major consequences for authors, editors, and publishers. For the obvious reasons outlined below, authors are pressured to publish as much as possible in ‘top’ journals (i.e., journals with the highest impact factors). What matters in selecting a journal is often not whether the scope of the research corresponds to that of the journal, but the absolute value of its impact factor. Because these top journals often require data to be new and unprecedented, conclusions to be based on a plethora of experiments, questions to address biological systems that attract the majority of the scientific community, and reports to be written so as to occupy the least possible space in the journal, the outcome is often a scientific report that leaves a taste of competition behind and that is hard to follow even for specialists in the field. Thinking in terms of impact factors, editors are of course tantalized to pull (or keep) ‘their’ journal at the top of the list. Manuscripts that do not fit the journal (or whose publication might negatively influence the journal’s impact factor) are rejected without additional constructive scientific comments for the authors. Editors will change the content of the journal, the style of the articles published, and the major scientific scope of the journal, not necessarily to better serve the demanding scientific community, but to aim at a considerably higher impact factor. Lastly, publishers often use the journal’s impact factor as a commercial tool which, in the case of high-impact-factor journals, all but forces institutional libraries and individual laboratories to subscribe, giving the publishers additional economic leverage to be used, for instance, when new subscription rates are negotiated.
The journal impact factor becomes even more problematic when professionals use it as an indicator of the quality of an individual scientist, research group, and/or institution. Some department chairs require researchers who apply for a (tenured) faculty position to have published in journals with an impact factor above 5; if not, the applicants may miss the train. Directors of biotechnology institutions tackling topics as varied as bacterial and plant genetics, computational genomics, and the unraveling of pathways leading to human diseases at the molecular level may only consider publications in journals with an impact factor higher than 5. Administrators of funding agencies may allocate a grant to scientist A, who has one second-author publication in a top journal, but not to scientist B, who has five first-author papers in lower-impact-factor journals, even when scientist A merely performed experiments designed by the principal investigator, who wrote and submitted the manuscript in question, while scientist B performed the research independently. Such decisions are foolhardy, considering the definition, aim, and annual fluctuations of the impact factor outlined above.
Unfortunately, these real examples are not uncommon in today’s scientific communities, and they are clear abuses of the journal impact factor. In my opinion, there is no correlation between the impact factor of a journal and the quality of a particular publication, or the quality of a particular scientist publishing in a specific journal. Ideally, evaluating the quality of a given publication or a given scientist should not require knowing who wrote the article, in which journal the manuscript was published, or in which journals the scientist published his or her observations. Quality is inherent to the article or the scientist. The major consequences of using the journal impact factor as a measure of quality are that authors become obsessed with particular top journals; that directors, department chairs, and administrators become obsessed with researchers who have published in top journals; that scientists fuel competition with one another rather than collaboration; and, most importantly, that the journal in which the research appeared, rather than the scientific message reported in the publication, becomes pivotal.
Thinking more deeply about the journal impact factor and its common misuse leaves us with many more questions. Why do scientists accept being judged by the impact factors of the journals in which they published their work? What is the relevance of the impact factor in today’s e-library and e-publishing environment? What is the strength of a report published in a top journal if it has hardly influenced the scientists of its specialty or is hardly cited? How qualified is scientist A, who published one article in a top journal, if he or she is unable to interpret observations in a self-critical manner? Is a publication ‘bad’ when it appears in a journal with an impact factor of 4.850 in year X, and another publication ‘good’ when it appears in the same journal with an impact factor of 5.215 the next year? It would be healthier for all of us to forget about the journal impact factor as a mark of quality and reserve it as a parameter for internal journal ranking at the ISI®. Completely independently of any impact factor, authors should critically scrutinize, on the one hand, the scientific contributions they would like to report to the scientific community and, on the other hand, the scope and aims of several potentially suitable journals. The authors should then strive for a perfect match between the scientific story to be considered for publication and the journal chosen to publish it. The quality of this match should be analyzed in a purely scientific manner during an open-minded interaction between authors, editors, and peer reviewers. This mature attitude would undoubtedly benefit all participants.
Wim D’Haeze is a bio-engineer in chemistry and received his Ph.D. in biotechnology from Ghent University (Belgium) in June 2001. His doctoral thesis work focused on understanding several early steps of the symbiotic interaction between the Gram-negative soil bacterium Azorhizobium caulinodans and the tropical legume Sesbania rostrata. These initial steps require the production of bacterial compounds, including signal molecules and complex surface polysaccharides, that are pivotal for invasion of the plant tissue and the formation of new organ tissues. During the three subsequent years, he performed post-doctoral research at the Complex Carbohydrate Research Center at the University of Georgia (Athens, GA), dealing in part with the structural and functional characterization of azorhizobial extracellular polysaccharides. Currently, Dr. D’Haeze is employed at The Scripps Research Institute (La Jolla, CA) as a science writer and focuses on deciphering the molecular basis of devastating neurodegenerative diseases, such as Alzheimer’s and Parkinson’s diseases, in order to screen for and develop new therapeutics.