Friday, July 12, 2013

Journal impact factors: what are they good for?



The ISI journal impact factors for 2012 were released last month. Apparently 66 journals were banned from the list for trying to manipulate their impact factors through self-citations and “citation stacking.”

There’s a heated debate going on about impact factors: their meaning, use, misuse, and so on. Science has an editorial discussing impact factor distortions. One academic association, the American Society for Cell Biology, has put together the San Francisco Declaration on Research Assessment (DORA), with 8,500+ signers so far, highlighting the problems caused by the abuse of journal impact factors and related measures. Problems with impact factors have in turn spurred alternative metrics; see, for example, altmetrics.
I don’t really have a problem with impact factors per se. They are one measure, among many, of journal quality, and yes, I think some journals are indeed better than others. But using impact factors to assess individual researchers quickly leads to problems. The impact factor is essentially a mean: total citations to a journal’s recent articles divided by the number of citable articles. It therefore treats articles within a journal as homogeneous, even though within-journal citations are radically skewed. A few highly cited pieces essentially prop up the vast majority of articles in any given journal. Article-level citations might be a better measure, though they are also highly imperfect. If you want to assess research quality: read the article itself.
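To make the skew point concrete, here is a toy sketch with made-up citation counts (not real data for any journal) showing how a handful of highly cited articles can pull a journal’s impact factor far above what a typical article in that journal actually receives:

# Toy illustration (hypothetical numbers): the impact factor is a mean,
# so a few blockbuster articles dominate it.
from statistics import mean, median

# Made-up citation counts for 20 articles a journal published over the
# two-year impact-factor window: most are rarely cited, a few are cited heavily.
citations = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6, 8, 40, 60, 95]

impact_factor = mean(citations)    # total citations / citable articles
typical_article = median(citations)

print(f"Impact factor (mean): {impact_factor:.1f}")    # 11.9
print(f"Median article:       {typical_article:.1f}")  # 2.5

In this invented example the journal-level number is nearly five times the median article’s citation count, which is exactly why the impact factor tells you little about any particular article in the journal.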
On the whole, article effects trump journal effects (as Joel Baum’s article also points out, see here). After all, we all have a favorite article or two published in some obscure journal no one has ever heard of. Just do interesting work and send it to journals that you read. OK, that’s a bit glib. I know that all kinds of big issues hang in the balance when assessing and categorizing research: tenure and promotion, resource flows, etc. Assessment and categorization are inevitable.
A focus on impact factors and related metrics can quickly lead to tiresome discussions about which journal is best, whether that one is better than this one, what the “A” journals are, etc. Boring. I presented at a few universities in the UK a few years ago (the UK had just gone through its Research Assessment Exercise), and many of my interactions with young scholars devolved into debating whether a given journal is an “A” versus an “A-” versus a “B.” Our lunch conversations weren’t about ideas, which was disappointing, though also quite understandable, since young scholars naturally want to succeed in their careers.
Hopefully enlightened departments and schools will avoid the above traps and focus on the research itself. The problems with impact factors are well known by now, and hopefully such metrics are used sparingly in any form of evaluation, and only as one imprecise data point among many others.
[Thanks to Joel Baum (U of Toronto) for sending me some of the above links.]
