How journals like Nature, Cell and Science are damaging science | Randy Schekman | Comment is free | The Guardian

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

via How journals like Nature, Cell and Science are damaging science | Randy Schekman | Comment is free | The Guardian.

Thanks to my colleague Diana Hicks for pointing this out to me.

The last line of the quotation strikes me as the most interesting point, one that deserves further development. The steering effect of metrics is well known (Weingart 2005). There’s growing resistance to the Journal Impact Factor. Although the persuasive comparison between researchers and bankers is itself over the top, the last line suggests — at least to me — a better way to critique the reliance on the Journal Impact Factor, as well as other attempts to measure research. It’s a sort of reverse Kant with an Illichian flavor, which I will formulate as a principle here, provided that everyone promises to keep in mind my attitude toward principles.

Here is one formulation of the principle: Measure researchers only in ways that recognize them as autonomous agents, never merely as means to other ends.

Here is another: Never treat measures as ends in themselves.

Once measures, which are instruments to the core, take on a life of their own, we have crossed the line that Illich calls the second watershed. That the Journal Impact Factor has in fact crossed that line is the claim made in the quote above, though not in Illich's language. The question we should be asking is how researchers can manage measures, rather than how we can measure researchers in order to manage them.
_______________________________________________________

Peter Weingart, "Impact of bibliometrics upon the science system: Inadvertent consequences?" Scientometrics 62, no. 1 (2005): 117–131.

The Rise and Fall of PLOS ONE’s Impact Factor (2012 = 3.730)

This should be read along with Paul Wouters's post (below). There is a lot of confusion surrounding the Journal Impact Factor, I think.
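Some of that confusion dissolves once you see that the two-year Journal Impact Factor is just a ratio. Here is a minimal sketch of the standard calculation; the function name and the figures are made up for illustration, not any journal's real counts:

```python
# Minimal sketch of the standard two-year Journal Impact Factor.
# The function name and the numbers below are illustrative
# assumptions, not anyone's real citation data.

def impact_factor(citations, citable_items):
    """JIF for year Y: citations received in year Y to items
    published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations / citable_items

# A made-up journal: 300 citations in the census year to 100
# citable items from the previous two years.
print(impact_factor(300, 100))  # 3.0
```

Note that this is a journal-level average: because citation counts are highly skewed, it says very little about how often any individual article in the journal is cited, which is precisely the misuse DORA objects to.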

The evidence on the Journal Impact Factor

With the release of the new Journal Impact Factors, everyone should read this blog post by Paul Wouters at "The Citation Culture."

The Citation Culture

The San Francisco Declaration on Research Assessment (DORA; see our most recent blogpost) focuses on the Journal Impact Factor, published in the Web of Science by Thomson Reuters. It is a strong plea to base research assessments of individual researchers, research groups and submitted grant proposals not on journal metrics but on article-based metrics combined with peer review. DORA cites a few scientometric studies to bolster this argument. So what is the evidence we have about the JIF?

In the 1990s, the Norwegian researcher Per Seglen, based at our sister institute the Institute for Studies in Higher Education and Research (NIFU) in Oslo, and a number of CWTS researchers (in particular Henk Moed and Thed van Leeuwen) developed a systematic critique of the JIF, of its validity as well as of the way it is calculated (Moed & Van Leeuwen, 1996; Moed & Van Leeuwen, 1995; Seglen, 1997). This line of research…
