These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.
Thanks to my colleague Diana Hicks for pointing this out to me.
The last line of the quotation strikes me as the most interesting point, one that deserves further development. The steering effect of metrics is well known (Weingart 2005). There’s growing resistance to the Journal Impact Factor. Although the persuasive comparison between researchers and bankers is itself over the top, the last line suggests — at least to me — a better way to critique the reliance on the Journal Impact Factor, as well as other attempts to measure research. It’s a sort of reverse Kant with an Illichian flavor, which I will formulate as a principle here, provided that everyone promises to keep in mind my attitude toward principles.
Here is one formulation of the principle: Measure researchers only in ways that recognize them as autonomous agents, never merely as means to other ends.
Here is another: Never treat measures as ends in themselves.
Once measures, which are instruments to the core, take on a life of their own, we have crossed the line that Illich calls the second watershed. That the Journal Impact Factor has in fact crossed that line is the claim made in the quote, above, though not using Illich’s language. The question we should be asking is how researchers can manage measures, rather than how we can measure researchers in order to manage them.
Peter Weingart, "Impact of bibliometrics upon the science system: Inadvertent consequences?" Scientometrics 62, no. 1 (2005): 117–131.
To what degree is quantity being substituted for quality in today’s research assessment exercises? This strikes me as a symptom of the overvaluation of efficiency.
Higgs said he became "an embarrassment to the department when they did research assessment exercises". A message would go around the department saying: "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.'"
Thanks to Lance Weihmuller for pointing me to the article.
The contest was an opportunity for members of the scientific community to showcase the broader impacts of the biological sciences, including informing natural resources management, addressing climate change, and advancing foundational knowledge. The photos will be used to help the public and policymakers to better understand the value of biological research and education.
This is definitely worth a look, whether you’re into the idea of post-publication peer review or not.
The Consortium of Social Science Associations held its Annual Colloquium on Social And Behavioral Sciences and Public Policy earlier this week. Amongst the speakers was Acting National Science Foundation (NSF) Director Cora Marrett.* As part of her remarks, she addressed how the Foundation was implementing the Coburn Amendment, which added additional criteria to funding political science research projects through NSF.
The first batch of reviews subject to these new requirements took place in early 2013. In addition to the usual criteria of intellectual merit and broader impacts, the reviewers looked at the 'most meritorious' proposals and examined how they contribute to economic development and/or national security. For the reviews scheduled for early 2014, all three 'criteria' will be reviewed at once.
Since researchers don't like to be told what to do, they aren't happy. But Marrett asserted in her remarks that this additional review will not really affect the…
Francis Remedios has organized a session at the 4S Annual Meeting in which he, David Budtz Pedersen, and I will serve as critics of Steve Fuller's book Preparing for Life in Humanity 2.0. We'll be live tweeting as much as possible during the session, using the hashtag #humanity2 for those who want to follow. There is also a more general #4s2013 that should be interesting to follow for the next few days.
Here are the abstracts for our talks:
Humanity 2.0, Synthetic Biology, and Risk Assessment
Francis Remedios, Social Epistemology Editorial Board member
As a follow-up to Fuller's Humanity 2.0, which is concerned with the impact of the biosciences and nanosciences on humanity, Preparing for Life in Humanity 2.0 provides a more detailed analysis. The possible futures discussed are the ecological, the biomedical, and the cybernetic. In The Proactionary Imperative, Fuller and Lipinska aver that the proactionary principle, which embraces risk taking, is essential to the human condition and should be favored over the precautionary principle, which counsels risk aversion. In terms of policy and ethics, which version of risk assessment should be used for synthetic biology, a branch of biotechnology? With synthetic biology, life is created from inanimate material; it has been dubbed life 2.0. Should one principle be favored over the other?