Developing Metrics for the Evaluation of Individual Researchers – Should Bibliometricians Be Left to Their Own Devices?

So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.

That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!

The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.

I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:

Bibliometrics are increasingly being used in ways that go beyond their design. Bibliometricians, in turn, are asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since bibliometrics began. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.

This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are aware of this, too, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics” is one indication of such awareness. The recent sessions at ISSI in Vienna and STI2013 in Berlin on which Wouters et al. report are further indications that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.
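
For readers who haven’t worked with such measures, it may help to see how simple the best-known individual-level metric really is. Here is a minimal sketch of Hirsch’s h-index in Python; the citation counts are invented for illustration.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with six papers:
print(h_index([42, 17, 9, 6, 3, 1]))  # -> 4 (four papers each cited at least 4 times)
```

That an entire research career can be collapsed into one such number is precisely what makes these metrics so attractive to evaluators, and so worrying.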

That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.

It’s not so much that I want to blame them for failing to think through other possible uses of their metrics — holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us into this mess in the first place.

Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.

Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.

With the development of individual-level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world, so that they end up steering us rather than us driving them. We don’t want what’s happened with the car to happen with bibliometrics. What we want is to stop at the level at which metrics of individual researchers maximize those researchers’ power and creativity. Once we standardize metrics, it becomes that much easier to institutionalize them.

It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Andy Stirling on why the precautionary principle matters | Science | guardian.co.uk

SPRU Professor Andy Stirling is beginning a series in The Guardian on the precautionary principle. Stirling’s first article paints an optimistic picture:

Far from the pessimistic caricature, precaution actually celebrates the full depth and potential for human agency in knowledge and innovation. Blinkered risk assessment ignores both positive and negative implications of uncertainty. Though politically inconvenient for some, precaution simply acknowledges this scope and choice. So, while mistaken rhetorical rejections of precaution add further poison to current political tensions around technology, precaution itself offers an antidote – one that is in the best traditions of rationality. By upholding both scientific rigour and democratic accountability under uncertainty, precaution offers a means to help reconcile these increasingly sundered Enlightenment cultures.

via Why the precautionary principle matters | Andy Stirling | Science | guardian.co.uk.

Stirling’s work on the precautionary principle is some of the best out there, and Adam Briggle and I cite him in our working paper on the topic. I look forward to reading the rest of Stirling’s series. Although I’m a critic of the Enlightenment, I don’t reject it wholesale. In fact, I think rational engagement with the thinkers of the Enlightenment — and some of its most interesting heirs, including Stirling and Steve Fuller, who’s a proponent of proaction over precaution — is important. So, stay tuned for more!

Postmodern Research Evaluation? | 1 of ?

This will be the first in a series of posts tagged ‘postmodern research evaluation’ — a series meant to be critical and normative, expressing my own subjective opinions on the question.

Before I launch into any definitions, take a look at this on ‘Snowball Metrics’. Reading only the first few pages should help orient you to where I am coming from. It’s a place from which I hope to prevent such an approach to metrics from snowballing — a good place, I think, for a snowball fight.

Read the opening pages of the snowball report. If you cannot see this as totalizing — in a very bad way — then we see things very differently. Still, I hope you read on, my friend. Perhaps I still have a chance to prevent the avalanche.

Nigel Warburton’s negative vision of what philosophy isn’t

Philosopher Nigel Warburton, of Philosophy Bites fame, has just resigned his academic post at the Open University to pursue other opportunities. The Philosopher’s Magazine conducts an extended interview with Warburton here. Much of what he reveals in this interview is both entertaining and, in my opinion, true.

But one aspect of the interview especially caught my attention. After offering several criticisms of academic philosophy today with which I’m in total agreement (in particular the tendency of hiring committees to hire clones of themselves rather than enhancing the diversity of the department), Warburton offers what he seems to view as the ultimate take-down of academic philosophy. I quote this section in full, below. If you’ve been paying any attention to this blog or our posts at CSID, you’ll understand why immediately.

He reserves particular venom for the REF, the Research Excellence Framework, a system of expert review which assesses research undertaken in UK higher education, which is then used to allocate future rounds of funding. A lot of it turns on the importance of research having a social, economic or cultural impact. It’s not exactly the sort of thing that philosophical reflection on, say, the nature of being qua being is likely to have. He leans into my recorder to make sure I get every word:

“One of the most disturbing things about academic philosophy today is the way that so many supposed gadflies and rebels in philosophy have just rolled over in the face of the REF – particularly by going along with the idea of measuring and quantifying impact,” he says, making inverted commas with his fingers, “a technical notion which was constructed for completely different disciplines. I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that? To think that funding in higher education in philosophy is going to be determined partly by people’s creative writing about how they have impact with their work. Just by entering into this you’ve compromised yourself as a philosopher. It’s not the kind of thing that Socrates did or that Hume did or that John Locke did. Locke may have had patrons, but he seemed to write what he thought rather than kowtowing to forces which are pushing on to us a certain vision, a certain view of what philosophical activities should be. Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”

Please tell us how you really feel, Dr. Warburton.

In the US, we are not subject to the REF. But we are subject to many, many managerial requirements, including, if we seek grant funding, the requirement that we account for the impact of our research. We are, of course, ‘free’ to opt out of this sort of requirement simply by not seeking grant funding. Universities in the UK, however, are not ‘free’ to opt out of the REF. So, are the only choices open to philosophers worthy of the name resistance or, as Warburton has chosen, removing oneself from the university?

I think not. My colleagues and I recently published an article in which we present a positive vision of academic philosophy today. A key aspect of our position is that the question of impact is itself a philosophical, not merely a technical, problem. Philosophers, in particular, should own impact rather than allowing it to be imposed on us by outside authorities. The question of impact is a case study in whether the sort of account of freedom as non-domination offered by Pettit can be instantiated in a policy context, in addition to being posited in political philosophy.

Being able to see impact as a philosophical question rests on being able to question the idea that the only sort of freedom worth having is freedom from interference. If philosophy matters to more than isolated individuals — even if connected by social media — then we have to realize that any philosophically rich conception of liberty must also include responsibility to others. Our notion of autonomy need not be reduced to the sort of non-interference that can only be guaranteed by separation (of the university from society, as Humboldt advocated, or of the philosopher from the university, as Warburton now suggests). Autonomy must be linked to accountability — and we philosophers should be able to tackle this problem without being called out as non-philosophers by someone who has chosen to opt out of this struggle.

Ross Mounce lays out easy steps towards open scholarship | Impact of Social Sciences

Excellent post with lots of good information here:

Easy steps towards open scholarship | Impact of Social Sciences.

There are some especially good thoughts about preprints.

Ross is right, I think, that using preprints is uncommon in the Humanities. For anyone interested in exploring the idea, I recommend the Social Epistemology Review and Reply Collective. Aside from being one of the few places to publish preprints in the Humanities, the SERRC preprints section also allows for extended responses to posted preprints, such as this one. The one major drawback (as Ross points out about sites such as Academia.edu) is that the SERRC doesn’t really archive preprints in the way that, say, a library would. Of course, if you happen to have an institutional repository, you can use that, as well.

Another site worth mentioning in this context is peerevaluation.org. I posted the same preprint on my page there. Two features of the site stand out. The first is its metrics, such as the ‘trust’ function. Similar to Facebook ‘likes’, but much richer, the ‘trust’ function allows users to build a visible reputation as a ‘trusted’ reviewer. What’s that, you ask? As a reviewer? Yes, and this is the second notable feature: peerevaluation.org allows one to request reviews of posted papers, and it keeps track of who reviewed what. In theory, this could allow for something like ‘bottom-up’ peer review by genuine peers. One drawback is that not enough people actually participate as reviewers. I encourage you to visit the site and serve as a reviewer to explore the possibilities.
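
To make the ‘bottom-up’ idea a bit more concrete, here is a toy sketch of what review tracking plus a trust tally might look like. To be clear: the class and names below are my own invention for illustration, not peerevaluation.org’s actual design.

```python
from collections import defaultdict

class ReviewLedger:
    """Toy model of 'bottom-up' peer review: record who reviewed
    which paper, and let readers endorse ('trust') reviewers."""

    def __init__(self):
        self.reviews = defaultdict(list)  # paper_id -> list of reviewers
        self.trust = defaultdict(int)     # reviewer -> endorsement count

    def add_review(self, paper_id, reviewer):
        self.reviews[paper_id].append(reviewer)

    def endorse(self, reviewer):
        self.trust[reviewer] += 1

    def reputation(self, reviewer):
        # Reputation pairs how much one reviews with how trusted one is.
        done = sum(reviewer in rs for rs in self.reviews.values())
        return {"reviews": done, "trust": self.trust[reviewer]}

ledger = ReviewLedger()
ledger.add_review("preprint-001", "alice")  # hypothetical IDs
ledger.endorse("alice")
print(ledger.reputation("alice"))  # {'reviews': 1, 'trust': 1}
```

The point of the sketch is that the reputation lives with the reviewer rather than with a journal, which is what makes the review genuinely ‘bottom-up’.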

As a humanist who would like to take advantage of preprints, both to improve my own work and for the citation advantage Ross notes, it’s difficult not to envy the situation in Physics and related areas (with arXiv). But how does such a tradition start? There are places one can use to publish preprints in the humanities. We need to start using them.

Quick thoughts on Challenges of Measuring Social Impact Using Altmetrics

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

This and other claims worth discussing appear in this hot-off-the-presses (do we need another metaphor now?) article from Mike Taylor (@herrison):

The Challenges of Measuring Social Impact Using Altmetrics – Research Trends.

In response to the quote above, my own proposal would be to incorporate altmetrics into an overall narrative of impact. In other words, instead of something like a ‘separate’ altmetric report, I’d rather have a way of appealing to altmetrics as one form of empirical evidence to back up claims of impact.
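
Here is a rough sketch of what I mean: pull the numbers and fold them into a sentence of evidence, rather than presenting them as a free-standing report. It assumes Altmetric’s public v1 API and a couple of its response fields (cited_by_posts_count, cited_by_tweeters_count); check the current documentation before relying on either, and note that the DOI in the example is just a placeholder.

```python
import requests

def impact_evidence(doi):
    """Fold altmetric counts for a DOI into one sentence of evidence
    for an impact narrative (not a stand-alone metric report)."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code != 200:
        return f"No altmetric trace found for {doi}."
    data = resp.json()
    return (f"'{data.get('title', doi)}' was picked up in "
            f"{data.get('cited_by_posts_count', 0)} online posts, "
            f"including {data.get('cited_by_tweeters_count', 0)} tweets: "
            "evidence of reach, to be interpreted, not a verdict on impact.")

print(impact_evidence("10.9999/placeholder-doi"))  # placeholder DOI
```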

Although it is tempting to equate social reach (i.e., getting research into the hands of the public), it is not the same as measuring social impact. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics, at the moment the discipline lacks an approach towards understanding the wider context necessary to understand both the social reach and impact of scholarly work.

Good point about the difference between ‘social reach’ and ‘social impact’. My suggestion for developing an approach to understanding the link between social reach and social impact would be something like this: social reach provides evidence of a sort of interaction. What’s needed to demonstrate social impact, however, is evidence of behavior change. Even if one cannot establish a direct causal relation between sharing and behavior change, demonstrating that one’s research ‘reached’ someone who then changed her behavior in ways consistent with what one’s paper says would generate a plausible narrative of impact.
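
Schematically, the narrative structure I have in mind pairs the two kinds of evidence, something like the sketch below. The field names are illustrative, my own invention rather than any standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactClaim:
    """A claim of social impact pairs reach events (sharing) with
    evidence of behavior change; neither alone carries the narrative."""
    paper: str
    reach_events: list = field(default_factory=list)      # e.g., shares, downloads
    behavior_changes: list = field(default_factory=list)  # e.g., a revised protocol

    def plausible(self):
        # Reach alone is interaction, not impact; the claim needs both.
        return bool(self.reach_events) and bool(self.behavior_changes)

claim = ImpactClaim(
    paper="10.9999/placeholder-doi",  # hypothetical DOI
    reach_events=["shared by a clinician on Twitter"],
    behavior_changes=["clinic revised its intake protocol"],
)
print(claim.plausible())  # True: reach plus change supports a narrative
```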


Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope this would provide insights into understanding social impact – there are a number of essential steps that are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

My response to this may be predictable, but here goes anyway. I am all for improving the technology. Using Natural Language Processing, as Taylor suggests a bit later, sounds promising. But I think there’s a fundamental problem with comparing altmetrics to bibliometrics and trying to bring the former up to the standards of rigor of the latter. The problem is that this view privileges technology and technical rigor over judgment. Look, let’s make altmetrics as rigorous as we can. But please, let’s not make the mistake of thinking we’ve got the question of impact resolved once altmetrics have achieved the same sort of methodological rigor as bibliometrics! The question of impact can be answered better with help from technology. But to assume that technology can answer the question on its own (as if it existed independently of human beings, or we from it) is to fall into the trap of the technological fix.