Quick thoughts on “The Challenges of Measuring Social Impact Using Altmetrics”

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

This and other claims worth discussing appear in this hot-off-the-presses (do we need another metaphor now?) article from Mike Taylor (@herrison):

The Challenges of Measuring Social Impact Using Altmetrics – Research Trends.

In response to the quote above, my own proposal would be to incorporate altmetrics into an overall narrative of impact. In other words, rather than produce something like a ‘separate’ altmetric report, I’d prefer a way of appealing to altmetrics as one form of empirical evidence to back up claims of impact.

Although it is tempting to equate social reach (i.e., getting research into the hands of the public) with measuring social impact, the two are not the same. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation, events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics does, at the moment the discipline lacks an approach to the wider context necessary to understand both the social reach and the impact of scholarly work.

Good point about the difference between ‘social reach’ and ‘social impact’. My suggestion for developing an approach to understanding the link between the two would be something like this: social reach provides evidence of a certain kind of interaction. What’s needed to demonstrate social impact, however, is evidence of behavior change. Even if one cannot establish a direct causal relation between sharing and behavior change, demonstrating that one’s research ‘reached’ someone who then changed her behavior in ways consistent with what the paper says would generate a plausible narrative of impact.

Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope this would provide insights into understanding social impact – there are a number of essential steps that are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

My response to this may be predictable, but here goes anyway. I am all for improving the technology. Using Natural Language Processing, as Taylor suggests a bit later, sounds promising. But I think there’s a fundamental problem with comparing altmetrics to bibliometrics and trying to bring the former up to the standards of rigor of the latter: this view privileges technology and technical rigor over judgment. Look, let’s make altmetrics as rigorous as we can. But please, let’s not make the mistake of thinking we’ve got the question of impact resolved once altmetrics has achieved the same sort of methodological rigor as bibliometrics! The question of impact can be answered better with help from technology. But to assume that technology can answer the question on its own (as if it existed independently of human beings, or we from it) is to fall into the trap of the technological fix.