Quick thoughts on Challenges of Measuring Social Impact Using Altmetrics

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

This and other claims worth discussing appear in this hot-off-the-presses (do we need another metaphor now?) article from Mike Taylor (@herrison):

The Challenges of Measuring Social Impact Using Altmetrics – Research Trends.

In response to the quote above, my own proposal would be to incorporate altmetrics into an overall narrative of impact. In other words, rather than a ‘separate’ altmetric report, I’d prefer a way of appealing to altmetrics as one form of empirical evidence to back up claims of impact.

Although it is tempting to equate social reach (i.e., getting research into the hands of the public) with social impact, measuring reach is not the same as measuring social impact. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics, at the moment the discipline lacks an approach to the wider context necessary to understand both the social reach and the impact of scholarly work.

Good point about the difference between ‘social reach’ and ‘social impact’. My suggestion for developing an approach to understanding the link between social reach and social impact would be something like this: social reach provides evidence of a sort of interaction. What’s needed to demonstrate social impact, however, is evidence of behavior change. Even if one cannot establish a direct causal relation between sharing and behavior change, demonstrating that one’s research ‘reached’ someone who then changed her behavior in ways consistent with what one’s paper says would generate a plausible narrative of impact.


Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope this would provide insights into understanding social impact – there are a number of essential steps that are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

My response to this may be predictable, but here goes anyway. I am all for improving the technology. Using Natural Language Processing, as Taylor suggests a bit later, sounds promising. But I think there’s a fundamental problem with comparing altmetrics to bibliometrics and trying to bring the former up to the standards of rigor of the latter. The problem is that this view privileges technology and technical rigor over judgment. Look, let’s make altmetrics as rigorous as we can. But please, let’s not make the mistake of thinking we’ve got the question of impact resolved once altmetrics has achieved the same sort of methodological rigor as bibliometrics! The question of impact can be answered better with help from technology. But to assume that technology can answer the question on its own (as if it existed independently of human beings, or we from it) is to fall into the trap of the technological fix.
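To make Taylor’s NLP suggestion concrete, here is a minimal sketch of the kind of triage it points toward: separating bare propagation events (link-passing) from mentions that show substantive engagement. Everything here is hypothetical; the sample mentions, keyword heuristics, and function names are invented for illustration and drawn from no real altmetrics pipeline.

```python
# Illustrative only: a naive keyword heuristic standing in for a real NLP
# model. Sample mentions and cue lists are invented.

BARE_SHARE_MARKERS = ("rt @", "via @")   # signs of mere link-passing
ENGAGEMENT_CUES = ("because", "however", "disagree", "method", "evidence")

def classify_mention(text: str) -> str:
    """Crude triage: did this mention merely propagate the link,
    or does it add commentary suggesting genuine engagement?"""
    lowered = text.lower()
    if any(marker in lowered for marker in BARE_SHARE_MARKERS):
        return "propagation"
    if any(cue in lowered for cue in ENGAGEMENT_CUES):
        return "engagement"
    return "unclear"

mentions = [
    "RT @journal: New paper on measuring social impact https://doi.org/...",
    "I disagree with the method here, but the evidence on reach is solid.",
]
for mention in mentions:
    print(classify_mention(mention), "|", mention)
```

Even a toy filter like this makes the conceptual point: counting propagation events is easy, while judging engagement, let alone impact, requires interpretation that no classifier supplies on its own.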

On Learning from Peer Review | Extending ‘Peers’ to Include Non-Academics

Absent from the many analyses and discussions of scientific peer review are two intangible but very important byproducts: 1) feedback to the applicant and 2) exposure of the reviewers to new hypotheses, techniques, and approaches. Both of these phenomena have a virtual mentoring effect that helps move science forward. Such learning can occur as a consequence of both manuscript review and grant application review, but the review of grant applications, by its very nature, is more iterative and impacts the direction in which research moves very early in the investigation.

Opinion: Learning from Peer Review | The Scientist Magazine®.

There are at least two funding agencies that recognize this phenomenon in the actual design of their peer review processes, so they deserve mention. The idea is to include non-academics as peer reviewers precisely to effect the sort of co-production of knowledge the article above suggests.

The first is STW, the Dutch Technology Foundation. I outline their peer review process in this article, available Open Access.

The second is the US Congressionally Directed Medical Research Program. Details of their peer review process are available on their website here.

San Francisco Declaration on Research Assessment — Well done, DORA

Anyone interested in research assessment should read this with care.

DORA.

It’s been presented in the media as an insurrection against the use of the Journal Impact Factor — and the Declaration certainly does … er … declare that the JIF shouldn’t be used to assess individual researchers or individual research articles. But this soundbite shouldn’t be used to characterize the totality of DORA, which is much broader than that.

Honestly, it took me a few days to go read it. After all, it’s uncontroversial in my mind that the JIF shouldn’t be used in this way. So, an insurrection against it didn’t strike me as all that interesting. I’m all for the use of altmetrics and — obviously, given our recent Nature correspondence (free to read here) — other inventive ways to tell the story of our impact.

But, and I cannot stress this enough, everyone should give DORA a careful read. I’m against jumping uncritically on the bandwagon in favor of Openness in all its forms. But I could find little reason not to sign, and myriad reasons to do so.

Well done, DORA.

‘Pure hype of pure research helps no one’ says Sarewitz; what this says about freedom

I tend to agree with a lot of what Dan Sarewitz argues here:

Pure hype of pure research helps no one : Nature News & Comment.

But I also want to suggest that there’s an argument to be made against the High Quality Research Act that goes beyond Sarewitz’s claim that it helps no one.

To be fair, that’s just the headline. Sarewitz also claims something I think is a bit more controversial — that the HQRA is really nothing to get too worried about. Not only does it help no one; it also doesn’t hurt anyone.

This strikes me as mistaken. I’ll try to articulate why in terms of the distinction between negative and positive freedom I’ve been exploring. Here goes.

First, I agree that the HQRA helps no one.  But it’s not just that the HQRA is redundant — though this is certainly true. It’s also that it doesn’t allow us to do anything more to demonstrate our accountability, as I think the Broader Impacts Criterion does. In other words, it doesn’t increase anyone’s positive freedom.

Second, it actually decreases our negative freedom. By requiring NSF to re-certify what the merit review process already certifies (at least when it’s working as designed), this ‘added layer of accountability’ just increases the kind of bureaucratic red tape we should be trying to decrease if we’re interested in an efficient government. This makes about as much sense as the Florida Blue Ribbon Task Force’s suggestion to charge more for classes in majors that supposedly won’t result in better jobs for graduates. Majors that result in higher-paying jobs should actually be in greater demand, and so should cost more, not less. But not according to the Blue Ribbon Task Force (see pp. 22-23).

Finally, I think the HQRA might be a case study in how to reconcile notions of positive and negative freedom — or at least how to think of both ideas of liberty as possibly working together. It’s sort of a test. Sometimes, a policy that might increase our positive freedom can be seen as decreasing our negative freedom. I think the NSF’s Broader Impacts Criterion is a case in point. Yes, it places an additional burden on researchers, and so in this sense it is a limitation on their negative freedom. But it also increases their positive freedom, so a trade-off is possible. The HQRA, on the other hand, decreases our negative freedom without also increasing our positive freedom. Any policy that decreases our negative freedom without increasing our positive freedom in return is highly questionable — or so I am suggesting.

Will The High Quality Research Act Diminish Our Collective Cognitive Dissonance?

I plan to follow up on David’s post and Dan’s argument in Nature soon. Until then, enjoy this!

Pasco Phronesis (David Bruggeman)

The High Quality Research Act is a draft bill from Representative Lamar Smith, Chair of the House Science, Space and Technology Committee. Still not officially introduced, it has prompted a fair amount of teeth gnashing and garment rending over what it might mean. The bill would require the Director of the National Science Foundation (NSF) to certify that the research it funds would serve the national interests, be of the highest quality, and not be duplicative of other research projects being funded by the federal government. The bill would also prompt a study to see how such requirements could be implemented in other federal science agencies.

There’s a lot there to explore, including how the bill fits into recent inquiries about specific research grants made by the National Institutes of Health (NIH) and the NSF.  (One nice place to check on this is the AmericanScience team blog.)

But…


Altmetrics for the Nature correspondence on negative metrics of impact

Fascinating.

Article details.

We need negative metrics, too / Nature

Keith Brown, Kelli Barr, and I have a short piece published in the new issue of Nature.

The correspondence also contains a link to a slightly revised version of our original submission. Since Nature keeps everything behind a paywall, here is that link.

Very interested in hearing everyone’s thoughts on the idea that seemingly negative events could be turned into indicators of positive impact.

Impact from beyond the grave: how to ensure impact grows greater with the demise of the author | Impact of Social Sciences

We all know — don’t we? — that our h-index can only grow with the passage of time. But Geoffrey Alderman has a plan, an impact plan, to ensure that our impact keeps growing in other ways, as well.
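For anyone who wants the claim spelled out: the h-index is the largest h such that h of one’s papers have at least h citations each. Because citation counts only accumulate, the index can never go down. A minimal sketch in Python, with invented citation counts:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have h or more citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Citations only accumulate, so the h-index is monotonic over time:
print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([12, 9, 6, 5, 5]))  # 5 (the same five papers, a year later)
```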

This is funny, and I’m sure Professor Alderman is poking fun at the very idea of impact. Nevertheless, there’s a serious angle to this. Many of us, whether we want to admit it or not, are involved in academia in an effort to change the world. And many of us are well aware that we may have to wait to be born posthumously, as Nietzsche said.

In any case, while we play the long game, it’s nice to have diversions such as this, occasionally:

Impact from beyond the grave: how to ensure impact grows greater with the demise of the author | Impact of Social Sciences.


NSF Says No to Congressman’s Request for Reviewer Comments – ScienceInsider

The latest in the showdown between Rep. Lamar Smith and NSF.

NSF Says No to Congressman’s Request for Reviewer Comments – ScienceInsider.

Interesting to think about the limits of confidentiality here.

What does it take to be ‘liked’ by scientists?

Scientists don’t like me. Or, at least, they don’t show any evidence of liking what I have to say about NSF’s Broader Impacts Merit Review Criterion. Last week, I blogged this ScienceInsider interview (here and on the CSID blog) with an unnamed congressional aide connected with Rep. Lamar Smith and his efforts to add “an extra layer of accountability” to NSF’s Merit Review Process.

I also left a couple of comments in the comments section under the article itself. Readers of ScienceInsider can press buttons to indicate their agreement — or not — with comments. The site tracks the number of likes or dislikes (registered by clicking up or down carets), displays the tallies with each comment, and moves the comments with the most likes to the top.
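The ranking mechanics are easy to sketch. What follows is a hypothetical reconstruction of the behavior just described, not ScienceInsider’s actual code; the field names and vote tallies are invented.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    likes: int = 0
    dislikes: int = 0

def rank(comments: list[Comment]) -> list[Comment]:
    """Most-liked comments float to the top of the thread."""
    return sorted(comments, key=lambda c: c.likes, reverse=True)

thread = [
    Comment("yours truly", likes=0),       # invented tally
    Comment("lollardy", likes=12),
    Comment("Kenneth DeBacker", likes=12),
]
print([c.author for c in rank(thread)])    # the zero-like comment sorts last
```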

Guess whose comments are dead last in line?

Here are the two most-liked comments:

lollardy, 3 days ago

Studying dairy production in China is a very poor choice for an example of what constitutes a bad grant. It has direct relevance to something most people in America consume every day. It could reduce cost for millions, increase food safety, improve the quality or nutrient density of a commonly consumed item, etc. Every time I hear a story on Fox about a “wasteful” study, I can usually think of ten ways it could benefit people and industry here. Somehow I think the time would be better spent putting in an “additional layer” to cover pentagon spending.

Kenneth DeBacker, 4 days ago

A lot of smoke is being blown by Rep. Lamar Smith’s aide. The aide’s answers are slick and cover the real intent of the bill: to politicize the sciences through selective funding or defunding of areas of study Republicans do not like. The most egregious example would be the ban on studying gun violence in America.

Each of them has received twelve likes.

I suppose if I were simply to say that Congress is out to politicize science or that Smith is out of his depth or that scientists should be left alone to pursue research however they wish, scientists might like that. But I’m willing to give Smith the benefit of the doubt, at this point. My contention is that he (or his aide) doesn’t yet understand the revisions to NSF’s Merit Review Process. If he did, then I think he’d see that accountability is already built into the process. I think Smith should not introduce the High Quality Research Act, but instead should seek to monitor how scientists respond to the new Broader Impacts Criterion.

But there’s a real problem with what I’m suggesting. And it’s not that Smith is a Republican out to get science. The problem is that scientists themselves don’t understand the Broader Impacts Criterion. They don’t understand that this is their last, best hope to preserve their academic autonomy while meeting accountability demands. And they don’t want to hear it, either.

To see my comments on the ScienceInsider interview, simply follow this link and scroll to the bottom of the page.