Altmetrics — Meretricious or Meritorious?

I think Jeffrey Beall has got this wrong. He claims that altmetrics are an “Ill-conceived and Meretricious Idea.”

On the other hand, I think Euan Adie has got this right. Here is his measured response (sorry, couldn’t resist) to Beall.

So, I come down on the meritorious side. Of course, none of this is to say that altmetrics are without flaws. But one thing they are decidedly good for is connecting academic researchers to those who read their research. That’s what scholarly communication is all about, in my book (sorry again).

Should we develop an alt-H-index? | Postmodern Research Evaluation | 4 of ?

In the last post in this series, I promised to present an alternative to Snowball Metrics — something I playfully referred to as ‘Snowflake Indicators’ in an effort to distinguish what I am proposing from the grand narrative presented by Snowball Metrics. But two recent developments have sparked a related thought that I want to pursue here first.

This morning, a post on the BMJ blog asks the question: Who will be the Google of altmetrics? The suggestion that we should have such an entity comes from Jason Priem, of course. He’s part of the altmetrics avant garde, and I always find what he has to say on the topic provocative. The BMJ blog post is also worth reading to get the lay of the land regarding the leaders of the altmetrics push.

Last Friday, the editors of the LSE Impact of Social Sciences blog contacted me and asked whether they might replace our messy ‘56 indicators of impact’ with a cleaned-up and clarified version. I asked them to add their clarified list alongside our messy version rather than simply replacing it, and they agreed. You can see the updated post here. I’ll come back to this later in more detail. For now, I want to ask a different, though related, question.

COULD WE DEVELOP AN ALT-H-INDEX?

The H-index is meant to be a measure of the productivity and impact of an individual scholar’s research on other researchers, though recently I’ve seen it applied to journals. The original idea is to find the largest number h such that h of a researcher’s publications have each been cited at least h times. Of course, one’s actual H-index will vary based on the citation database one is using. According to Scopus, for instance, my H-index is 4. A quick look at my Researcher ID shows that my H-index there would be 1. And according to Google Scholar, my H-index is 6. Differences such as these — and the related question of the value of such metrics as the H-index — are the subject of research being performed now by Kelli Barr (one of our excellent UNT/CSID graduate students).

Now, if it’s clear enough how the H-index is generated … well, let’s move on for the moment.

How would an alt-H-index be generated?

There are several alternatives here. But let’s pursue the one that’s most parallel to the way the H-index is generated. So, let’s substitute products for articles and mentions for citations. One’s alt-H-index would then be the largest number P such that P of one’s products have each received at least P mentions on the platforms tracked by altmetricians.
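
To make that concrete, here is a minimal sketch of the calculation in Python. This is purely my own illustration: the function name is made up, and nothing here comes from Impact Story, Altmetric, or any other service. The same routine yields a traditional H-index if you pass it citation counts per publication instead of mention counts per product.

```python
def h_style_index(counts):
    """Largest h such that at least h of the given counts are >= h.

    Pass citation counts per publication for an H-index,
    or mention counts per product for the alt-H-index sketched above.
    """
    h = 0
    for i, c in enumerate(sorted(counts, reverse=True), start=1):
        if c >= i:
            h = i  # the i-th largest count still clears the bar
        else:
            break
    return h
```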

I don’t have time at the moment to calculate my full alt-H-index. But let’s go with some things I have been tracking: my recent correspondence piece in Nature, the most recent LSE Impact of Social Sciences blog post (linked above), and my recently published article in Synthese on “What Is Interdisciplinary Communication?” [Of course, limiting myself to 3 products would mean that my alt-H-index couldn’t go above 3 for the purposes of this illustration.]

According to Impact Story, the correspondence piece in Nature has received 41 mentions (26 tweets, 6 Mendeley readers, and 9 CiteULike bookmarks). The LSE blog post has received 114 mentions (113 tweets and 1 bookmark). And the Synthese paper has received 5 mentions (5 tweets). So, my alt-H-index would be 3, according to Impact Story.

According to Altmetric, the Nature correspondence has received 125 mentions (96 tweets, 9 Facebook posts/shares, 3 Google+ shares, blogged by 11, and 6 CiteULike bookmarks), the LSE Blog post cannot be measured, and the Synthese article has 11 mentions (3 tweets, 3 blogs, 1 Google+, 2 Mendeley, and 2 CiteULike). So, my alt-H-index would be 2, according to Altmetric data.
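
For what it’s worth, plugging the mention counts reported above into the sketch from earlier reproduces both figures. Again, this is purely illustrative; the helper is my own, and the LSE post is left out of the Altmetric list because it cannot be tracked there.

```python
def h_style_index(counts):
    # Largest h such that at least h of the counts are >= h (same helper as above).
    ranked = sorted(counts, reverse=True)
    return max((i for i, c in enumerate(ranked, start=1) if c >= i), default=0)

impact_story_mentions = [41, 114, 5]  # Nature correspondence, LSE blog post, Synthese article
altmetric_mentions = [125, 11]        # Nature correspondence, Synthese article (no DOI for the LSE post)

print(h_style_index(impact_story_mentions))  # 3
print(h_style_index(altmetric_mentions))     # 2
```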

Comparing H-index and alt-H-index

So, as I note above, I’ve limited the calculation of my alt-H-index to three products. I have little doubt that my alt-H-index is considerably higher than my H-index — and would be so for most researchers who are active on social media and who publish in alt-academic venues, such as scholarly blogs (or, if you’re really cool like my colleague Adam Briggle, in Slate), or for fringe academics, such as my colleague Keith Brown, who publishes almost exclusively in non-scholarly venues.

This illustrates a key difference between altmetrics and traditional bibliometrics. Altmetrics are considerably faster than traditional bibliometrics. It takes a long time for one’s H-index to go up. ‘Older’ researchers typically have higher H-indices than ‘younger’ researchers. I suspect that ‘younger’ researchers may well have higher alt-H-indices, since ‘younger’ researchers tend to be more active on social media and more prone to publish in the sorts of alt-academic venues mentioned above.

But there are also some interesting similarities. First, it makes a difference where you get your data. My H-index is 4, 1, or 6, depending on whether we use data from Scopus, Web of Science, or Google Scholar. My incomplete alt-H-index is either 3 or 2, depending on whether we use data from Impact Story or Altmetric. An interesting side note, which ties in with the question of the Google of altmetrics: my alt-H-index differs between the two services because Altmetric requires a DOI, whereas Impact Story can import URLs, which makes it considerably more flexible for certain products. In that respect, at least, Impact Story is more like Google Scholar — it covers more — whereas Altmetric is more like Scopus. That’s a sweeping generalization, but I think it’s basically right, in this one respect.

But these differences raise the more fundamental question, and one that serves as the beginning of a response to the update of my LSE Impact of Social Sciences blog piece:

SHOULD WE DEVELOP AN ALT-H-INDEX?

It’s easy enough to do it. But should we? Asking this question means exploring some of the larger ramifications of metrics in general — the point of my LSE Impact post. If we return to that post now, I think it becomes obvious why I wanted to keep our messy list of indicators alongside the ‘clarified’ list. The LSE-modified list divides our 56 indicators into two lists: one of ‘50 indicators of positive impact’ and another of ‘6 more ambiguous indicators of impact’. Note that the H-index is included on the ‘indicators of positive impact’ list. The assumption that there is a clear boundary between ‘indicators of positive impact’ and ‘more ambiguous indicators of impact’ — or ‘negative metrics’, as the Nature editors suggested — is precisely the sort of thinking our messy list of 56 indicators is meant to undermine.

The H-index is ambiguous. It embodies all sorts of value judgments. It’s not a simple matter of working out the formula: the numbers that go into the formula will differ depending on the data source used (Scopus, Web of Science, or Google Scholar), and those data themselves depend on value judgments. Metrics tend to be interpreted as objective, but we really need to reexamine what we mean by that. Altmetrics are the same as traditional bibliometrics in this sense — all metrics rest on prior value judgments.

As we note at the beginning of our Nature piece, articles may be cited for ‘positive’ or ‘negative’ reasons. More citations do not always mean a more ‘positive’ reception for one’s research. Similarly, a higher H-index does not always mean that one’s research has been more ‘positively’ received by peers. The simplest thing it means is that one has been at it longer. But even that is not necessarily the case. Similarly, a higher alt-H-index probably means that one has more social media influence — which, we must realize, is ambiguous. It’s not difficult to imagine that quite a few ‘more established’ or more traditional researchers could interpret a higher alt-H-index as indicating a lack of serious scholarly impact.

Here, then, is the bottom line: there are no unambiguously positive indicators of impact!

I will, I promise, propose my Snowflake Indicators framework as soon as possible.

Developing indicators of the impact of scholarly communication is a massive technical challenge – but it’s also much simpler than that | Impact of Social Sciences

Developing indicators of the impact of scholarly communication is a massive technical challenge – but it’s also much simpler than that | Impact of Social Sciences.

In which I expand on ideas presented here and here.

Calling All Thinkers — a plea for fostering diversity in thought

I am concerned that our educational system is blocking photorealistic visual thinkers like me from careers in science. Instead, we should encourage diversity in modes of thinking so that we aren’t losing the special talents of people who might contribute greatly to research and development by offering unique perspectives.

Calling All Thinkers | The Scientist Magazine®.

This is a good read. I do wish it included an image, though, beyond a photo of the author’s book cover. I often use images to try to make a similar point — that the question of impact is really a question of looking at different aspects of research, for instance. Here’s my go-to image for that claim:

Jastrow's Duck-Rabbit

It’s a simple point, but figuring out how to make it is difficult. I’m actually a big fan of the idea of involving the body somehow. I think this sort of perceptual shift is connected with our kinesthetic sense — it’s something we have to experience or feel.

If you care to get a sense of how I think, it’s shown by the ‘fact’ that I think these observations, above, are connected both to the Humboldtian idea of linking research and teaching and to my push to extend our thinking about altmetrics well beyond article-level metrics, or even metrics of many different types of scholarly communication.

Open Access and Its Enemies

I was thrilled to be invited to participate as a speaker in the University of North Texas Open Access Symposium 2013. It’s ongoing, and it’s being recorded; video of the presentations will be available soon. In the meantime, I’ve posted slides from my presentation on figshare.

I thought I’d add some thoughts here expounding on my presentation a bit and relating it to the presentations given by my fellow panelists. I’m a proponent of open access, for several reasons. I think closed access, that is, encountering a paywall when one goes to download a piece of research one is interested in reading, is unjust as well as inconvenient. The case for this claim can best be made with reference to two main points revolving around the question of intellectual property rights. Generally, in the case of closed access publications, authors are asked to sign away many, if not all, of their copyrights. Now, authors are free to negotiate terms with publishers, and we are free not to sign away our copyrights — but often the only choice many publishers leave us is simply to take our work and publish it somewhere else.

Many otherwise ‘closed’ publishers will allow authors to retain all their copyrights for a fee (which varies from publisher to publisher) — this is known as the ‘author pays’ model of Gold OA (the latter term refers to OA publication in journals, as opposed to publication made OA via some sort of repository, which is known as Green OA). There is probably no better source for learning the terminology surrounding OA than Peter Suber’s website.

There is also the argument that when publicly funded research is published, the public should at least have free (gratis) access to the publication. Some publishers have argued against this on the grounds that they add value to the research by running the peer review process and formatting and archiving the article. They do perform these services, which do cost money (though peer review itself is done for free by academics). So, they argue that simply giving away their labor is unjust. If it is unjust to have the public pay again and unjust to ask publishers to give away the results of their labors, then, many argue, the ‘author pays’ model of OA makes the most sense. This, of course, ignores the fact of the free labor of academics in conducting peer review. (The labor of actually writing articles is arguably covered as part of an author’s base salary.) But even if authors are already paid to write the articles, it doesn’t follow that it’s just to ask them to pay again to have the articles made freely available once they are published.

Publishers, including Sage, are experimenting with different versions of the ‘author pays’ model of Gold OA. Jim Gilden was another member of my panel. He discussed Sage’s foray into OA, some of their innovations (including the interesting idea of having article-level editors who run the peer review process for individual articles, rather than for the journal as a whole), and some of the difficulties they have encountered. Among those difficulties is some sort of prejudice among potential authors — and members of promotion and tenure committees — against OA journals. This surprised me a little, but perhaps it should not have. One of the themes of my own talk is that ‘we’ academics are included among the enemies of open access. Our prejudice against OA publications is one indicator of this fact.

The other member of our panel was Jeffrey Beall, best known for Beall’s List of Predatory Open Access Publishers. Jeffrey talked about his list, including how and why it got started. That story is pretty simple: he started getting spam emails from publishers that didn’t quite feel right; as a cataloger, he did what came naturally and started keeping track; thus, Beall’s List. Things got more complicated after that. Many publishers appearing on Beall’s list are none too happy about it. Some have even threatened to sue Jeffrey — one for the sum of $1 billion! There are other, less publicized, sources of friction Jeffrey has encountered. He’s not too popular with his own university’s external/community relations folks. And he’s subject to a negative portrayal by many advocates of open access, who don’t appreciate the negative attention Beall’s list draws to the open access movement.

Criticism of Beall from publishers on his list is to be expected. In fact, it was serendipitous that I wrapped up the panel and ended my presentation with the slide of CSID’s list of ‘56 indicators of impact’ — a list that includes negative indicators, such as provoking lawsuits. Jeffrey serves as a very good example of the sort of thing we are getting at with our list. The most important fact is that he has a narrative to account for why getting sued for $1 billion actually indicates that he’s having an impact. Unless a publisher were worried that Beall’s list would hurt their business, why would they threaten to sue?

Jeffrey and Jim were both excellent panel-mates for another reason. None of the three of us is exactly a full-fledged member of the open access enthusiasts club. Beall can’t be included, since his list can be interpreted as portraying not only specific publishers, but also the whole OA movement, in a negative light. Gilden can’t be included, since, well, he works for a for-profit publisher. Those folks tend to be seen as more or less evil by many members of the OA crowd. (It was interesting to me to see the folks at Mendeley trying to — and having to — defend themselves on Twitter after Mendeley was bought by Elsevier, the evilest of all evil publishers.) And I? Well, as I said at the beginning of this post, I am an advocate of open access. But I am not an uncritical advocate, and I argue that a greater critical spirit needs to be embraced by many OA enthusiasts.

This was, in essence, the point of my talk. The text parts are pretty clear, I think. So let me focus here mostly on the images, and especially on the ‘Images of Impact’ slides. First, I explained how I derived my title from Popper’s The Open Society and Its Enemies. This seemed fitting not only because of the play on words, but also because I have come to see much of the struggle surrounding open access in terms of different conceptions of liberty or freedom. Popper’s emphasis on individual liberty was something I wanted to expand on, and I also linked it with Isaiah Berlin’s account of positive and negative conceptions of liberty. I also think Popper has an ambiguous relation to Neoliberalism. Popper was an original member of the Mont Pèlerin thought collective that many credit with the development and dissemination of Neoliberalism.

That Popper’s relation to Neoliberalism is unclear is an important point — and it’s another reason I chose him to introduce my talk. Part of what I wanted to suggest was that much of the open access movement is susceptible to being subsumed under a neoliberal agenda. After all, both use similar vocabularies — references to openness, to crowds, and to efficiency abound in both movements.

I didn’t really dwell on this point for long, though, in deference to the Symposium’s keynote speaker’s views on ‘neoliberalism’. At the same time, I did want at least to reference Neoliberalism as one thing members of the open access movement need to be more aware of. I’m worried there’s something like a dogmatic enthusiasm that’s creeping into the OA crowd. Many of the reactions from within the OA enthusiast club against Jeffrey Beall (or against Mendeley) seem to me to betray an uncritical (and I mean un-self-critical) attitude. Similarly, I think it would be better for OA enthusiasts to examine carefully and to think critically about OA mandates and policies being considered now. Most, I fear, only think in terms like ‘any movement in the direction of more open access is good’. I just don’t believe that. In fact, I think it’s dangerous to think that way.

Sorry — on to my images of impact. I love altmetrics. I think that’s where you find many of the brightest advocates of open access. I also think the development of altmetrics is one of the areas most fraught with peril. After all, given the penchant of neoliberals for measurement-for-management-for-efficiency that goes by the name of ‘accountability’, it’s not difficult to see how numbers in general, and altmetrics in particular, might be co-opted by someone who wanted to do away with peer review and the protection that provides to the scholarly community. Talk of open, transparent, accountable government sounds great. But come on, folks, let’s please think about what that means. That drones are part of that plan ought to give us all pause. Altmetrics are the drones of the OA movement.

This is by no means to say that altmetrics are bad. I love altmetrics. I have said publicly that I think every journal should employ some form of article level metrics. They’re amazing. But they are also ripe for abuse — by publishers, by governments, and by academic administrators, among others. I just want altmetrics developers to recognize that possibility and to give it careful thought.

The development of altmetrics is not simply a technical issue. Nor are technologies morally or politically neutral. I suggested that we consider altmetrics (and perhaps OA in general) as a sociotechnical imaginary. I think the concept fits well here, especially linked to the idea of OA as a movement that entails an idea of positive freedom. There is a vision of the good associated with OA. Technology is supposed to help us along the road to achieving that good. Government policies are being enacted that may help. But we need to think critically about all of this rather than rushing forward in a burst of enthusiasm.

The great danger of positive freedom is that it can lead to coercion and even totalitarianism. The question is whether we can place a governor on our enthusiasm and limit our pursuit of positive freedom in a way that still allows for autonomy. I refer to Philip Pettit’s notion of non-domination as potentially useful in this context. I also suggest that narrative can play a governing role. I do think we need some sort of localized (not totalizing) metanarrative about the relationship between the university and society (this is what I referred to in my talk in terms of a ‘republic of knowledge’). But narrative must also serve a de-totalizing role in another sense: narratives should be tied to articles and accompany article-level metrics. We need to put the ‘account’ back into accountability, rather than simply focusing on the idea of counting.

So, to sum it all up: OA is good, but not an unqualified good; altmetrics are great, but they need to be accompanied by narratives. The end.

Quick thoughts on Challenges of Measuring Social Impact Using Altmetrics

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

This and other claims worth discussing appear in this hot-off-the-presses (do we need another metaphor now?) article from Mike Taylor (@herrison):

The Challenges of Measuring Social Impact Using Altmetrics – Research Trends.

In response to the quote above, my own proposal would be to incorporate altmetrics into an overall narrative of impact. In other words, rather than have something like a ‘separate’ altmetric report, I’d rather have a way of appealing to altmetrics as one form of empirical evidence to back up claims of impact.

Although it is tempting to equate social reach (i.e., getting research into the hands of the public), it is not the same as measuring social impact. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics, at the moment the discipline lacks an approach towards understanding the wider context necessary to understand both the social reach and impact of scholarly work.

Good point about the difference between ‘social reach’ and ‘social impact’. My suggestion for developing an approach to understanding the link between social reach and social impact would be something like this: social reach provides evidence of a sort of interaction. What’s needed to demonstrate social impact, however, is evidence of behavior change. Even if one cannot establish a direct causal relation between sharing and behavior change, demonstrating that one’s research ‘reached’ someone who then changed her behavior in ways consistent with what one’s paper says would generate a plausible narrative of impact.

Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope this would provide insights into understanding social impact – there are a number of essential steps that are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

My response to this may be predictable, but here goes anyway. I am all for improving the technology. Using Natural Language Processing, as Taylor suggests a bit later, sounds promising. But I think there’s a fundamental problem with comparing altmetrics to bibliometrics and trying to bring the former up to the standards of rigor of the latter. The problem is that this view privileges technology and technical rigor over judgment. Look, let’s make altmetrics as rigorous as we can. But please, let’s not make the mistake of thinking we’ve got the question of impact resolved once altmetrics have achieved the same sort of methodological rigor as bibliometrics! The question of impact can be answered better with help from technology. But to assume that technology can answer the question on its own (as if it existed independently of human beings, or we from it) is to fall into the trap of the technological fix.

San Francisco Declaration on Research Assessment — Well done, DORA

Anyone interested in research assessment should read this with care.

DORA.

It’s been presented in the media as an insurrection against the use of the Journal Impact Factor — and the Declaration certainly does … ehr … declare that the JIF shouldn’t be used to assess individual researchers or individual research articles. But this soundbite shouldn’t be used to characterize the totality of DORA, which is much broader than that.

Honestly, it took me a few days to go read it. After all, it’s uncontroversial in my mind that the JIF shouldn’t be used in this way. So, an insurrection against it didn’t strike me as all that interesting. I’m all for the use of altmetrics and — obviously, given our recent Nature correspondence (free to read here) — other inventive ways to tell the story of our impact.

But, and I cannot stress this enough, everyone should give DORA a careful read. I’m against jumping uncritically on the bandwagon in favor of Openness in all its forms. But I could find little reason not to sign, and myriad reasons to do so.

Well done, DORA.