Should we develop an alt-H-index? | Postmodern Research Evaluation | 4 of ?

In the last post in this series, I promised to present an alternative to Snowball Metrics — something I playfully referred to as ‘Snowflake Indicators’ in an effort to distinguish what I am proposing from the grand narrative presented by Snowball Metrics. But two recent developments have sparked a related thought that I want to pursue here first.

This morning, a post on the BMJ blog asks: who will be the Google of altmetrics? The suggestion that we should have such an entity comes from Jason Priem, of course. He’s part of the altmetrics avant-garde, and I always find what he has to say on the topic provocative. The BMJ post is also worth reading to get the lay of the land regarding the leaders of the altmetrics push.

Last Friday, the editors of the LSE Impact of Social Sciences blog contacted me to ask whether they might replace our messy ’56 indicators of impact’ with a cleaned-up and clarified version. I asked them to add their clean version alongside our messy one rather than replacing it, and they agreed. You can see the updated post here. I’ll come back to this in more detail later. For now, I want to ask a different, though related, question.

COULD WE DEVELOP AN ALT-H-INDEX?

The H-index is meant to measure the productivity and impact of an individual scholar’s research on other researchers, though recently I’ve seen it applied to journals as well. The original idea is to find the largest number h such that h of a researcher’s publications have each been cited at least h times. Of course, one’s actual H-index will vary with the citation database being used. According to Scopus, for instance, my H-index is 4. A quick look at my ResearcherID profile (which draws on Web of Science data) shows that my H-index there would be 1. And if we look at Google Scholar, my H-index is 6. Differences such as these — and the related question of the value of metrics such as the H-index — are the subject of research currently being performed by Kelli Barr (one of our excellent UNT/CSID graduate students).
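To make the calculation concrete, here is a minimal sketch in Python. The function is a straightforward reading of the definition above; the citation counts in the example are invented purely for illustration.

```python
def h_index(citation_counts):
    """Largest h such that h items each have at least h citations/mentions."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the top `rank` items all have at least `rank` citations
        else:
            break
    return h

# Five hypothetical publications and their citation counts:
print(h_index([22, 10, 4, 2, 1]))  # -> 3: three papers cited at least 3 times each
```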

Now, if it’s clear enough how the H-index is generated … well, let’s move on for the moment.

How would an alt-H-index be generated?

There are several alternatives here. But let’s pursue the one most parallel to the way the H-index is generated: substitute products for articles and mentions for citations. One’s alt-H-index would then be the largest number P such that P of one’s products have each received at least P mentions across the sources altmetricians track (the same calculation sketched above, just with different inputs).

I don’t have time at the moment to calculate my full alt-H-index. But let’s go with some things I have been tracking: my recent correspondence piece in Nature, the most recent LSE Impact of Social Sciences blog post (linked above), and my recently published article in Synthese on “What Is Interdisciplinary Communication?” [Of course, limiting myself to 3 products would mean that my alt-H-index couldn’t go above 3 for the purposes of this illustration.]

According to Impact Story, the correspondence piece in Nature has received 41 mentions (26 tweets, 6 Mendeley readers, and 9 CiteULike bookmarks). The LSE blog post has received 114 mentions (113 tweets and 1 bookmark). And the Synthese paper has received 5 (5 tweets). So, my alt-H-index would be 3, according to Impact Story.

According to Altmetric, the Nature correspondence has received 125 mentions (96 tweets, 9 Facebook posts/shares, 3 Google+ shares, 11 blog posts, and 6 CiteULike bookmarks), the LSE blog post cannot be measured at all (more on why below), and the Synthese article has 11 mentions (3 tweets, 3 blogs, 1 Google+, 2 Mendeley, and 2 CiteULike). So, my alt-H-index would be 2, according to Altmetric data.
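Plugging those mention counts into the h_index function sketched above reproduces both numbers. Treating the unmeasurable LSE post as 0 mentions on the Altmetric side is my own simplifying assumption:

```python
# Mention counts for the three tracked products:
impact_story = [41, 114, 5]   # Nature piece, LSE blog post, Synthese article
altmetric    = [125, 0, 11]   # LSE post set to 0: Altmetric cannot measure it

print(h_index(impact_story))  # -> 3
print(h_index(altmetric))     # -> 2
```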

Comparing H-index and alt-H-index

So, as I note above, I’ve limited the calculations of my alt-H-index to three products. I have little doubt that my alt-H-index is considerably higher than my H-index — and would be so for most researchers who are active on social media and who publish in alt-academic venues, such as scholarly blogs (or, if you’re really cool like my colleague Adam Briggle, in Slate), or for fringe academics, such as my colleague Keith Brown, who publishes almost exclusively in non-scholarly venues.

This illustrates a key difference between altmetrics and traditional bibliometrics. Altmetrics are considerably faster than traditional bibliometrics. It takes a long time for one’s H-index to go up. ‘Older’ researchers typically have higher H-indices than ‘younger’ researchers. I suspect that ‘younger’ researchers may well have higher alt-H-indices, since ‘younger’ researchers tend to be more active on social media and more prone to publish in the sorts of alt-academic venues mentioned above.

But there are also some interesting similarities. First, it makes a difference where you get your data. My H-index is 4, 1, or 6, depending on whether we use data from Scopus, Web of Science, or Google Scholar. My incomplete alt-H-index is either 3 or 2, depending on whether we use data from Impact Story or Altmetric. An interesting side note, which ties in with the question of the Google of altmetrics: my alt-H-index differs between Impact Story and Altmetric because Altmetric requires a DOI, whereas Impact Story can also import URLs, which makes it considerably more flexible for certain products. In that respect, at least, Impact Story is more like Google Scholar — it covers more — whereas Altmetric is more like Scopus. That’s a sweeping generalization, but I think it’s basically right, in this one respect.

But these differences raise the more fundamental question, and one that serves as the beginning of a response to the update of my LSE Impact of Social Sciences blog piece:

SHOULD WE DEVELOP AN ALT-H-INDEX?

It’s easy enough to do it. But should we? Asking this question means exploring some of the larger ramifications of metrics in general — the point of my LSE Impact post. If we return to that post now, I think it becomes obvious why I wanted to keep our messy list of indicators alongside the ‘clarified’ list. The LSE-modified list divides our 56 indicators into two lists: one of ’50 indicators of positive impact’ and another of ‘6 more ambiguous indicators of impact’. Note that H-index is included on the ‘indicators of positive impact’ list. That there is a clear boundary between ‘indicators of positive impact’ and ‘more ambiguous indicators of impact’ — or ‘negative metrics’ as the Nature editors suggested — is precisely the sort of thinking our messy list of 56 indicators is meant to undermine.

H-index is ambiguous. It embodies all sorts of value judgments. It’s not a simple matter of working out the formula. The numbers that go into the formula will differ, depending on the data source used (Scopus, Web of Science, or Google Scholar), and these data also depend on value judgments. Metrics tend to be interpreted as objective. But we really need to reexamine what we mean by this. Altmetrics are the same as traditional bibliometrics in this sense — all metrics rest on prior value judgments.

As we note at the beginning of our Nature piece, articles may be cited for ‘positive’ or ‘negative’ reasons. More citations do not always mean a more ‘positive’ reception for one’s research. Similarly, a higher H-index does not always mean that one’s research has been more ‘positively’ received by peers. The simplest thing it means is that one has been at it longer. But even that is not necessarily the case. Similarly, a higher alt-H-index probably means that one has more social media influence — which, we must realize, is ambiguous. It’s not difficult to imagine that quite a few ‘more established’ or more traditional researchers could interpret a higher alt-H-index as indicating a lack of serious scholarly impact.

Here, then, is the bottom line: there are no unambiguously positive indicators of impact!

I will, I promise, propose my Snowflake Indicators framework as soon as possible.
