Open Access and Its Enemies, Redux

I don’t have time to be doing this, but it’s important. Making time is a state of mind — as, Cameron Neylon claims, is ‘Open’:

Being open as opposed to making open resources (or making resources open) is about embracing a particular form of humility. For the creator it is about embracing the idea that – despite knowing more about what you have done than any other person –  the use and application of your work is something that you cannot predict.

There’s a lot to unpack, even in this short excerpt from Neylon’s post. Whether, for instance, the idea of ‘humility’ is captured by being open to unintended applications of one’s work — surely that’s part, but only part, of being open — deserves further thought. But I really do think Cameron is on to something with the idea that being open entails a sort of humility.

To see how, it’s instructive to read through Robin Osborne’s post on ‘Why open access makes no sense’:

For those who wish to have access, there is an admission cost: they must invest in the education prerequisite to enable them to understand the language used. Current publication practices work to ensure that the entry threshold for understanding my language is as low as possible. Open access will raise that entry threshold. Much more will be downloaded; much less will be understood.

There’s a lot to unpack here, as well. There’s a sort of jiujitsu going on in this excerpt that requires that one be at least familiar with — if it is not one’s characteristic feeling — the feeling that no one will ever understand. What is obvious, however, is Osborne’s arrogance: there is a price to be paid to understand me, and open access will actually raise that price.

In my original talk on “Open Access and Its Enemies” I traced one source of disagreement about open access to different conceptions of freedom. Those with a negative concept of freedom are opposed to any sort of open access mandates, for instance, while those appealing to a positive concept of freedom might accept certain mandates as not necessarily opposed to their freedom. There may be exceptions, of course, but those with a positive concept of freedom tend to accept open access, while those with a negative view of freedom tend to oppose it. The two posts from Neylon and Osborne reveal another aspect of what divides academics on the question of open access — a different sense of self.

For advocates of humility, seeing our selves as individuals interferes with openness. In fact, it is only in contrast to those who view the self as an individual that the appeal to humility makes sense. The plea is that they temper their individualistic tendencies, that they humble their individual selves in the service of our corporate self. For advocates of openness, the self is something that really comes about only through interaction with others.

Advocates of elitism acknowledge that the social bond is important. But it is not, in itself, constitutive of the self. On the contrary, the self is what persists independently of others, whether anyone else understands us or not. Moreover, understanding me — qua individual — requires that you — qua individual — discipline yourself, learn something, be educated. Indeed, to become a self in good standing with the elite requires a certain self-abnegation — but only for a time, and only until one can re-assert oneself as an elite individual. Importantly, self-abnegation is a temporary stop on the way to full self-realization.

Self-sacrifice is foreign both to the advocate of humility and to the advocate of elitism, I fear. Yet it is only through self-sacrifice that communication is possible. Self-sacrifice doesn’t dissolve the individual self completely into the corporate self. Nor does self-sacrifice recognize temporary self-abnegation on the road to self-assertion as the path to communication. Self-sacrifice takes us beyond both, in that it requires that we admit that content is never what’s communicated. A self with a truly open mindset would have to be able to experience this. Alas, no one will ever understand me!


The digital scholar

I couldn’t sleep last night, so I opened up my computer. That’s guaranteed not to help me sleep, of course. But the work of a digital scholar is never done.

I checked Twitter while my Outlook inbox was updating. It’s interesting how these two digital tools work in concert. Twitter is ephemeral and invites quick scans. Email, it turns out, slows me down. And, since email also takes longer to load, I usually start my day checking Twitter first.

Today, I hit on a tweet by Mark Carrigan (@mark_carrigan) to this post on the Sociological Imagination blog. It’s worth reading in its own right, but it also led me to look up Martin Weller’s (@mweller) book The Digital Scholar, which is available to read free here, and which is related to this blog. I’ve just started to read it, but there’s something approaching a phenomenology of digital scholarship going on there. I’ll be interested to compare it with Kathleen Fitzpatrick’s (@kfitz) Planned Obsolescence, which also has an associated blog. I wonder how much each of them is having thoughts similar to mine. It’s interesting to be able to discover community in the digital realm. I even doubt that the ease of communication these days (in the sense of Bataille’s ‘weak communication’) interferes with the sort of communication (in the sense of Bataille’s ‘strong communication’) that makes community possible.

As I was flitting back and forth between email, this post, and Twitter, Mark Carrigan tweeted something about the difference between blogs and physical notebooks. I think he’s right that there’s a difference. I also think one can still use blogs much as one used notebooks in the past. I’m doing so here. But I’ll also publish this post so others can add their thoughts to mine.

Should we develop an alt-H-index? | Postmodern Research Evaluation | 4 of ?

In the last post in this series, I promised to present an alternative to Snowball Metrics — something I playfully referred to as ‘Snowflake Indicators’ in an effort to distinguish what I am proposing from the grand narrative presented by Snowball Metrics. But two recent developments have sparked a related thought that I want to pursue here first.

This morning, a post on the BMJ blog asks the question: Who will be the Google of altmetrics? The suggestion that we should have such an entity comes from Jason Priem, of course. He’s part of the altmetrics avant garde, and I always find what he has to say on the topic provocative. The BMJ blog post is also worth reading to get the lay of the land regarding the leaders of the altmetrics push.

Last Friday, the editors of the LSE Impact of Social Sciences blog contacted me and asked whether they might replace our messy ’56 indicators of impact’ with a cleaned-up and clarified version. I asked them to add it in, without simply replacing our messy version with their clean version, and they agreed. You can see the updated post here. I’ll come back to this later in more detail. For now, I want to ask a different, though related, question.

COULD WE DEVELOP AN ALT-H-INDEX?

The H-index is meant to be a measure of the productivity and impact of an individual scholar’s research on other researchers, though recently I’ve seen it applied to journals. The original idea is to find the largest number H such that H of a researcher’s publications have each been cited at least H times. Of course, the actual value of one’s H-index will vary based on the citation database one is using. According to Scopus, for instance, my H-index is 4. A quick look at my ResearcherID makes it easy enough to see that my H-index there would be 1. Then, if we look at Google Scholar, we see that my H-index is 6. Differences such as these — and the related question of the value of such metrics as the H-index — are the subject of research being performed now by Kelli Barr (one of our excellent UNT/CSID graduate students).
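To make the mechanics concrete, here is a minimal sketch in Python of how an H-index can be computed from a list of per-publication citation counts. The citation counts are made up, and the function name is just for illustration:

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications:
print(h_index([22, 10, 6, 5, 3, 1, 0]))  # 4: four papers each have at least 4 citations
```

Running the same calculation over Scopus, Web of Science, or Google Scholar data yields different values simply because each database indexes a different set of citing documents.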

Now, if it’s clear enough how the H-index is generated … well, let’s move on for the moment.

How would an alt-H-index be generated?

There are several alternatives here. But let’s pursue the one that’s most parallel to the way the H-index is generated. So, let’s substitute products for articles and mentions for citations. One’s alt-H-index would then be the largest number P such that P of one’s products have each received at least P mentions on the things tracked by altmetricians.

I don’t have time at the moment to calculate my full alt-H-index. But let’s go with some things I have been tracking: my recent correspondence piece in Nature, the most recent LSE Impact of Social Sciences blog post (linked above), and my recently published article in Synthese on “What Is Interdisciplinary Communication?” [Of course, limiting myself to 3 products would mean that my alt-H-index couldn’t go above 3 for the purposes of this illustration.]

According to Impact Story, the correspondence piece in Nature has received 41 mentions (26 tweets, 6 Mendeley readers, and 9 CiteULike bookmarks). The LSE blog post has received 114 mentions (113 tweets and 1 bookmark). And the Synthese paper has received 5 (5 tweets). So, my alt-H-index would be 3, according to Impact Story.

According to Altmetric, the Nature correspondence has received 125 mentions (96 tweets, 9 Facebook posts/shares, 3 Google+ shares, blogged by 11, and 6 CiteULike bookmarks), the LSE Blog post cannot be measured, and the Synthese article has 11 mentions (3 tweets, 3 blogs, 1 Google+, 2 Mendeley, and 2 CiteULike). So, my alt-H-index would be 2, according to Altmetric data.
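As a quick sanity check on the figures above, here is the same rule applied to the per-product mention totals just quoted. This is only an illustrative sketch; the function name is mine, not Impact Story’s or Altmetric’s:

```python
def alt_h_index(mention_counts):
    """Largest P such that P products have at least P mentions each."""
    counts = sorted(mention_counts, reverse=True)
    return max((rank for rank, mentions in enumerate(counts, start=1) if mentions >= rank),
               default=0)

# Mention totals per product, as reported above:
impact_story = [41, 114, 5]  # Nature correspondence, LSE blog post, Synthese article
altmetric = [125, 11]        # Altmetric requires a DOI, so the LSE blog post drops out
print(alt_h_index(impact_story))  # 3
print(alt_h_index(altmetric))     # 2
```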

Comparing H-index and alt-H-index

So, as I note above, I’ve limited the calculations of my alt-H-index to three products. I have little doubt that my alt-H-index is considerably higher than my H-index — and would be so for most researchers who are active on social media and who publish in alt-academic venues, such as scholarly blogs (or, if you’re really cool like my colleague Adam Briggle, in Slate), or for fringe academics, such as my colleague Keith Brown, who typically publishes almost exclusively in non-scholarly venues.

This illustrates a key difference between altmetrics and traditional bibliometrics. Altmetrics are considerably faster than traditional bibliometrics. It takes a long time for one’s H-index to go up. ‘Older’ researchers typically have higher H-indices than ‘younger’ researchers. I suspect that ‘younger’ researchers may well have higher alt-H-indices, since ‘younger’ researchers tend to be more active on social media and more prone to publish in the sorts of alt-academic venues mentioned above.

But there are also some interesting similarities. First, it makes a difference where you get your data. My H-index is 4, 1, or 6, depending on whether we use data from Scopus, Web of Science, or Google Scholar. My incomplete alt-H-index is either 3 or 2, depending on whether we use data from Impact Story or Altmetric. An interesting side note that ties in with the question of the Google of altmetrics: my alt-H-index differs between Impact Story and Altmetric because Altmetric requires a DOI. With Impact Story, you can import URLs, which makes it considerably more flexible for certain products. In that respect, at least, Impact Story is more like Google Scholar — it covers more — whereas Altmetric is more like Scopus. That’s a sweeping generalization, but I think it’s basically right, in this one respect.

But these differences raise the more fundamental question, and one that serves as the beginning of a response to the update of my LSE Impact of Social Sciences blog piece:

SHOULD WE DEVELOP AN ALT-H-INDEX?

It’s easy enough to do it. But should we? Asking this question means exploring some of the larger ramifications of metrics in general — the point of my LSE Impact post. If we return to that post now, I think it becomes obvious why I wanted to keep our messy list of indicators alongside the ‘clarified’ list. The LSE-modified list divides our 56 indicators into two lists: one of ’50 indicators of positive impact’ and another of ‘6 more ambiguous indicators of impact’. Note that H-index is included on the ‘indicators of positive impact’ list. That there is a clear boundary between ‘indicators of positive impact’ and ‘more ambiguous indicators of impact’ — or ‘negative metrics’ as the Nature editors suggested — is precisely the sort of thinking our messy list of 56 indicators is meant to undermine.

H-index is ambiguous. It embodies all sorts of value judgments. It’s not a simple matter of working out the formula. The numbers that go into the formula will differ, depending on the data source used (Scopus, Web of Science, or Google Scholar), and these data also depend on value judgments. Metrics tend to be interpreted as objective. But we really need to reexamine what we mean by this. Altmetrics are the same as traditional bibliometrics in this sense — all metrics rest on prior value judgments.

As we note at the beginning of our Nature piece, articles may be cited for ‘positive’ or ‘negative’ reasons. More citations do not always mean a more ‘positive’ reception for one’s research. Similarly, a higher H-index does not always mean that one’s research has been more ‘positively’ received by peers. The simplest thing it means is that one has been at it longer. But even that is not necessarily the case. Similarly, a higher alt-H-index probably means that one has more social media influence — which, we must realize, is ambiguous. It’s not difficult to imagine that quite a few ‘more established’ or more traditional researchers could interpret a higher alt-H-index as indicating a lack of serious scholarly impact.

Here, then, is the bottom line: there are no unambiguously positive indicators of impact!

I will, I promise, propose my Snowflake Indicators framework as soon as possible.

Other infrequently asked questions about impact

Here are some other infrequently asked questions about impact that didn’t make it into the final cut of my piece at the LSE Impact of Social Sciences Blog.

Why conflate impact with benefit?

Put differently, why assume that all impacts are positive or benefits to society? Obviously, no one wants publicly supported research not to benefit the public. It’s even less palatable to consider that some publicly supported research may actually harm the public. But it’s wishful thinking to assume that all impacts are beneficial. Some impacts that initially appear beneficial may have negative consequences. And seemingly negative indicators might actually show that one is having an impact – even a positive one. I discuss this point with reference to Jeffrey Beall, recently threatened with a $1 billion lawsuit, here.

The question of impact is an opportunity to discuss such issues, rather than retreating into the shelter of imagined value-neutrality or objectivity. It was to spark this discussion that we generated a CSID-specific list – it is purposely idiosyncratic.

How can we maximize our impact?

I grant that ‘How can we maximize our impact?’ is a logistical question; but it incorporates a healthy dose of logos. Asking how to maximize our impacts should appeal to academics. We may be choosy about the sort of impact we desire and on whom; but no one wants to have minimal impact. We all desire to have as much impact as possible. Or, if we don’t, please get another job and let some of us who do want to make a difference have yours.

Wherefore impact?

For what reason are we concerned with the impact of scholarly communication? It’s the most fundamental question we should be asking and answering. We need to be mindful that whatever metrics we devise will have a steering effect on the course of scholarly communications. If we are going to steer scholarly communications, then we should discuss where we plan to go – and where others might steer us.

Broader Impacts and Intellectual Merit: Paradigm Shift? | NOT UNTIL YOU CITE US!

On the one hand, this post on the VCU website is very cool. It contains some interesting observations and what I think is some good advice for researchers submitting and reviewing NSF proposals.

Broader Impacts and Intellectual Merit: Paradigm Shift? | CHS Sponsored Programs.

On the other hand, this post also illustrates how researchers’ broader impacts go unnoticed.

One of my main areas of research is peer review at S&T funding agencies, such as NSF. I especially focus on the incorporation of societal impact criteria, such as NSF’s Broader Impacts Merit Review Criterion. In fact, I published the first scholarly article on broader impacts in 2005. My colleagues at CSID and I have published more than anyone else on this topic. Most of our research was sponsored by NSF.

I don’t just perform research on broader impacts, though. I take the idea that scholarly research should have some impact on the world seriously, and I try to put it into practice. One of the things I try to do is reach out to scientists, engineers, and research development professionals in an effort to help them improve the attention to broader impacts in the proposals they are working to submit to NSF. This past May, for instance, I traveled down to Austin to give a presentation at the National Organization for Research Development Professionals Conference (NORDP 2013). You can see a PDF version of my presentation at figshare.

If you look at the slides, you may recognize a point I made in a previous post today: that ‘intellectual merit’ and ‘broader impact’ are simply different perspectives on research. I made this point at NORDP 2013, as well, as you can see from my slides. Notice how they put the point on the VCU site:

Broader Impacts are just another aspect of their research that needs to be communicated (as opposed to an additional thing that must be “tacked on”).

I couldn’t have said it better myself. Or perhaps I could. Or perhaps I did. At NORDP 2013.

Again, VCU says:

Presenters at both conferences [they refer to something called NCURA, with that hyperlink, and to NORDP, with no hyperlink] have encouraged faculty to take the new and improved criteria seriously, citing that Broader Impacts are designed to answer accountability demands.  If Broader Impacts are not carefully communicated so that they are clear to all (even non-scientific types!), a door could be opened for more prescriptive national research priorities in the future—a move that would limit what types of projects can receive federal funding, and would ultimately inhibit basic research.

Unless someone else is starting to sound a lot like us, THIS IS OUR MESSAGE!

My point is not to claim ownership over these ideas. If I were worried about intellectual property, I could trademark a broader impacts catch phrase or something. My point is that if researchers don’t get any credit for the broader impacts of their research, they’ll be disinclined to engage in activities that might have broader impacts. I’m happy to share these ideas. How else could I expect to have a broader impact? I’ll continue to share them, even without attribution. That’s part of the code.

To clarify: I’m not mad. In fact, I’m happy to see these ideas on the VCU site (or elsewhere …). But would it kill them to add a hyperlink or two? Or a name? Or something? I’d be really impressed if they added a link to this post.

I’m also claiming this as evidence of the broader impacts of my research. I don’t have to contact any lawyers for that, do I?

UPDATE: Brigitte Pfister, author of the post to which I directed my diatribe, above, has responded here. I appreciate that a lot. I also left a comment apologizing for my tone in the above post. It’s awaiting moderation; but I hope it’s accepted as it’s meant — as an apology and as a sign of respect.

Calling All Thinkers — a plea for fostering diversity in thought

I am concerned that our educational system is blocking photorealistic visual thinkers like me from careers in science. Instead, we should encourage diversity in modes of thinking so that we aren’t losing the special talents of people who might contribute greatly to research and development by offering unique perspectives.

Calling All Thinkers | The Scientist Magazine®.

This is a good read. I do wish it included an image, though, beyond a photo of the author’s book cover. I often use images to try to make a similar point — that the question of impact is really a question of looking at different aspects of research, for instance. Here’s my go-to image for that claim:

Jastrow's Duck-Rabbit

It’s a simple point, but figuring out how to make it is difficult. I’m actually a big fan of the idea of involving the body somehow. I think this sort of perceptual shift is connected with our kinesthetic sense — it’s something we have to experience or feel.

If you care to get a sense of how I think, it’s shown by the ‘fact’ that I think these observations, above, are connected both to the Humboldtian idea of linking research and teaching and to my push to extend our thinking about altmetrics well beyond article-level metrics, or even metrics of many different types of scholarly communication.

Altmetrics for “What Is Interdisciplinary Communication?”

Here is a link to the Altmetric Report for my recently published article “What Is Interdisciplinary Communication? Reflections on the Very Idea of Disciplinary Integration,” Synthese 190 (11): 1865-1879. DOI:10.1007/s11229-012-0179-7. There is also a preprint of the article available here.

Highlights of the Altmetric Report:

Compared to all articles in Synthese

So far Altmetric has tracked 78 articles from this journal. They typically receive a little less attention than average, with a mean score of 2.7 vs the global average of 3.6. This article has done particularly well, scoring higher than 99% of its peers. It’s actually the highest scoring article in this journal that we’ve seen so far.

All articles of a similar age

Older articles will score higher simply because they’ve had more time to accumulate mentions. To account for age we can compare this score to the 63,346 tracked articles that were published within six weeks on either side of this one in any journal. This article has done particularly well, scoring higher than 94% of its contemporaries.

Other articles of a similar age in Synthese

We’re also able to compare this article to 7 articles from the same journal and published within six weeks on either side of this one. This article has scored higher than all of them.

All articles

More generally, Altmetric has tracked 1,275,993 articles across all journals so far. Compared to these this article has done particularly well and is in the 96th percentile: it’s in the top 5% of all articles ever tracked by Altmetric.


Percentiles and ranks can obviously change with new publications. I also wonder whether one’s Altmetric score is not actually more a measure of one’s social media influence than it is a measure of the buzz surrounding an article — or maybe the two reduce to the same thing. But I sure like the sound of a number 1 ranking!

Nigel Warburton’s negative vision of what philosophy isn’t

Philosopher Nigel Warburton, of Philosophy Bites fame, has just resigned his academic post at the Open University to pursue other opportunities. The Philosopher’s Magazine conducts an extended interview with Warburton here. Much of what he reveals in this interview is both entertaining and, in my opinion, true.

But one aspect of the interview especially caught my attention. After offering several criticisms of academic philosophy today with which I’m in total agreement (in particular the tendency of hiring committees to hire clones of themselves rather than enhancing the diversity of the department), Warburton offers what he seems to view as the ultimate takedown of academic philosophy. I quote this section in full, below. If you’ve been paying any attention to this blog or our posts at CSID, you’ll understand why, immediately.

He reserves particular venom for the REF, the Research Excellence Framework, a system of expert review which assesses research undertaken in UK higher education, which is then used to allocate future rounds of funding. A lot of it turns on the importance of research having a social, economic or cultural impact. It’s not exactly the sort of thing that philosophical reflection on, say, the nature of being qua being is likely to have. He leans into my recorder to make sure I get every word:

“One of the most disturbing things about academic philosophy today is the way that so many supposed gadflies and rebels in philosophy have just rolled over in the face of the REF – particularly by going along with the idea of measuring and quantifying impact,” he says, making inverted commas with his fingers, “a technical notion which was constructed for completely different disciplines. I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that? To think that funding in higher education in philosophy is going to be determined partly by people’s creative writing about how they have impact with their work. Just by entering into this you’ve compromised yourself as a philosopher. It’s not the kind of thing that Socrates did or that Hume did or that John Locke did. Locke may have had patrons, but he seemed to write what he thought rather than kowtowing to forces which are pushing on to us a certain vision, a certain view of what philosophical activities should be. Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”

Please tell us how you really feel, Dr. Warburton.

In the US, we are not subject to the REF. But we are subject to many, many managerial requirements, including, if we seek grant funding, the requirement that we account for the impact of our research. We are, of course, ‘free’ to opt out of this sort of requirement simply by not seeking grant funding. Universities in the UK, however, are not ‘free’ to opt out of the REF. So, are the only choices open to ‘real’ philosophers worthy of the name resistance or removing oneself from the university, as Warburton has chosen?

I think not. My colleagues and I recently published an article in which we present a positive vision of academic philosophy today. A key aspect of our position is that the question of impact is itself a philosophical, not merely a technical, problem. Philosophers, in particular, should own impact rather than allowing impact to be imposed on us by outside authorities. The question of impact is a case study in whether the sort of account of freedom as non-domination offered by Pettit can be instantiated in a policy context, in addition to being posited in political philosophy.

Being able to see impact as a philosophical question rests on being able to question the idea that the only sort of freedom worth having is freedom from interference. If philosophy matters to more than isolated individuals — even if connected by social media — then we have to realize that any philosophically rich conception of liberty must also include responsibility to others. Our notion of autonomy need not be reduced to the sort of non-interference that can only be guaranteed by separation (of the university from society, as Humboldt advocated, or of the philosopher from the university, as Warburton now suggests). Autonomy must be linked to accountability — and we philosophers should be able to tackle this problem without being called out as non-philosophers by someone who has chosen to opt out of this struggle.

Ross Mounce lays out easy steps towards open scholarship | Impact of Social Sciences

Excellent post with lots of good information here:

Easy steps towards open scholarship | Impact of Social Sciences.

There are some especially good thoughts about preprints.

Ross is right, I think, that using preprints is uncommon in the Humanities. For anyone interested in exploring the idea, I recommend the Social Epistemology Review and Reply Collective. Aside from being one of the few places to publish preprints in the Humanities, the SERRC preprints section also allows for extended responses to posted preprints, such as this one. The one major drawback (as Ross points out about sites such as Academia.edu) is that the SERRC doesn’t really archive preprints in the way that, say, a library would. Of course, if you happen to have an institutional repository, you can use that, as well.

Another site worth mentioning in this context is peerevaluation.org. I posted the same preprint on my page there. There are two interesting features of the peerevaluation.org site. One is that it uses interesting metrics, such as the ‘trust’ function. Similar to Facebook ‘likes’, but much richer, the ‘trust’ function allows users to build a visible reputation as a ‘trusted’ reviewer. What’s that, you ask? As a reviewer? Yes, and this is the second interesting feature of peerevaluation.org. It allows one to request reviews of posted papers. It also keeps track of who reviewed what. In theory, this could allow for something like ‘bottom-up’ peer review by genuine peers. One drawback of peerevaluation.org is that not enough people actually participate as reviewers. I encourage you to visit the site and serve as a reviewer to explore the possibilities.

As a humanist who would like to take advantage of preprints, both to improve my own work and for the citation advantage Ross notes, it’s difficult not to envy the situation in Physics and related areas (with arXiv). But how does such a tradition start? There are places one can use to publish preprints in the humanities. We need to start using them.

Quick thoughts on Challenges of Measuring Social Impact Using Altmetrics

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

This and other claims worth discussing appear in this hot-off-the-presses (do we need another metaphor now?) article from Mike Taylor (@herrison):

The Challenges of Measuring Social Impact Using Altmetrics – Research Trends.

In response to the quote above, my own proposal would be to incorporate altmetrics into an overall narrative of impact. In other words, rather than have something like a ‘separate’ altmetric report, I’d rather have a way of appealing to altmetrics as one form of empirical evidence to back up claims of impact.

Although it is tempting to equate social reach (i.e., getting research into the hands of the public), it is not the same as measuring social impact. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics, at the moment the discipline lacks an approach towards understanding the wider context necessary to understand both the social reach and impact of scholarly work.

Good point about the difference between ‘social reach’ and ‘social impact’. My suggestion for developing an approach to understanding the link between social reach and social impact would be something like this: social reach provides evidence of a sort of interaction. What’s needed to demonstrate social impact, however, is evidence of behavior change. Even if one cannot establish a direct causal relation between sharing and behavior change, demonstrating that one’s research ‘reached’ someone who then changed her behavior in ways consistent with what one’s paper says would generate a plausible narrative of impact.


Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope this would provide insights into understanding social impact – there are a number of essential steps that are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

My response to this may be predictable, but here goes anyway. I am all for improving the technology. Using Natural Language Processing, as Taylor suggests a bit later, sounds promising. But I think there’s a fundamental problem with comparing altmetrics to bibliometrics and trying to bring the former up to the standards of rigor of the latter. The problem is that this view privileges technology and technical rigor over judgment. Look, let’s make altmetrics as rigorous as we can. But please, let’s not make the mistake of thinking we’ve got the question of impact resolved once altmetrics have achieved the same sort of methodological rigor as bibliometrics! The question of impact can be answered better with help from technology. But to assume that technology can answer the question on its own (as if it existed independently of human beings, or we from it) is to fall into the trap of the technological fix.