What does it mean to prepare for life in ‘Humanity 2.0’?

Francis Remedios has organized a session at the 4S Annual Meeting in which he, David Budtz Pedersen, and I will serve as critics of Steve Fuller’s book Preparing for Life in Humanity 2.0. We’ll be live tweeting as much as possible during the session, using the hashtag #humanity2 for those who want to follow. There is also a more general #4s2013 hashtag that should be interesting to follow for the next few days.

Here are the abstracts for our talks:

Humanity 2.0, Synthetic Biology, and Risk Assessment

Francis Remedios, Social Epistemology Editorial Board member

As a follow-up to Fuller’s Humanity 2.0, which is concerned with the impact of the biosciences and nanosciences on humanity, Preparing for Life in Humanity 2.0 provides a more detailed analysis. The possible futures discussed are the ecological, the biomedical, and the cybernetic. In The Proactionary Imperative, Fuller and Lipinska aver that the proactionary principle, which embraces risk taking as essential to the human condition, should be favored over the precautionary principle, which counsels risk aversion. In terms of policy and ethics, which version of risk assessment should be used for synthetic biology, a branch of biotechnology? With synthetic biology, life is created from inanimate material; indeed, synthetic biology has been dubbed life 2.0. Should one principle be favored over the other?

The Political Epistemology of Humanity 2.0

David Budtz Pedersen, Center for Semiotics, Aarhus University

In this paper I confront Fuller’s conception of Humanity 2.0 with the techno-democratic theories of Fukuyama (2003) and Rawls (1999). What happens to democratic values such as inclusion, rule of law, equality, and fairness in an age of technology-intensive, output-based policymaking? Traditional models of input democracy are based on the moral intuition that the unintended consequences of natural selection are undeserved and call for social redress and compensation. In humanity 2.0, however, these unintended consequences become intended ones as an effect of bioengineering and biomedical intervention. This, I argue, leads to an erosion of the natural luck paradigm on which standard theories of distributive justice rest. Hence, people can no longer be expected to recognize each other as natural equals. Now compare this claim to Fuller’s idea that the welfare state needs to ensure the collectivization of the burdens and benefits of radical scientific experimentation. Even if this might energize the welfare system and deliver new momentum to the welfare state in an age of demographic change, it is not clear on what basis this political disposition for collectivizing such scientific benefits rests. In short, it seems implausible that the new techno-elites, who have translated the unintended consequences of natural selection into intended ones, will be convinced to distribute the benefits of scientific experiments to the wider society. If the biosubstrate of the political elite is radically different from that of the disabled in terms of intelligence, life expectancy, bodily performance, and so on, it is no longer clear what the basis of redistribution and fairness should be. Hence, I argue that important elements of traditional democracy remain robust and necessary to vouch for the legitimacy of humanity 2.0.
Fuller’s Categorical Imperative: The Will to Proaction

J. Britt Holbrook, Georgia Institute of Technology

Two 19th-century philosophers – William James and Friedrich Nietzsche – and one on the border of the 18th and 19th centuries – Immanuel Kant – underlie Fuller’s support for the proactionary imperative as a guide to life in ‘Humanity 2.0’. I make reference to the thought of these thinkers (James’s will to believe, Nietzsche’s will to power, and Kant’s categorical imperative) in my critique of Fuller’s will to proaction. First, I argue that, despite a superficial resemblance, James’s view about the risk of uncertainty does not map well onto the proactionary principle. Second, however, I argue that James’s notion that our epistemological preferences reveal something about our ‘passional nature’ connects with Nietzsche’s idea of the will to power in a way that allows us to diagnose Fuller’s ‘moral entrepreneur’ as revelatory of Fuller’s own ‘categorical imperative’. But my larger critique rests on the connection between Fuller’s thinking and that of Wilhelm von Humboldt. I argue that Fuller accepts not only Humboldt’s ideas about the integration of research and education, but also – and this is the main weakness of Fuller’s position – Humboldt’s less recognized thesis about the relation between knowledge and society. Humboldt defends the pursuit of knowledge for its own sake on the grounds that this is necessary to benefit society. I criticize this view and argue that Fuller’s account of the public intellectual as an agent of distributive justice is inadequate to escape the critique of the pursuit of knowledge for its own sake.

Developing Metrics for the Evaluation of Individual Researchers – Should Bibliometricians Be Left to Their Own Devices?

So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.

That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!

The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.

I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:

Bibliometrics are increasingly being used in ways that go beyond their design. Bibliometricians are now asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since the beginning of the development of bibliometrics. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.

This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are also aware of this phenomenon, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics” is one indication of such awareness. Recent sessions at ISSI in Vienna and STI2013 in Berlin, on which Wouters et al. report, are other indicators that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.

That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.

It’s not so much that I want to blame them for not having thought through other possible uses of their metrics — holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us into this mess in the first place.

Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.

Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.

With the development of individual-level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world so that they end up steering us rather than us driving them. We don’t want what’s happened with the car to happen with bibliometrics. What we want is to stop at the level at which metrics for individual researchers maximize the power and creativity of those researchers. Once we standardize metrics, it becomes that much easier to institutionalize them.

It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.

Update

I have accepted an offer to become Visiting Assistant Professor at the Georgia Institute of Technology. I am thrilled to return to Atlanta for a year. I am also thrilled to join the School of Public Policy at Georgia Tech.

I expect to resume posting more regularly after the move. But I’ll pat myself on the back for not having missed a PhyloPic Phryday … yet!

More soon from the ATL … Hotlanta … no more Texas T.

Open Access and Its Enemies, Redux

I don’t have time to be doing this, but it’s important. Making time is a state of mind — as, claims Cameron Neylon, is ‘Open’:

Being open as opposed to making open resources (or making resources open) is about embracing a particular form of humility. For the creator it is about embracing the idea that – despite knowing more about what you have done than any other person – the use and application of your work is something that you cannot predict.

There’s a lot to unpack, even in this short excerpt from Neylon’s post. Whether, for instance, the idea of ‘humility’ is captured by being open to unintended applications of one’s work — surely that’s part, but only part, of being open — deserves further thought. But I really do think Cameron is on to something with the idea that being open entails a sort of humility.

To see how, it’s instructive to read through Robin Osborne’s post on ‘Why open access makes no sense‘:

For those who wish to have access, there is an admission cost: they must invest in the education prerequisite to enable them to understand the language used. Current publication practices work to ensure that the entry threshold for understanding my language is as low as possible. Open access will raise that entry threshold. Much more will be downloaded; much less will be understood.

There’s a lot to unpack here, as well. There’s a sort of jiujitsu going on in this excerpt that requires that one be at least familiar with — if it is not one’s characteristic feeling — the feeling that no one will ever understand. What is obvious, however, is Osborne’s arrogance: there is a price to be paid to understand me, and open access will actually raise that price.

In my original talk on “Open Access and Its Enemies” I traced one source of disagreement about open access to different conceptions of freedom. Those with a negative concept of freedom are opposed to any sort of open access mandates, for instance, while those appealing to a positive concept of freedom might accept certain mandates as not necessarily opposed to their freedom. There may be exceptions, of course, but those with a positive concept of freedom tend to accept open access, while those with a negative view of freedom tend to oppose it. The two posts from Neylon and Osborne reveal another aspect of what divides academics on the question of open access — a different sense of self.

For advocates of humility, seeing our selves as individuals interferes with openness. In fact, it is only in contrast to those who view the self as an individual that the appeal to humility makes sense. The plea is that they temper their individualistic tendencies and humble their individual selves in the service of our corporate self. For advocates of openness, the self is something that really comes about only through interaction with others.

Advocates of elitism acknowledge that the social bond is important. But it is not, in itself, constitutive of the self. On the contrary, the self is what persists independently of others, whether anyone else understands us or not. Moreover, understanding me — qua individual — requires that you — qua individual — discipline yourself, learn something, be educated. Indeed, to become a self in good standing with the elite requires a certain self-abnegation — but only for a time, and only until one can re-assert oneself as an elite individual. Importantly, self-abnegation is a temporary stop on the way to full self-realization.

Self-sacrifice is foreign to both the advocate of humility and the advocate of elitism, I fear. Yet it is only through self-sacrifice that communication is possible. Self-sacrifice doesn’t dissolve the individual self completely into the corporate self. Nor does self-sacrifice recognize temporary self-abnegation on the road to self-assertion as the path to communication. Self-sacrifice takes us beyond both, in that it requires that we admit that content is never what’s communicated. A self with a truly open mindset would have to be able to experience this. Alas, no one will ever understand me!


The digital scholar

I couldn’t sleep last night, so I opened up my computer. That’s guaranteed not to help me sleep, of course. But the work of a digital scholar is never done.

I checked Twitter while my Outlook inbox was updating. It’s interesting how these two digital tools work in concert. Twitter is ephemeral and invites quick scans. Email, it turns out, slows me down. And, since email also takes longer to load, I usually start my day checking Twitter first.

Today, I hit on a tweet by Mark Carrigan (@mark_carrigan) to this post on the Sociological Imagination blog. It’s worth reading in its own right, but it also led me to look up Martin Weller’s (@mweller) book The Digital Scholar, which is available to read free here, and which is related to this blog. I’ve just started to read it, but there’s something approaching a phenomenology of digital scholarship going on there. I’ll be interested to compare it with Kathleen Fitzpatrick’s (@kfitz) Planned Obsolescence, which also has an associated blog. I wonder how much each of them is having thoughts similar to mine. It’s interesting to be able to discover community in the digital realm. I even doubt that the ease of communication these days (in the sense of Bataille’s ‘weak communication’) interferes with the sort of communication (in the sense of Bataille’s ‘strong communication’) that makes community possible.

As I was flitting back and forth between email, this post, and Twitter, Mark Carrigan tweeted something about the difference between blogs and physical notebooks. I think he’s right that there’s a difference. I also think one can still use blogs much as one used notebooks in the past. I’m doing so here. But I’ll also publish this post so others can add their thoughts to mine.

Should we develop an alt-H-index? | Postmodern Research Evaluation | 4 of ?

In the last post in this series, I promised to present an alternative to Snowball Metrics — something I playfully referred to as ‘Snowflake Indicators’ in an effort to distinguish what I am proposing from the grand narrative presented by Snowball Metrics. But two recent developments have sparked a related thought that I want to pursue here first.

This morning, a post on the BMJ blog asks the question: Who will be the Google of altmetrics? The suggestion that we should have such an entity comes from Jason Priem, of course. He’s part of the altmetrics avant garde, and I always find what he has to say on the topic provocative. The BMJ blog post is also worth reading to get the lay of the land regarding the leaders of the altmetrics push.

Last Friday, the editors of the LSE Impact of Social Sciences blog contacted me and asked whether they might replace our messy ’56 indicators of impact’ with a cleaned-up and clarified version. I asked them to add it in, without simply replacing our messy version with their clean version, and they agreed. You can see the updated post here. I’ll come back to this later in more detail. For now, I want to ask a different, though related, question.

COULD WE DEVELOP AN ALT-H-INDEX?

The H-index is meant to be a measure of the productivity and impact of an individual scholar’s research on other researchers, though recently I’ve seen it applied to journals. The original idea is to find the largest number h such that h of a researcher’s publications have each been cited at least h times. Of course, one’s actual H-index will vary based on the citation database one is using. According to Scopus, for instance, my H-index is 4. A quick look at my ResearcherID shows that my H-index there would be 1. And if we look at Google Scholar, we see that my H-index is 6. Differences such as these — and the related question of the value of such metrics as the H-index — are the subject of research being performed now by Kelli Barr (one of our excellent UNT/CSID graduate students).
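The rule itself is mechanical. Here is a minimal sketch in Python of how one might compute it from a list of citation counts; the counts in the example are hypothetical, chosen purely for illustration:

```python
def h_index(citation_counts):
    """Return the largest h such that h items each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications:
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4
```

The same function works no matter where the counts come from, which is exactly why the choice of database matters so much: change the input data, and the ‘objective’ number changes with it.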

Now, if it’s clear enough how the H-index is generated … well, let’s move on for the moment.

How would an alt-H-index be generated?

There are several alternatives here. But let’s pursue the one that’s most parallel to the way the H-index is generated. So, let’s substitute products for articles and mentions for citations. One’s alt-H-index would then be the largest number P such that P products have at least P mentions each in the sources tracked by altmetricians.

I don’t have time at the moment to calculate my full alt-H-index. But let’s go with some things I have been tracking: my recent correspondence piece in Nature, the most recent LSE Impact of Social Sciences blog post (linked above), and my recently published article in Synthese on “What Is Interdisciplinary Communication?” [Of course, limiting myself to 3 products would mean that my alt-H-index couldn’t go above 3 for the purposes of this illustration.]

According to Impact Story, the correspondence piece in Nature has received 41 mentions (26 tweets, 6 Mendeley readers, and 9 CiteULike bookmarks). The LSE blog post has received 114 mentions (113 tweets and 1 bookmark). And the Synthese paper has received 5 (5 tweets). So, my alt-H-index would be 3, according to Impact Story.

According to Altmetric, the Nature correspondence has received 125 mentions (96 tweets, 9 Facebook posts/shares, 3 Google+ shares, blogged by 11, and 6 CiteULike bookmarks), the LSE Blog post cannot be measured, and the Synthese article has 11 mentions (3 tweets, 3 blogs, 1 Google+, 2 Mendeley, and 2 CiteULike). So, my alt-H-index would be 2, according to Altmetric data.
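Plugging the mention counts just listed into the h_index sketch from above reproduces both figures:

```python
# Mention counts per product, as reported above:
impact_story = [41, 114, 5]  # Nature piece, LSE post, Synthese article
altmetric = [125, 11]        # Nature piece, Synthese article (LSE post not measurable)

print(h_index(impact_story))  # -> 3
print(h_index(altmetric))     # -> 2
```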

Comparing H-index and alt-H-index

So, as I note above, I’ve limited the calculations of my alt-H-index to three products. I have little doubt that my alt-H-index is considerably higher than my H-index — and would be so for most researchers who are active on social media and who publish in alt-academic venues, such as scholarly blogs (or, if you’re really cool like my colleague Adam Briggle, in Slate), or for fringe academics, such as my colleague Keith Brown, who typically publishes almost exclusively in non-scholarly venues.

This illustrates a key difference between altmetrics and traditional bibliometrics. Altmetrics are considerably faster than traditional bibliometrics. It takes a long time for one’s H-index to go up. ‘Older’ researchers typically have higher H-indices than ‘younger’ researchers. I suspect that ‘younger’ researchers may well have higher alt-H-indices, since ‘younger’ researchers tend to be more active on social media and more prone to publish in the sorts of alt-academic venues mentioned above.

But there are also some interesting similarities. First, it makes a difference where you get your data. My H-index is 4, 1, or 6, depending on whether we use data from Scopus, Web of Science, or Google Scholar. My incomplete alt-H-index is either 3 or 2, depending on whether we use data from Impact Story or Altmetric. An interesting side note that ties in with the question of the Google of altmetrics: the reason my alt-H-index differs between Impact Story and Altmetric is that Altmetric requires a DOI. With Impact Story, you can import URLs, which makes it considerably more flexible for certain products. In that respect, at least, Impact Story is more like Google Scholar — it covers more — whereas Altmetric is more like Scopus. That’s a sweeping generalization, but I think it’s basically right, in this one respect.

But these differences raise the more fundamental question, and one that serves as the beginning of a response to the update of my LSE Impact of Social Sciences blog piece:

SHOULD WE DEVELOP AN ALT-H-INDEX?

It’s easy enough to do it. But should we? Asking this question means exploring some of the larger ramifications of metrics in general — the point of my LSE Impact post. If we return to that post now, I think it becomes obvious why I wanted to keep our messy list of indicators alongside the ‘clarified’ list. The LSE-modified list divides our 56 indicators into two lists: one of ’50 indicators of positive impact’ and another of ‘6 more ambiguous indicators of impact’. Note that the H-index is included on the ‘indicators of positive impact’ list. The assumption that there is a clear boundary between ‘indicators of positive impact’ and ‘more ambiguous indicators of impact’ — or ‘negative metrics’, as the Nature editors suggested — is precisely the sort of thinking our messy list of 56 indicators is meant to undermine.

The H-index is ambiguous. It embodies all sorts of value judgments. It’s not a simple matter of working out the formula. The numbers that go into the formula will differ, depending on the data source used (Scopus, Web of Science, or Google Scholar), and these data also depend on value judgments. Metrics tend to be interpreted as objective. But we really need to reexamine what we mean by this. Altmetrics are the same as traditional bibliometrics in this sense — all metrics rest on prior value judgments.

As we note at the beginning of our Nature piece, articles may be cited for ‘positive’ or ‘negative’ reasons. More citations do not always mean a more ‘positive’ reception for one’s research. Similarly, a higher H-index does not always mean that one’s research has been more ‘positively’ received by peers. The simplest thing it means is that one has been at it longer. But even that is not necessarily the case. Similarly, a higher alt-H-index probably means that one has more social media influence — which, we must realize, is ambiguous. It’s not difficult to imagine that quite a few ‘more established’ or more traditional researchers could interpret a higher alt-H-index as indicating a lack of serious scholarly impact.

Here, then, is the bottom line: there are no unambiguously positive indicators of impact!

I will, I promise, propose my Snowflake Indicators framework as soon as possible.

Other infrequently asked questions about impact

Here are some other infrequently asked questions about impact that didn’t make it into the final cut of my piece at the LSE Impact of Social Sciences Blog.

Why conflate impact with benefit?

Put differently, why assume that all impacts are positive or benefits to society? Obviously, no one wants publicly supported research not to benefit the public. It’s even less palatable to consider that some publicly supported research may actually harm the public. But it’s wishful thinking to assume that all impacts are beneficial. Some impacts that initially appear beneficial may have negative consequences. And seemingly negative indicators might actually show that one is having an impact – even a positive one. I discuss this point with reference to Jeffrey Beall, recently threatened with a $1 billion lawsuit, here.

The question of impact is an opportunity to discuss such issues, rather than retreating into the shelter of imagined value-neutrality or objectivity. It was to spark this discussion that we generated a CSID-specific list – it is purposely idiosyncratic.

How can we maximize our impact?

I grant that ‘How can we maximize our impact?’ is a logistical question; but it incorporates a healthy dose of logos. Asking how to maximize our impacts should appeal to academics. We may be choosy about the sort of impact we desire and on whom; but no one wants to have minimal impact. We all desire to have as much impact as possible. Or, if we don’t, please get another job and let some of us who do want to make a difference have yours.

Wherefore impact?

For what reason are we concerned with the impact of scholarly communication? It’s the most fundamental question we should be asking and answering. We need to be mindful that whatever metrics we devise will have a steering effect on the course of scholarly communications. If we are going to steer scholarly communications, then we should discuss where we plan to go – and where others might steer us.