Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Two Watersheds for Open Access?

This past week I taught “Two Watersheds,” a chapter from Ivan Illich’s Tools for Conviviality. I got some interesting reactions from my students, most of whom are budding engineers. But that’s not what this post is about.

I do want to talk a bit about Illich’s notion of the two watersheds, however. Illich illustrates the idea with reference to medicine, claiming that 1913 marks the first watershed: in that year, one finally had a greater than 50% chance that someone educated in medical school (i.e., a doctor) would be able to prescribe an effective treatment for one’s ailment. At that point, modern medicine had caught up with shamans and witch doctors. It then rapidly began to outperform them, and people became healthier as a result.

By the mid-1950s, however, something had changed. Medicine had begun to treat people as patients, and more and more resources were devoted to extending unhealthy life rather than to keeping people healthy or restoring health. Medicine became an institutionalized bureaucracy rather than a calling. Illich picks (admittedly arbitrarily) 1955 to mark this second watershed.

Illich’s account of the two watersheds in medicine is applicable to other technological developments as well.

A couple of weeks ago, Richard Van Noorden published a piece in Nature under the headline “Half of 2011 papers now free to read.” Van Noorden does a good job of laying out the complexities of this claim (‘free’ is not necessarily equivalent to ‘open access’, the robot used to gather the data may not be accurate, and so on), which was made in a report to the European Commission. But the most interesting question raised in the piece is whether the 50% figure represents a “tipping point” for open access.

The report, which was not peer reviewed, calls the 50% figure for 2011 a “tipping point”, a rhetorical flourish that [Peter] Suber is not sure is justified. “The real tipping point is not a number, but whether scientists make open access a habit,” he says.

I’m guessing that Illich might agree both with the report and with Suber’s criticism, but that he might also disagree with both. But let’s not kid ourselves, here. I’m talking more about myself than I am about Illich — just using his idea of the two watersheds to make a point.

The report simply defines the tipping point as more than 50% of papers available for free. This is close enough to the way Illich defines the first watershed in medicine. So, let’s suppose, for the sake of argument, that what the report claims is true. Then we can say that 2011 marks the first watershed of open access publishing.

What should we expect? There’s a lot of hand-wringing from traditional scholarly publishers about what open access will do to their business model (blow it up, basically). But many of the outcomes that the strongest advocates of open access promise when urging us to make it a habit will likely come to pass. Research will become more efficient. Non-researchers will be able to read the research without restriction (no subscription required, no paywall encountered). If they can’t understand a piece of research, they’ll be able to sign up for a MOOC offered by Harvard or MIT or Stanford and figure it out. Openness in general will increase, along with scientific and technological (and maybe even artistic and philosophical) literacy.

Yes, for-profit scholarly publishers and most colleges and universities will end up in the same boat as the shamans and witch doctors did once medicine took over in 1913. But aren’t we better off now than when the sick had only folk remedies and faith to rely on?

Perhaps during this time, after the first watershed and before the second, open access can become a habit for researchers, much like getting regular exercise and eating right became habits after medicine’s first watershed. Illich’s claim is that the good times following the first watershed really are good for most of us … for a while.

Of course, there are exceptions. Shamans and witch doctors had their business models disrupted. Open access is likely to do the same for scholarly publishers. MOOCs may do the same for many universities. But universities and publishers will not go away overnight. In fact, we still have witch doctors these days.

The real question is not whether a number or a behavior marks the tipping point — crossing the first watershed. Nor is the question what scholarly publishers and universities will do if 2011 indeed marks the first watershed of openness. The real question is whether we can design policies for openness that prevent us from reaching the second watershed, when openness goes beyond a healthy habit and becomes a bane. Because once openness becomes an institutionalized bureaucracy, we won’t be talking only about peer reviewed journal articles being openly, easily, and freely accessible to anyone for use and reuse.

SPARC Innovator Award | SPARC

There is something very appealing about the simplicity of using a single number to indicate the worth of a scientific paper.

But a growing group of scientists, publishers, funders, and research organizations are increasingly opposed to the broad use of the Journal Impact Factor (JIF) as the sole measure used to assess research and researchers.


SPARC Innovator Award | SPARC.

Open Access and Its Enemies, Redux

I don’t have time to be doing this, but it’s important. Making time is a state of mind — as, Cameron Neylon claims, is ‘Open’:

Being open as opposed to making open resources (or making resources open) is about embracing a particular form of humility. For the creator it is about embracing the idea that – despite knowing more about what you have done than any other person – the use and application of your work is something that you cannot predict.

There’s a lot to unpack, even in this short excerpt from Neylon’s post. Whether, for instance, the idea of ‘humility’ is captured by being open to unintended applications of one’s work — surely that’s part, but only part, of being open — deserves further thought. But I really do think Cameron is on to something with the idea that being open entails a sort of humility.

To see how, it’s instructive to read through Robin Osborne’s post on ‘Why open access makes no sense’:

For those who wish to have access, there is an admission cost: they must invest in the education prerequisite to enable them to understand the language used. Current publication practices work to ensure that the entry threshold for understanding my language is as low as possible. Open access will raise that entry threshold. Much more will be downloaded; much less will be understood.

There’s a lot to unpack here, as well. There’s a sort of jiujitsu going on in this excerpt that requires that one be at least familiar with — if it is not one’s characteristic feeling — the feeling that no one will ever understand. What is obvious, however, is Osborne’s arrogance: there is a price to be paid to understand me, and open access will actually raise that price.

In my original talk on “Open Access and Its Enemies” I traced one source of disagreement about open access to different conceptions of freedom. Those with a negative concept of freedom are opposed to any sort of open access mandates, for instance, while those appealing to a positive concept of freedom might accept certain mandates as not necessarily opposed to their freedom. There may be exceptions, of course, but those with a positive concept of freedom tend to accept open access, while those with a negative view of freedom tend to oppose it. The two posts from Neylon and Osborne reveal another aspect of what divides academics on the question of open access — a different sense of self.

For advocates of humility, seeing our selves as individuals interferes with openness. In fact, it is only in contrast to those who view the self as an individual that the appeal to humility makes sense. The plea is that they temper their individualistic tendencies and humble their individual selves in the service of our corporate self. For advocates of openness, the self is something that really comes about only through interaction with others.

Advocates of elitism acknowledge that the social bond is important. But it is not, in itself, constitutive of the self. On the contrary, the self is what persists independently of others, whether anyone else understands us or not. Moreover, understanding me — qua individual — requires that you — qua individual — discipline yourself, learn something, be educated. Indeed, to become a self in good standing with the elite requires a certain self-abnegation — but only for a time, and only until one can re-assert oneself as an elite individual. Importantly, self-abnegation is a temporary stop on the way to full self-realization.

Self-sacrifice is foreign both to the advocate of humility and to the advocate of elitism, I fear. Yet it is only through self-sacrifice that communication is possible. Self-sacrifice doesn’t dissolve the individual self completely into the corporate self. Nor does self-sacrifice recognize temporary self-abnegation on the road to self-assertion as the path to communication. Self-sacrifice takes us beyond both, in that it requires that we admit that content is never what’s communicated. A self with a truly open mindset would have to be able to experience this. Alas, no one will ever understand me!


Hawk-Eye, stakes, and murder: two sports articles, no binary decisions

I noticed two articles on sports today, both concerned with officials’ decisions, and their juxtaposition raises more questions than either does alone.

The first was an article in The Guardian questioning Hawk-Eye, the technology used in tennis to determine whether balls landed in or out of bounds, and thus usually to determine points won or lost. Anyone who watched the Wimbledon Gentlemen’s Final yesterday witnessed Novak Djokovic yelling “What is going on?!” at the umpire when calls — including challenges that were resolved by appealing to Hawk-Eye — seemed continually to go against him. The article describes the results of a paper that critiques Hawk-Eye’s accuracy and ends with an interesting question:

The paper concludes that Hawk-Eye should be used as an aid to human judgement (their italics), and that, if used with a little more nuance, it could provide added enjoyment of the games involved and public understanding of technology, its uses and its limitations. What do you think? Do you want a simple binary decision in your sports, or would you rather know the accuracy of Hawk-Eye’s output?

I’m old enough to remember a time before Hawk-Eye, and I remember being skeptical when it was introduced. Why think that a model of what happened would be better than human judgment? And, really, is the issue which is more accurate? Part of sports is overcoming bad calls. Our collective mania for objectivity borders on madness.
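For concreteness, here is a minimal sketch of what the paper’s non-binary alternative might look like: instead of flashing IN or OUT, report how confident the system is. It assumes Hawk-Eye’s reconstruction error is roughly Gaussian with a standard deviation of 3.6 mm (the average error figure usually quoted for the system) and a flat prior on the ball’s true position; both assumptions are mine, not the paper’s.

```python
from math import erf, sqrt

SIGMA_MM = 3.6  # assumed standard deviation of Hawk-Eye's error, in mm

def p_ball_in(measured_mm_out: float, sigma_mm: float = SIGMA_MM) -> float:
    """Probability the ball actually touched the line, given the measured
    distance beyond the line (positive = called out), a Gaussian error
    model, and a flat prior on the true position."""
    return 0.5 * (1 - erf(measured_mm_out / (sigma_mm * sqrt(2))))

for d in (1, 2, 5, 10):
    print(f"measured {d:>2} mm out -> P(actually in) = {p_ball_in(d):.2f}")
```

On those assumptions, a ball measured a single millimetre out still has roughly a 39% chance of having clipped the line, which is exactly the nuance a bare OUT graphic hides.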

To answer the author’s question at the end of the article: neither. I’d prefer if we took such technologies, including instant replay, completely out of all game-time decisions in sporting events. If leagues want to review officials’ decisions later, after the game has been decided, fine. But this rush to judgment, as if we have to have the ‘objectively’ correct answer right then and there, is a bane to sports.

Do we really enjoy Djokovic’s comparatively mild protests more than John McEnroe’s? You cannot be serious!

On the same page as the Hawk-Eye article is a link to another article in The Guardian, this one on a recent local soccer match in Brazil during which a referee stabbed a player, then was mobbed, stoned, and decapitated by the angry crowd. Say what?! Look, I’m from Alabama, so I’m well aware of people who do really stupid things in the name of sports. As an Auburn fan, I’m glad no one has decapitated Harvey Updyke, yet. But it’s interesting how the story of the Brazilian double murder (yes, the player stabbed by the ref also died) is treated — as an image problem:

Brazil faces mounting pressure to show it is a safe place for tourists before 12 cities host the 2014 World Cup and Rio de Janeiro the Olympics in 2016. The Confederations Cup in June was marked by violence as anti-government protestors angered by the amount of money being spent on the events clashed with police.

So, unless Brazil can clean up its act and tone down the violence to a level that’s acceptable to tourists, the World Cup and the Olympics are in trouble? Someone must be Djoking!

When did we lose all perspective about what’s important? Sports are a form of entertainment, one that’s more entertaining when we take it seriously. But it’s possible to take sports too seriously. Killing someone is an extreme example, obviously. But treating murder as an image problem reveals that we take sports too seriously in other ways, as well. As if the real problem is whether Brazil will respond to mounting pressure to show it is a safe place for tourists in time to save the World Cup. Imagine the economic fall-out were people to stay away in droves! As if that were the problem, rather than the problem being our thinking of sports in economic — or technoscientific — terms. Or our thinking of the protests as a problem for sports, rather than an expression of a cultural moment.

So, to rephrase the question raised by the initial article: Do you want simple, binary decisions rendered by someone — or something — else, or would you rather do the hard work of thinking?

Andy Stirling on why the precautionary principle matters | Science | guardian.co.uk

SPRU Professor Andy Stirling is beginning a series in The Guardian on the precautionary principle. Stirling’s first article paints an optimistic picture:

Far from the pessimistic caricature, precaution actually celebrates the full depth and potential for human agency in knowledge and innovation. Blinkered risk assessment ignores both positive and negative implications of uncertainty. Though politically inconvenient for some, precaution simply acknowledges this scope and choice. So, while mistaken rhetorical rejections of precaution add further poison to current political tensions around technology, precaution itself offers an antidote – one that is in the best traditions of rationality. By upholding both scientific rigour and democratic accountability under uncertainty, precaution offers a means to help reconcile these increasingly sundered Enlightenment cultures.

via Why the precautionary principle matters | Andy Stirling | Science | guardian.co.uk.

Stirling’s work on the precautionary principle is some of the best out there, and Adam Briggle and I cite him in our working paper on the topic. I look forward to reading the rest of Stirling’s series. Although I’m a critic of the Enlightenment, I don’t reject it wholesale. In fact, I think rational engagement with the thinkers of the Enlightenment — and some of its most interesting heirs, including Stirling and Steve Fuller, who’s a proponent of proaction over precaution — is important. So, stay tuned for more!

Snowflake Indicators | Postmodern Research Evaluation | Part 5 of ?

No two snowflakes are alike. No two people are the same.

— Horshack

[Image: Snowflakes by Juliancolton2 on flickr]

Earlier posts in this series attempted to lay out the ways in which Snowball Metrics present themselves as a totalizing grand narrative of research evaluation. Along with attempting to establish a “recipe” that anyone can follow — or that everyone must follow? — in order to evaluate research, this grand narrative appeals to the fact that it is based on consensus in order to suggest that it is fair.

The contrast is between ‘us’ deciding on such a recipe ourselves and having such a recipe imposed on ‘us’ from the outside. ‘We’ decided on the Snowball Metrics recipe based on a consultative method. Everything is on the up and up. Something similar seems to be in the works regarding the use of altmetrics. Personally, I have my doubts about the advisability of standardizing altmetrics.

— But what’s the alternative to using a consultative method to arrive at agreed upon standards for measuring research impact? I mean, it’s either that, or anarchy, or imposition from outside — right?! We don’t want to have standards imposed on us, and we can’t allow anarchy, so ….

Yes, yes, QED. I get it — really, I do. And I don’t have a problem with getting together to talk about things. But must that conversation be methodized? And do we have to reach a consensus?

— Without consensus, it’ll be anarchy!

I don’t think so. I think there’s another alternative we’re not considering. And no, it’s not imposition of standards on us from the ‘outside’ that I’m advocating, either. I think there’s a fourth alternative.

SNOWFLAKE INDICATORS

In contrast to Snowball Metrics, Snowflake Indicators are a delicate combination of science and art (as is cooking, for that matter — something that need not involve following a recipe, either! Just a hint for some of those chefs in The Scholarly Kitchen, which sometimes has a tendency to resemble America’s Test Kitchen — a show I watch, along with others, but not so I can simply follow the recipes.). Snowflake Indicators also respect individuality. The point is not to mash the snowflakes together — following the 6-step recipe, of course — to form the perfect snowball. Instead, the point is to let the individual researcher appear as such. In this sense, Snowflake Indicators provide answers to the question of researcher identity. ORCID gets this point, I think.

To say that Snowflake Indicators answer the question of researcher identity is not to suggest that researchers ought to be seen as isolated individuals, however. Who we are is revealed in communication with each other. I really like that Andy Miah’s CV includes a section that lists places in which his work is cited as “an indication of my peer community.” This would count as a Snowflake Indicator.
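One could even compute a rough version of such an indicator. Here is a minimal sketch that pulls the DOIs of citing works from OpenCitations’ COCI index; the data source is my assumption (Miah compiled his list by hand), and the DOI is a placeholder.

```python
import json
import urllib.request

DOI = "10.1000/example"  # hypothetical placeholder; substitute a real DOI

# OpenCitations' COCI index lists, for a given DOI, the works that cite it.
url = f"https://opencitations.net/index/coci/api/v1/citations/{DOI}"
with urllib.request.urlopen(url) as resp:
    citations = json.load(resp)

# Each record's "citing" field holds the DOI of a citing work; together
# these sketch the community paying scholarly attention to the paper.
peer_community = [record["citing"] for record in citations]
print(f"{len(peer_community)} citing works, e.g.:")
for doi in peer_community[:10]:
    print(" ", doi)
```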

Altmetrics might also do the trick, depending on how they’re used. Personally, I find it useful to see who is paying attention to what I write or say. The sort of information provided by Altmetric.com at the article level is great. It gives some indication of the buzz surrounding an article, and provides another sort of indicator of one’s peer community. That helps an individual researcher learn more about her audience — something that helps communication, and thus helps a researcher establish her identity. Being able to use ImpactStory.org to craft a narrative of one’s impact — and it’s especially useful not to be tied down to a DOI sometimes — is also incredibly revealing. Used by an individual researcher to craft a narrative of her research, altmetrics also count as Snowflake Indicators.
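To give a flavor of what’s available at the article level, here is a minimal sketch against Altmetric’s free public v1 API, which returns attention data for a DOI (and a 404 when no attention has been tracked). The placeholder DOI and the particular fields printed are assumptions on my part.

```python
import json
import urllib.error
import urllib.request

DOI = "10.1000/example"  # hypothetical placeholder; use one of your own papers

url = f"https://api.altmetric.com/v1/doi/{DOI}"
try:
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
except urllib.error.HTTPError as err:
    data = None
    print(f"No tracked attention for {DOI} (HTTP {err.code})")

if data:
    # A few article-level signals that hint at who is paying attention.
    for key in ("title", "score", "cited_by_tweeters_count", "readers_count"):
        print(f"{key}: {data.get(key)}")
```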

So, what distinguishes a Snowflake Indicator from a Snowball Metric? It’s tempting to say that it’s the level of measurement. Snowball Metrics are intended for evaluation at a department or university-wide level, or perhaps even at a higher level of aggregation, rather than for the evaluation of individual researchers. Snowflake Indicators, at least in the way I’ve described them above, seem to be aimed at the level of the individual researcher, or even at individual articles. I think there’s something to that, though I also think it might be possible to aggregate Snowflake Indicators in ways that respect idiosyncrasies but that would still allow for meaningful evaluation (more on that in a future post — but for a hint, contrast this advice on making snowballs, where humor and fun make a real difference, with the 6-step process linked above).

But I think that difference in scale misses the really important difference. Where Snowball Metrics aim to make us all comparable, Snowflake Indicators aim to point out the ways in which we are unique — or at least special. Research evaluation, in part, should be about making researchers aware of their own impacts. Research evaluation shouldn’t be punitive; it should be instructive — or at least an opportunity to learn. Research evaluation shouldn’t so much seek to steer research as it should empower researchers to drive their research along the road to impact. Although everyone likes big changes (as long as they’re positive), local impacts should be valued as world-changing, too. Diversity of approaches should also be valued. Any approach to research evaluation that insists we all need to do the same thing is way off track, in my opinion.

I apologize to anyone who was expecting a slick account that lays out the recipe for Snowflake Indicators. I’m not trying to establish rules here. Nor am I insisting that anything goes (there are no rules). If anything, I am engaged in rule-seeking — something as difficult to grasp and hold on to as a snowflake.