Apparently NSF Grant Applicants Still Allergic To Broader Impacts

Pasco Phronesis

The Consortium of Social Science Associations held its Annual Colloquium on Social and Behavioral Sciences and Public Policy earlier this week. Amongst the speakers was Acting National Science Foundation (NSF) Director Cora Marrett.* As part of her remarks, she addressed how the Foundation was implementing the Coburn Amendment, which imposed additional criteria on funding political science research projects through NSF.

The first batch of reviews subject to these new requirements took place in early 2013. In addition to the usual criteria of intellectual merit and broader impacts, the reviewers looked at the ‘most meritorious’ proposals and examined how they contribute to economic development and/or national security. For the reviews scheduled for early 2014, all three ‘criteria’ will be reviewed at once.

Since researchers don’t like to be told what to do, they aren’t happy. But Marrett asserted in her remarks that this additional review will not really affect the…


‘Big Data’ Is Bunk, Obama Campaign’s Tech Guru Tells University Leaders – Wired Campus – The Chronicle of Higher Education

“The ‘big’ there is purely marketing,” Mr. Reed said. “This is all fear … This is about you buying big expensive servers and whatnot.”

via 'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders – Wired Campus – The Chronicle of Higher Education.

Also funny what he says about his own education…

PLOS Biology: Expert Failure: Re-evaluating Research Assessment

Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.

via PLOS Biology: Expert Failure: Re-evaluating Research Assessment.

You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.

I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see J. Britt Holbrook (2010), “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press: 321–32; and J. Britt Holbrook (2012), “Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997–2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds. Beijing: People’s Publishing House: 328–62.

Steal This Research Paper! (You Already Paid for It.) | Mother Jones

This is an interesting read on the Open Access movement. Here’s the conclusion, with a quote from Michael Eisen that provides some food for thought.

In the end, his disdain isn’t directed at the publishers who hoard scientific knowledge so much as at his colleagues who let them get away with it. “One of the reasons advances in publishing don’t happen is that people are willing to live with all sorts of crap from journals in order to get the imprimatur the journal title has as a measure of the impact of their work,” Eisen says. “It’s easy to blame Elsevier, right? To think that there’s some big corporation that’s preventing scientists from doing the right thing. It’s just bullshit. Elsevier doesn’t prevent anyone from doing anything. Scientists do this themselves!”

via Steal This Research Paper! (You Already Paid for It.) | Mother Jones.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Two Watersheds for Open Access?

This past week I taught “Two Watersheds,” a chapter from Ivan Illich’s Tools for Conviviality. I got some interesting reactions from my students, most of whom are budding engineers. But that’s not what this post is about.

I do want to talk a bit about Illich’s notion of the two watersheds, however. Illich illustrates the idea with reference to medicine, claiming that 1913 marks the first watershed: that year, one finally had a greater than 50% chance that someone educated in medical school (i.e., a doctor) would be able to prescribe an effective treatment for one’s ailment. At that point, modern medicine had caught up with shamans and witch doctors. It rapidly began to outperform them, however, and people became healthier as a result.

By the mid-1950s, however, something changed. Medicine had begun to treat people as patients, and more resources were devoted to extending unhealthy life than to keeping people healthy or restoring health. Medicine became an institutionalized bureaucracy rather than a calling. Illich picks (admittedly arbitrarily) 1955 to mark this second watershed.

Illich’s account of the two watersheds in medicine is applicable to other technological developments as well.

A couple of weeks ago, Richard Van Noorden published a piece in Nature under the headline “Half of 2011 papers now free to read.” Van Noorden does a good job of laying out the complexities of this claim (‘free’ is not necessarily equivalent to ‘open access’, the robot used to gather the data may not be accurate, and so on), which was made in a report to the European Commission. But the most interesting question raised in the piece is whether the 50% figure represents a “tipping point” for open access.

The report, which was not peer reviewed, calls the 50% figure for 2011 a “tipping point”, a rhetorical flourish that [Peter] Suber is not sure is justified. “The real tipping point is not a number, but whether scientists make open access a habit,” he says.

I’m guessing that Illich might agree both with the report and with Suber’s criticism, but that he might also disagree with both. But let’s not kid ourselves here: I’m talking more about myself than I am about Illich — just using his idea of the two watersheds to make a point.

The report simply defines the tipping point as more than 50% of papers available for free. This is close enough to the way Illich defines the first watershed in medicine. So, let’s suppose, for the sake of argument, that what the report claims is true. Then we can say that 2011 marks the first watershed of open access publishing.

What should we expect? There’s a lot of hand-wringing from traditional scholarly publishers about what open access will do to their business model (blow it up, basically). But many of the outcomes that the strongest advocates of open access promise in urging us to make it a habit will likely come to pass. Research will become more efficient. Non-researchers will be able to read the research without restriction (no subscription required, no paywall encountered). If they can’t understand a piece of research, they’ll be able to sign up for a MOOC offered by Harvard or MIT or Stanford and figure it out. Openness in general will increase, along with scientific and technological (and maybe even artistic and philosophical) literacy.

Yes, for-profit scholarly publishers and most colleges and universities will end up in the same boat as the shamans and witch doctors once medicine took over in 1913. But aren’t we better off now than when one had only folk remedies and faith to rely on when one got sick?

Perhaps during this time, after the first watershed and before the second, open access can become a habit for researchers, much like getting regular exercise and eating right became habits after medicine’s first watershed. Illich’s claim is that the good times following the first watershed really are good for most of us … for a while.

Of course, there are exceptions. Shamans and witch doctors had their business models disrupted. Open access is likely to do the same for scholarly publishers. MOOCs may do the same for many universities. But universities and publishers will not go away overnight. In fact, we still have witch doctors these days.

The real question is not whether a number or a behavior marks the tipping point — crossing the first watershed. Nor is the question what scholarly publishers and universities will do if 2011 indeed marks the first watershed of openness. The real question is whether we can design policies for openness that prevent us from reaching the second watershed, when openness goes beyond a healthy habit and becomes a bane. Because once openness becomes an institutionalized bureaucracy, we won’t be talking only about peer reviewed journal articles being openly, easily, and freely accessible to anyone for use and reuse.

Andy Stirling on why the precautionary principle matters | Science | guardian.co.uk

SPRU Professor Andy Stirling is beginning a series in The Guardian on the precautionary principle. Stirling’s first article paints an optimistic picture:

Far from the pessimistic caricature, precaution actually celebrates the full depth and potential for human agency in knowledge and innovation. Blinkered risk assessment ignores both positive and negative implications of uncertainty. Though politically inconvenient for some, precaution simply acknowledges this scope and choice. So, while mistaken rhetorical rejections of precaution add further poison to current political tensions around technology, precaution itself offers an antidote – one that is in the best traditions of rationality. By upholding both scientific rigour and democratic accountability under uncertainty, precaution offers a means to help reconcile these increasingly sundered Enlightenment cultures.

via Why the precautionary principle matters | Andy Stirling | Science | guardian.co.uk.

Stirling’s work on the precautionary principle is some of the best out there, and Adam Briggle and I cite him in our working paper on the topic. I look forward to reading the rest of Stirling’s series. Although I’m a critic of the Enlightenment, I don’t reject it wholesale. In fact, I think rational engagement with the thinkers of the Enlightenment — and some of its most interesting heirs, including Stirling and Steve Fuller, who’s a proponent of proaction over precaution — is important. So, stay tuned for more!

Science, Freedom, and the American Way | iCHSTM 2013 blog

Conferences, lecture tours, exchange programs, textbook translations, and science clubs promoted the idea that science functions best without government oversight. More than a vague postwar ideology, this was official U.S. policy, both at home and abroad. Of course, in reality, U.S. investments in applied R&D, particularly for military applications, dwarfed funding for basic research by several orders of magnitude, but this fact did not deter American science attachés, State Department science advisors, embassy officials, and other low-level diplomats from actively promoting a vision of science that stressed independent, undirected scientific research.

But with the end of the Cold War, scientific self-governance no longer packs the same ideological punch. Appeals to scientific freedom are comfortable and familiar, but they’re not going to save the NSF.

via Science, Freedom, and the American Way | iCHSTM 2013 blog.

Snowflake Indicators | Postmodern Research Evaluation | Part 5 of ?

No two snowflakes are alike. No two people are the same.

— Horshack

[Image: Snowflakes by Juliancolton2 on flickr]

Earlier posts in this series attempted to lay out the ways in which Snowball Metrics present themselves as a totalizing grand narrative of research evaluation. Along with attempting to establish a “recipe” that anyone can follow — or that everyone must follow? — in order to evaluate research, this grand narrative appeals to its basis in consensus to suggest that it is fair.

The contrast is between ‘us’ deciding on such a recipe ourselves and having one imposed on ‘us’ from the outside. ‘We’ decided on the Snowball Metrics recipe based on a consultative method. Everything is on the up and up. Something similar seems to be in the works regarding the use of altmetrics. Personally, I have my doubts about the advisability of standardizing altmetrics.

— But what’s the alternative to using a consultative method to arrive at agreed upon standards for measuring research impact? I mean, it’s either that, or anarchy, or imposition from outside — right?! We don’t want to have standards imposed on us, and we can’t allow anarchy, so ….

Yes, yes, QED. I get it — really, I do. And I don’t have a problem with getting together to talk about things. But must that conversation be methodized? And do we have to reach a consensus?

— Without consensus, it’ll be anarchy!

I don’t think so. I think there’s another alternative we’re not considering. And no, it’s not imposition of standards on us from the ‘outside’ that I’m advocating, either. I think there’s a fourth alternative.

SNOWFLAKE INDICATORS

In contrast to Snowball Metrics, Snowflake Indicators are a delicate combination of science and art (as is cooking, for that matter — something that need not involve following a recipe, either! Just a hint for some of those chefs in The Scholarly Kitchen, which sometimes has a tendency to resemble America’s Test Kitchen — a show I watch, along with others, but not so I can simply follow the recipes). Snowflake Indicators also respect individuality. The point is not to mash the snowflakes together — following the 6-step recipe, of course — to form the perfect snowball. Instead, the point is to let the individual researcher appear as such. In this sense, Snowflake Indicators provide answers to the question of researcher identity. ORCID gets this point, I think.

To say that Snowflake Indicators answer the question of researcher identity is not to suggest that researchers ought to be seen as isolated individuals, however. Who we are is revealed in communication with each other. I really like that Andy Miah’s CV includes a section that lists places in which his work is cited as “an indication of my peer community.” This would count as a Snowflake Indicator.

Altmetrics might also do the trick, depending on how they’re used. Personally, I find it useful to see who is paying attention to what I write or say. The sort of information provided by Altmetric.com at the article level is great. It gives some indication of the buzz surrounding an article and provides another sort of indicator of one’s peer community. That helps an individual researcher learn more about her audience — something that improves communication and thus helps her establish her identity. Being able to use ImpactStory.org to craft a narrative of one’s impact — and it’s especially useful not to be tied down to a DOI sometimes — is also incredibly revealing. Used by an individual researcher to craft a narrative of her research, altmetrics also count as Snowflake Indicators.

So, what distinguishes a Snowflake Indicator from a Snowball Metric? It’s tempting to say that it’s the level of measurement. Snowball Metrics are intended for evaluation at a department or university-wide level, or perhaps even at a higher level of aggregation, rather than for the evaluation of individual researchers. Snowflake Indicators, at least in the way I’ve described them above, seem to be aimed at the level of the individual researcher, or even at individual articles. I think there’s something to that, though I also think it might be possible to aggregate Snowflake Indicators in ways that respect idiosyncrasies but that would still allow for meaningful evaluation (more on that in a future post — but for a hint, contrast this advice on making snowballs, where humor and fun make a real difference, with the 6-step process linked above).

But I think that focusing on scale misses the really important difference. Where Snowball Metrics aim to make us all comparable, Snowflake Indicators aim to point out the ways in which we are unique — or at least special. Research evaluation, in part, should be about making researchers aware of their own impacts. Research evaluation shouldn’t be punitive; it should be instructive — or at least an opportunity to learn. Research evaluation shouldn’t so much seek to steer research as empower researchers to drive their research along the road to impact. Although everyone likes big changes (as long as they’re positive), local impacts should be valued as world-changing, too. Diversity of approaches should also be valued. Any approach to research evaluation that insists we all do the same thing is way off track, in my opinion.

I apologize to anyone who was expecting a slick account that lays out the recipe for Snowflake Indicators. I’m not trying to establish rules here. Nor am I insisting that anything goes (there are no rules). If anything, I am engaged in rule-seeking — something as difficult to grasp and hold on to as a snowflake.

NISO to Develop Standards and Recommended Practices for Altmetrics – National Information Standards Organization

Can we talk about this? Or if I suggest standards are a double-edged sword, will no one listen?

“For altmetrics to move out of its current pilot and proof-of-concept phase, the community must begin coalescing around a suite of commonly understood definitions, calculations, and data sharing practices,” states Todd Carpenter, NISO Executive Director. “Organizations and researchers wanting to apply these metrics need to adequately understand them, ensure their consistent application and meaning across the community, and have methods for auditing their accuracy. We must agree on what gets measured, what the criteria are for assessing the quality of the measures, at what granularity these metrics are compiled and analyzed, how long a period the altmetrics should cover, the role of social media in altmetrics, the technical infrastructure necessary to exchange this data, and which new altmetrics will prove most valuable. The creation of altmetrics standards and best practices will facilitate the community trust in altmetrics, which will be a requirement for any broad-based acceptance, and will ensure that these altmetrics can be accurately compared and exchanged across publishers and platforms.”

“Sensible, community-informed, discipline-sensitive standards and practices are essential if altmetrics are to play a serious role in the evaluation of research,” says Joshua M. Greenberg, Director of the Alfred P. Sloan Foundation’s Digital Information Technology program. “With its long history of crafting just such standards, NISO is uniquely positioned to help take altmetrics to the next level.”

via NISO to Develop Standards and Recommended Practices for Altmetrics – National Information Standards Organization.

The post on Snowflake Indicators is coming …