Philosophy and Science Policy: A Report from the Field I

I’m actually going to give a series of reports from the field, including a chapter in a book on Field Philosophy that I’m revising now in light of editor/reviewer comments. In the chapter, I discuss our Comparative Assessment of Peer Review project. For a brief account of Field Philosophy, see the preprint of a manuscript I co-authored with Diana Hicks. That’s also being revised now.

Today, however, I will be focusing on more pressing current events having to do with Plan S. I will be giving a talk at the NJIT Department of Humanities Fall Colloquium Series to let my colleagues know what I’ve been up to recently. Here are the slides.

As a philosopher, my approach is not simply to offer an objective ‘report’. Instead, I will be offering an argument.

If we want to encourage academic flourishing, then we need new ways of evaluating academic research. We want to encourage academic flourishing. Therefore, we need new ways of evaluating academic research.
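For readers who like their syllogisms explicit, the argument has the form of modus ponens. As a sketch only, with `Flourishing` and `NewEvaluation` as hypothetical stand-in propositions (not names from the original post), it can be rendered in Lean:

```lean
-- A sketch of the argument's logical form (modus ponens).
-- `Flourishing` stands in for "we want to encourage academic flourishing";
-- `NewEvaluation` stands in for "we need new ways of evaluating academic research".
variable (Flourishing NewEvaluation : Prop)

-- Premise 1: if we want academic flourishing, we need new evaluation methods.
-- Premise 2: we do want academic flourishing.
-- Conclusion: we need new ways of evaluating academic research.
example (h1 : Flourishing → NewEvaluation) (h2 : Flourishing) : NewEvaluation :=
  h1 h2
```

The formalization adds nothing to the argument's content; it simply makes clear that the conclusion follows if one grants both premises, so any resistance must target a premise.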

Of course, the argument also refers to my own activities. I want my department to understand and value my forays into the field of science policy. But that will mean rethinking the way I am currently evaluated (which follows fairly standard lines).

Mr. Smits Goes to Washington

News article just published today in Nature.

What’s ‘unethical’ about Plan S?

In a recent blog post, my co-authors and I refer to Plan S as ‘unethical’. Doing so has upset Marc Schiltz, President of Science Europe.

Schiltz claims that disagreeing with some, or even many, aspects of Plan S does not in itself justify calling Plan S ‘unethical’. I completely agree. To justify calling Plan S ‘unethical’ would require more than simply disagreeing with some aspect of Plan S.

What more would be required? Calling Plan S ‘unethical’ would require an argument that shows that Plan S has violated some sort of ethical norm or crossed some sort of ethical line. Insofar as Plan S impinges on academic freedom, it has done just that.

Academic freedom is a contentious topic in and of itself, but particularly so when engaging in discussions about Open Access (OA). Part of the reason for the heightened tension surrounding academic freedom and OA is the perception that for-profit publishers have appealed to academic freedom to pummel OA advocates, portraying them as invaders of academics’ territory and themselves as defenders of academic freedom. As a result, anyone who appeals to academic freedom in an OA discussion runs the risk of being dismissed by OA advocates as an enemy in league with the publishers.

It’s also the case that academic freedom means different things in different contexts. In some countries, such as the UK and Germany, academic freedom is written into law. In the US, the AAUP’s statements are the main source people use to define academic freedom. I’m a philosopher and an ethicist, not a lawyer. I’m also an American working at an American university, so my own conception of academic freedom is influenced by — but not exactly the same as — the AAUP definition. In short, I approach academic freedom as an ethical norm of academia, rather than in terms of a legal framework. No doubt there are good reasons for such laws in different contexts; but academic freedom would be a thing — an ethical thing — even if there were no laws about it.

I won’t rehash the whole argument from our original post here. I direct interested parties to the sections of the blog under the sub-heading, “The problem of violating academic freedom.” If I had it to do over again, I would suggest to my coauthors altering some of the language in that section; but the bottom line remains the same — Plan S violates academic freedom. Insofar as Plan S violates academic freedom, it violates an ethical norm of academia. Hence, Plan S is unethical.

This is not to say that OA is unethical or necessarily violates academic freedom. I have argued in the past that OA need not violate academic freedom. In the recent flurry of discussion of Plan S on Twitter, Peter Suber pointed me to the carefully crafted Harvard OA policy’s answer to the academic freedom question. That policy meticulously avoids violating academic freedom (and would therefore count, for me, as an ethical OA policy).

To say that Plan S is unethical is simply to say that some aspects of it violate academic freedom. Some of these problems are easy to fix. Take, for instance, Principle #1.

Authors retain copyright of their publication with no
restrictions. All publications must be published under
an open license, preferably the Creative Commons
Attribution Licence CC BY. In all cases, the license
applied should fulfil the requirements defined by the
Berlin Declaration;

The violation of academic freedom in Principle #1 is contained in the last clause: “In all cases, the license applied should fulfil [sic] the requirements defined by the Berlin Declaration.” Because the Berlin Declaration actually requires an equivalent of the CC-BY license, that clause totally undermines the “preferably” in the previous clause. If Plan S merely expressed a strong preference for CC-BY or the equivalent, but allowed researchers to choose from among more restrictive licenses on a case-by-case basis, Principle #1 would not violate academic freedom. The simple fix is to remove the last clause of Principle #1.

Other issues are less easily fixed. In particular, I have in mind Schiltz’s Preamble to Plan S. There, Schiltz argues as follows.

We recognise that researchers need to be given a maximum
of freedom to choose the proper venue for publishing
their results and that in some jurisdictions this freedom
may be covered by a legal or constitutional protection.
However, our collective duty of care is for the science system
as a whole, and researchers must realise that they are
doing a gross disservice to the institution of science if they
continue to report their outcomes in publications that will
be locked behind paywalls.

I won’t rehash here the same argument my co-authors and I put forth in our initial blog post. Instead, I have a couple of other things to say about Schiltz’s position, as expressed in this quote.

First, I have absolutely no objection on academic freedom grounds to making all of my research freely available (gratis) and removing paywalls. I agree that researchers have a duty to make their work freely available, if possible. Insofar as Plan S allows researchers to retain their copyrights and enables gratis OA, it’s a good thing, even an enhancer of academic freedom. The sticking point is mandating a CC-BY or equivalent license, which unethically limits the freedom of academics to choose from a broad range of possible licenses (libre is not a single license, but a range of possible ones). Fix Principle #1, and this particular violation of academic freedom disappears.

Second, there’s a trickier issue concerning individual freedom and group obligations. I discussed the issue in greater detail here. But the crux of the matter is that Schiltz here displays a marked preference for the rights of the group (or even of the impersonal “science system as a whole”) over the rights of individual members of the group. That position may be ethically defensible, but Schiltz here simply asserts that the duty to science overrides concerns for academic freedom. Simply asserting that one duty trumps another does a good job of communicating where someone stands on the issue. However, it provides absolutely no support for their position.

Insofar as Plan S is designed on the basis of an undefended assertion that our collective duty to the science system as a whole outweighs our right as individuals to academic freedom, Plan S impinges on academic freedom. In doing so, Plan S violates an ethical norm of academia. Therefore, Plan S, as written, is unethical.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy

The tracking of the use of research has become central to the measurement of research impact. While historically this tracking has meant using citations to published papers, the results are old, biased, and inaccessible – and stakeholders need current data to make funding decisions. We can do much better. Today’s users of research interact with that research online. This leaves an unprecedented data trail that can provide detailed data on the attention that specific research outputs, institutions, or domains receive.

However, while the promise of real-time information is tantalizing, the collection of this data is outstripping our knowledge of how best to use it, our understanding of its utility across differing research domains, and our ability to address the privacy and confidentiality issues it raises. This is particularly true in the Humanities and Social Sciences, fields which have historically been underrepresented in the collection of scientific citation corpora, and which are now underrepresented by the tools and analysis approaches being developed to track the use of and attention received by STM research outputs.

We will convene a meeting that combines a discussion of the state of the art in one way in which research impact can be measured – article-level metrics and altmetrics – with a critical analysis of current gaps and identification of ways to address them in the context of the Humanities and Social Sciences.

via Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy.

Reflections on the 2014 Carey Lecture at the AAAS Forum on S&T Policy

Cherry A. Murray delivered the Carey Lecture last night at this year’s AAAS Forum on S&T Policy. I want to address one aspect of her talk here — the question of transdisciplinarity (TD, which I will also use for the adjective ‘transdisciplinary’) and its necessity to address the ‘big’ questions facing us.

As far as I could tell, Murray was working with her own definitions of disciplinary (D), multidisciplinary (MD), interdisciplinary (ID), and TD. In brief, according to Murray, D refers to single-discipline approaches to a problem, ID refers to two disciplines working together on the same problem, MD refers to more than two disciplines focused on the same problem from their own disciplinary perspectives, and TD refers to more than two disciplines working together on the same problem. Murray also used the term cross-disciplinary, which she did not define (to my recollection).

All these definitions are cogent. But do we really need a different term for two disciplines working on a problem together (ID) and more than two disciplines working on a problem together (TD)? Wouldn’t it be simpler just to use ID for more than one discipline?

I grant that there is no universally agreed-upon definition of these terms (D, MD, ID, and TD). But basically no one who writes about these issues uses the definitions Murray proposed. And there is something like a rough consensus on what these terms mean, despite the lack of universal agreement. I discuss this consensus, and what these definitions mean for the issue of communication (and, by extension, cooperation) between and among disciplines, here: 10.1007/s11229-012-0179-7

I tend to agree that TD is a better approach to solving complex problems. But in saying this, I mean more than involving more than two disciplines. I mean involving non-academic, and hence non-disciplinary, actors in the process. It’s actually closer to the sort of design thinking that Bob Schwartz discussed in the second Science + Art session yesterday afternoon.

One might ask whether this discussion of terms is a distraction from Murray’s main point — that we need to think about solutions to the ‘big problems’ we face. I concede the point. But that is all the more reason to get our terms right, or at least to co-construct a new language for talking about what sort of cooperation is needed. There is a literature out there on ID/TD, and Murray failed to engage it. To point out that failure is not to make a disciplinary criticism of Murray (as if there might be a discipline of ID/TD, a topic I discuss here). It is to suggest, however, that inventing new terms on one’s own is not conducive to the sort of communication necessary to tackle the ‘big’ questions.

AAAS Forum on Science and Technology Policy

Those not in attendance can follow along on Twitter using the hashtag #AAASforum.

Ahead of the Curve // John J. Reilly Center // University of Notre Dame

Ahead Of The Curve: Anticipating Ethical, Legal, and Societal Issues Posed by Emerging Weapons Technologies

April 22-23, 2014

University of Notre Dame

“Ahead of the Curve” will provide a forum to discuss the “action-oriented” chapters of the soon-to-be-released National Academy of Sciences report, “Emerging and Readily Available Technologies and National Security.” The report was commissioned by the Defense Advanced Research Projects Agency (DARPA) in order to begin a discussion about the conduct and applications of research on military technology, as well as their unforeseen and inadvertent consequences. Speakers will include members of the NAS committee that wrote the report, along with distinguished experts on the ethics, law, and social impacts of new weapons technologies and representatives of agencies and organizations that are home to cutting-edge weapons research.

Presentations will address the ethical, legal, and societal issues that policy makers, researchers, and industries need to anticipate as new technologies arise, specifically in fields such as robotics, autonomous systems, prosthetics and human enhancement, cyber weapons, information warfare technologies, synthetic biology, and nanotechnology. Our primary goal is to help government agencies, institutions, and researchers grow the expertise necessary for early and continuing engagement with the ethical, legal, and societal implications of new weapons technologies as they are planned and developed. We also aim to generate a broad public audience for the NAS report, this being an area in which public education is necessary, as is elevating the level of factually well-informed public discourse.

via Ahead of the Curve // John J. Reilly Center // University of Notre Dame.