The humanities do not need a replication drive

From the fact that a small portion of research in the humanities may be replicable, it does not follow that all research in the humanities ought to be replicable.

A post on the CWTS Leiden Blog, written with Bart Penders and Sarah de Rijcke.

If you’d like a PDF, you can visit Humanities Commons.

On Open Access, Academic Freedom, and Science Policy — A Reply to Suber

I have argued that Plan S, if its 10 principles as currently written were adopted as policy, would impinge on academic freedom. It’s interesting who dismisses this claim out of hand and who actually responds to my argument, even if they disagree with me. I think Peter Suber is a member of the latter camp, even if his responses have not been as long and involved as my exchanges with Stephen Curry.

In what follows, I try to reconstruct Suber’s position on academic freedom and Open Access (OA), to go a bit further in my various attempts at defining academic freedom, and to argue more fully, pace Suber, that Plan S does impinge on academic freedom.

This reply struck me as odd, given Suber’s position on whether university OA mandates impinge on academic freedom:

6) Open access mandates infringe academic freedom

This is true for gold open access but not for green. But if you believe that all open access is gold, then this myth follows as a lemma. Because only about one-third of peer-reviewed journals are open access, requiring researchers to submit new work to open access journals would severely limit their freedom to submit work to the journals of their choice. By contrast, green open access is compatible with publishing in non-open access journals, which means that green open access mandates can respect author freedom to publish where they please. That is why literally all university open access mandates are green, not gold. It’s also why the green/gold distinction is significant, not fussy, and why myths suppressing recognition of green open access are harmful, not merely false.

To be clear, Suber is arguing that the claim that OA mandates infringe academic freedom is a myth. For Suber, the claim would hold only if such policies mandated publication in Gold (journal) venues. Insofar as such policies mandate Green OA (depositing some version of a published work in an institutional repository), they “respect author freedom to publish where they please.” I think it’s safe to say that Suber — at least the Suber of five years ago — held that author freedom to publish where they please is an aspect of academic freedom.

Since Plan S does restrict options for authors regarding where they publish, it seems like Suber ought to conclude that Plan S impinges on academic freedom. But he doesn’t; so, there must be more to the story.

It’s possible Suber has changed his mind about whether the freedom to choose one’s publication venue is part of academic freedom. I don’t think this is the case, though. Here’s more of our Twitter exchange:

So, the relevant principles to which Suber referred me a week ago are as follows:

2.  Universities should not limit the freedom of faculty to submit their work to the journals of their choice.

2.1.  If it weren’t for Principle 2, universities could require faculty to submit their articles to OA journals rather than deposit them in an OA repository (a gold OA mandate rather than a green OA mandate).  But there aren’t yet enough OA journals; there aren’t yet first-rate OA journals in every research niche; and even one day when there are, a university policy to rule out submission to a journal based solely on its business model would needlessly limit faculty freedom.  Not even the urgent need for OA justifies that kind of restriction, as long as we can achieve OA through OA repositories.  That’s why all university and funder OA mandates focus on green OA (through OA repositories) rather than gold OA (through OA journals).

But of course OA journals still deserve support.  See Principle 3.

2.2.  If annotation 2.1 doesn’t stand on its own, it may be because it presupposes another premise.  As I put it elsewhere:  “The purpose of the campaign for OA is the constructive one of providing OA to a larger and larger body of literature, not the destructive one of putting non-OA journals or publishers out of business. The consequences may or may not overlap (this is contingent), but the purposes do not overlap.”

2.3.  If it weren’t for Principle 2, universities could require faculty to deposit some version of their peer-reviewed journal articles in the IR, for OA, with or without an embargo, and faculty would have to avoid journals that did not allow OA archiving on those terms.  But that would needlessly limit faculty freedom to submit to the journals of their choice.  To respect faculty freedom, universities must allow exemptions (waivers, opt-outs) for faculty submitting to journals that do not allow OA archiving on the university’s terms.  However, when enough universities adopt OA mandates, then all journals would have to accommodate them, and therefore the first type of policy (no opt-outs) would no longer limit faculty freedom or violate Principle 2.  But until we approach that point, Principle 2 requires the second type of policy (with opt-outs).  Moreover, allowing an opt-out on OA is compatible with not allowing an opt-out on IR deposits themselves.  See the Appendix for more detail.

2.4.  The strategy to require OA archiving, and to require researchers to avoid publishers that will not allow it, was pioneered by the Wellcome Trust.  The WT’s example has been followed by some other funding agencies, most notably the UK Medical Research Council and the US National Institutes of Health.  Because I support these policies, as well as annotation 2.3, I should therefore point out that Principle 2 is designed for universities, not funding agencies.  Funding agencies are essentially charities, spending money on research because it is in the public interest.  They have an interest in making that research as useful and widely available as possible, and virtually no competing interests.  Universities have the same charitable purpose but many competing interests, such as nurturing researchers more than research projects, nurturing them over their entire careers, and erecting bulwarks of policy and custom to protect academic freedom.

2.5.  If we hasten the day when all or most journals allow postprint archiving, then we hasten the day when universities could adopt no-opt-out OA policies (as opposed to both no-opt-out deposit policies and opt-out OA policies) without violating Principle 2.  One way to do that is for universities to demand the right for postprint archiving when negotiating licensing terms for subscription or renewal.  OhioLink publicly committed itself to this strategy in 2006, the only library consortium I know to do so.  (OhioLink is a consortium of 86 academic libraries in Ohio representing more than 600,000 faculty, students, and staff.)  Several major universities are also trying this strategy, but so far without a public announcement.  Public or private, I recommend that all universities do what they can to negotiate better terms for their authors, not just better terms for their readers.

I included all of the annotations to Principle 2, since I think all are relevant to this discussion. But one thing is clear: the Peter Suber of 10 years ago (author of the principles), of 5 years ago (the mythbuster), and of one week ago (on Twitter) all agree that restricting a researcher’s choice of publication venue would impinge on academic freedom.

So, why would Plan S not impinge on academic freedom, according to Suber? There are actually two answers. One is contained in 2.4, above: “Principle 2 is designed for universities, not funding agencies.” I will get into the details of Suber’s argument for separating universities and funding agencies in a moment. But the bottom line is that Suber here allows funding agencies to adopt policies that would, were those policies adopted by universities, impinge on academic freedom.

Second, Suber holds that although the freedom to choose venue of publication should not be restricted by university policies on pain of impinging on academic freedom, freedom to choose venue of publication is not all there is to academic freedom.

So, although Plan S impinges on researchers’ free choice of publication venue, it does not infringe on what Suber calls the heart of academic freedom: to pursue the truth in teaching and research free from reprisals other than disagreement among academic peers.

I agreed with Suber on Twitter that, if we limit academic freedom to this “heart” definition, then Plan S does not impinge upon it. But what is the argument for limiting our definition of academic freedom in this way? Or does Suber want to say that Plan S infringes on academic freedom, but only at the margins (not the heart)? If the former, I still want to see an argument. If the latter, then why should universities be restricted from infringing on the ‘marginal’ freedom of researchers to choose the venue where they submit manuscripts for publication?

Suber offers a few reasons for thinking that his Principle 2 should apply to universities but not to funding agencies. The first is the claim that, “Funding agencies are essentially charities, spending money on research because it is in the public interest.” Because they are charities, Suber holds, “They have an interest in making that research as useful and widely available as possible, and virtually no competing interests.” Universities, on the other hand, “have the same charitable purpose but many competing interests, such as nurturing researchers more than research projects, nurturing them over their entire careers, and erecting bulwarks of policy and custom to protect academic freedom.” This is a really interesting comparison; but I’m not convinced.

I’m especially not convinced by the claim that universities and funding agencies have the same goals, but that the university has ‘competing interests’ that call into question its commitment to those goals. Maybe the idea here is that universities are more invested in individual researchers and therefore need to erect barriers against societal interference? So, perhaps Suber holds that funding agencies are freer to pursue the goal of benefiting society, since agencies don’t have to worry as much about individual researchers? Although I grant that funding agencies and universities play different roles in knowledge production, I don’t think it’s quite right to suggest that both are charitably aimed at the public good, but that universities also have to prioritize the special interests of their faculty, which sometimes get in the way of their charitable aims.

But let’s run with the idea for a moment and suppose that, where funding agencies are charitable organizations, universities are philanthropic organizations. The difference is that charities provide somewhat more immediate — and one-off — relief for particular individuals suffering from particular problems, while philanthropic organizations take a longer view and try to address underlying issues and attend to systemic causes of problems. Where charities offer tactical interventions to alleviate instances of suffering (providing food or clothing, say, like the Salvation Army), philanthropic organizations are strategic, aiming to eliminate suffering from a particular cause tout court (ending poverty, like the Gates Foundation).

On this interpretation, funding agencies provide grants to benefit individual researchers or research teams, but universities nurture the research enterprise as a whole.

As intriguing as such a comparison might be, it breaks down in various ways. First, charitable organizations don’t give their charity in the expectation of an immediate return on their investment. They might give you food to alleviate your hunger; but they don’t then turn around and say, OK, now that you’ve had your breakfast, what are you going to do for society? Funding agencies do, however, often ask researchers for returns on their investment. I think that’s justified. But that’s because public research funding agencies are not charities. There’s a real difference between the US Department of Agriculture’s Food Distribution Programs and its National Institute of Food and Agriculture. The latter funds research on food, while the former distributes food. Since it’s obvious that public money used for research might be used for other purposes, there is a necessity for public funding agencies to make a case for their benefits to society (aka ‘broader impacts’). I agree that OA policies should be part of that case; but not in ways that hinder rather than empower researchers.

Second, it’s far from clear that funding agencies don’t also try to nurture researchers, sometimes over their whole careers, and that they don’t try to erect “bulwarks of policy and custom” to protect academic freedom. Many researchers experience continuous, career-long support for their research. Agencies even try to nurture early career researchers in their efforts to establish themselves. And there is no greater bulwark of policy than the process of peer review of grant proposals (Holbrook 2017, Baldwin 2018).

I grant that there is a difference in the roles played by funding agencies and universities; but the difference between them is not one that supports the idea that funding agencies can infringe on academic freedom in ways universities cannot (Suber’s principle 2.4). If anything, funding agencies have to be more careful than universities about infringing on academic freedom.

Here I want to invoke something that Robert Post said, and which I think is vital to this conversation: “The most basic point about academic freedom is that I, as a professor, can only be judged by my peers.” This claim expresses the same intuition underlying my earlier appeals to academic norms as key to academic freedom, since it’s about us academics giving ourselves the law (it’s a matter of autonomy). Existing academic norms support the idea that academics should be able to choose the venues to which they submit manuscripts for publication.

For me, this is the key difference between university OA policies and funder mandates. The former — like the policy at Harvard — have been voted on by the faculty. Insofar as it comes from the funders, rather than being voted on by the faculty at universities, Plan S would be an ‘outside’ imposition. If waivers to university policies are essential in order for the university to avoid infringing on academic freedom, they are even more necessary in the case of funding agency OA mandates. Suber’s Principle 2 should apply to funding agencies, as well.

Philosophy and Science Policy: A Report from the Field I

I’m actually going to give a series of reports from the field, including a chapter in a book on Field Philosophy that I’m revising now in light of editor/reviewer comments. In the chapter, I discuss our Comparative Assessment of Peer Review project. For a brief account of Field Philosophy, see the preprint of a manuscript I co-authored with Diana Hicks. That’s also being revised now.

Today, however, I will be focusing on more pressing current events having to do with Plan S. So, I will give a talk at the NJIT Department of Humanities Fall Colloquium Series to try to let my colleagues know what I’ve been up to recently. Here are the slides.

As a philosopher, I am not simply going to offer an objective ‘report’. Instead, I will be offering an argument:

  1. If we want to encourage academic flourishing, then we need new ways of evaluating academic research.
  2. We want to encourage academic flourishing.
  3. Therefore, we need new ways of evaluating academic research.

Of course, the argument also refers to my own activities. I want my department to understand and value my forays into the field of science policy. But that will mean revaluing the way I am currently evaluated (which is along fairly standard lines).

On the “Myth” of Academic Freedom

In a recent post on the F1000 Blog, Rebecca Lawrence suggests that academic freedom is more myth than reality:

Academic freedom?

Other criticisms [of Plan S] focus on possible effects from the point of view of researchers as authors (rather than as readers and users of research) and the so called ‘academic freedom’ restrictions. But current ‘academic freedoms’ are somewhat of a myth, because the existing entrenched system of deciding on funding, promotions and tenure depends more on where you publish, than on what you publish and how your work has value to others. Hence, authors have to try to publish their work in the small subset of journals that are most likely to help their careers.

This scramble to publish the ‘best’ results in the ‘best’ journals causes many problems, including the high cost of such a selective process in these ‘high-impact’ journals, the repeated cost (both actual and time cost) of multiple resubmissions trying to find the ‘right place’ for the publication in the journal hierarchy, and the high opportunity cost.  This, combined with the high proportion of TA journals and the highly problematic growth of hybrid journals not only significantly increases cost, but compromises the goal of universal OA to research results – one of the greatest treasures the society can have and should expect.

We believe that if Plan S is implemented with the strong mandate it currently suggests, it will be a major step towards the goal of universal OA to research results and can greatly reduce overall costs in the scholarly communication system – which will itself bring benefits to researchers as authors and as users of research and indeed increase academic freedom.

I agree that the focus on where we publish rather than what we publish is detrimental to academia in all sorts of ways. When it comes to judging fellow academics’ publication records, too many use the journal title (the linguistic proxy for its impact factor) as a sufficient indicator of the quality of the article. What we should do, instead, is actually read the article. We should also reward academics for publishing in venues that are most likely to reach and impact their intended audiences and for writing in ways that are clearly understandable to non-specialists, when those non-specialists are the intended audience. Instead, we are often too quick to dismiss such publications as non-rigorous.

However, that academics evaluate each other in very messed up ways doesn’t show that academic freedom is a myth. What it shows is that academics aren’t always as thoughtful as we should be about how we exercise our academic freedom.

You're doing it wrong

I’ve never suggested that academic freedom means anything goes (or that you get to publish wherever you want, regardless of what the peer reviewers and editors say). What it does mean, though, is that, to a very large extent, we academics give ourselves the rules under which we operate, at least in terms of research and teaching. Again, I am not suggesting that anything goes. We still have to answer to laws about nepotism, corruption, sexual harassment, or murder. We’re not supposed to speed when we drive, ride our bicycles on the sidewalk, or lie on our taxes. I’m not even suggesting we are very wise about the rules we impose on ourselves.

In fact, I agree with Rebecca that the ways we evaluate each other are riddled with errors. But academic freedom means we have autonomy — give ourselves the law — when it comes to teaching and research. This freedom also comes with responsibilities: we need to teach on the topic of the course, for instance, not spend class time campaigning for our favorite politicians; we shouldn’t plagiarize or fabricate data; I even think we have a duty to try to ensure that our research has an impact on society.

Public funding bodies can obviously place restrictions on us about how we spend those funds. Maybe we’re not allowed to use grant funds to buy alcohol or take our significant others with us on research trips. Public funding bodies can decline to fund our research proposals. Academic freedom doesn’t say I’m entitled to a grant or that I get to spend the money on whatever I want when I get one.

But for public funding bodies to say that I have to publish my research under a CC-BY or equivalent license would cross the line and impinge on academic freedom. Telling me where and how to publish is something I let other academics do, because that’s consistent with academic freedom. I don’t always agree with their decisions. But the decisions of other academics are decisions we academics have to live with — or find a way to change. I want academics to change the rules about how we evaluate each other. Although it seems perfectly reasonable for funding bodies to lay out the rules for determining who gets grants and how money can be spent, I don’t want funding bodies dictating the rules about how we evaluate each other as part of the academic reward system, decisions about promotion, and such. Mandating a CC-BY license crosses that line into heteronomy.


Tools for Serendipity: SHERPA/RoMEO

I really want to post a pre-print of my recently published article in the Journal of Responsible Innovation: “Designing Responsible Research and Innovation as a tool to encourage serendipity could enhance the broader societal impacts of research.” Here’s a link to the published version. One thing about this article that would be obvious if one were to compare the pre-print to the final published version is just how much the latter was improved by peer review and input from the journal editor.

Since I still don’t have an institutional repository at NJIT, I could post it at Humanities Commons. Before I do that, I want to make sure I don’t get sideways with Taylor and Francis. So, the prudent thing to do is to check with SHERPA/RoMEO to see what the journal policies are. The problem, however, is that SHERPA/RoMEO hasn’t yet ‘graded’ JRI, so they don’t tell me what the policies are. This is all sort of understandable, since JRI is still a relatively new journal. Searching an older journal put out by the same publisher, Social Epistemology, tells me that I could post both pre-prints and post-prints — that is, my version, but not the actual publisher’s PDF, of the article after it went through peer review — of articles I published there. So, maybe I could go ahead, assuming that Taylor and Francis policy is consistent across all their journals. Instead, I requested that SHERPA/RoMEO grade JRI.
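For anyone who wants to do this kind of policy lookup programmatically rather than through the website, here is a minimal sketch of what a query by ISSN might look like. The endpoint, parameter names, and response fields are my assumptions based on memory of the SHERPA/RoMEO API documentation, and the API key and ISSN are placeholders, so treat this as a sketch to adapt rather than a working recipe.

```python
# Minimal sketch of a SHERPA/RoMEO lookup by ISSN.
# ASSUMPTIONS: the endpoint, parameter names, and response layout are my best
# recollection of the SHERPA v2 API and may be out of date; the API key and
# ISSN below are placeholders, not real values. Check the official API docs.
import json
import requests

API_KEY = "YOUR-API-KEY"   # placeholder: obtain a key from SHERPA
ISSN = "0000-0000"         # placeholder: the journal's ISSN

response = requests.get(
    "https://v2.sherpa.ac.uk/cgi/retrieve",
    params={
        "item-type": "publication",
        "api-key": API_KEY,
        "format": "Json",
        "filter": json.dumps([["issn", "equals", ISSN]]),
    },
    timeout=30,
)
response.raise_for_status()
data = response.json()

if not data.get("items"):
    # The journal hasn't been 'graded' yet -- my current situation with JRI.
    print("No policy record found for this ISSN.")
else:
    # Dump the archiving-policy portion of each record for manual inspection.
    for item in data["items"]:
        print(json.dumps(item.get("publisher_policy", []), indent=2))
```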

I can wait a while to post the pre-print, and I want to gauge how long it takes to get a grade. I’m also waiting to find out how long it takes for JRI to show up in Scopus (their main ‘about’ page says the journal is indexed in Scopus, but it hasn’t appeared there yet). I’ve also been told that NJIT is getting bepress soon.

All of these — Humanities Commons, SHERPA/RoMEO, bepress — are tools for serendipity in the sense in which I outline the term in this article. As soon as I can let everyone see it, I will!


Thoughts from the Public Philosophy Network 2018 Conference

First, I’ve been away from my own blog for far too long. My apologies. Second, no more ‘Press This’?! Ugh. So, here is a LINK to the full program of PPN 2018.

Most of these thoughts were generated during the day 1 workshop run by Paul Thompson on ‘Evaluating Public Philosophy as Academic Scholarship’. This issue is important for everyone who would like to see public philosophy succeed; but it is vitally important for those of us on the tenure track, since not being able to evaluate public philosophy as academic scholarship often means that it is reduced to a ‘service’ activity. Service, of course, is seen as even less important than teaching, which is often seen as less important than research. This hierarchy may be altered at small liberal arts colleges or others that put special emphasis on teaching. Generally speaking, though, one’s research rules in tenure decisions. I’ve never heard, or even heard of, any advice along the lines of ‘Do more teaching and publish less’ or ‘Make sure you get on more committees or peer review more journal manuscripts’, whereas ‘Just publish more’ is something I hear frequently.

So, it’s vitally important to be able to evaluate public philosophy as academic scholarship.

I want to add that, although many of these ideas were not my own and came from group discussion, I am solely responsible for the way I put them here. I may mess up, but no one else should be blamed for my mistakes. What follows isn’t quite the ‘Survival Guide’ that Michael O’Rourke suggested developing. Instead, it is a list of things I (and perhaps others) would like to see coming from PPN. (This may change what PPN is, of course. Can a network that meets once in a while provide these things?)

We need:

  1. A statement on the value of public philosophy as academic scholarship. [EDIT: The expression of this need came up at the workshop, but no one there mentioned that such a statement already exists HERE from the APA.  Thanks to Jonathan Ellis and Kelly Parker for help in finding it! Apologies to APA for my ignorance.]
  2. A list of scholarly journals that are public philosophy friendly (i.e., where one can submit and publish work that includes public philosophy). The list would need to be curated so that new journals can be added and old ones removed when they fit or don’t fit the bill.
  3. A list of tools for making the case for the value of public philosophy. I have in mind things like altmetrics (see HERE or HERE or HERE), but it could also include building capacity among a set of potential peers who could serve as reviewers for public philosophy scholarship.
  4. Of course, developing a cohort of peers will mean developing a set of community standards for what counts as good public philosophy. I wouldn’t want that imposed from above (somewhere?) and think this will arise naturally if we are able to foster the development of the community.
  5. Some sort of infrastructure for networking. It’s supposedly a network, right? Is there anywhere people can post profiles?
  6. A repository of documents related to promotion and tenure in public philosophy. Katie Plaisance described how she developed a memorandum of understanding detailing that her remarkably collaborative work deserved full credit as research, despite the fact that she works in a field that seems to value sole authorship to the detriment of collaborative research. Katie was awesome and said she would share that document with me. But what if she (or anyone else who did smart and cool things like this to help guarantee their ability to do public philosophy) had a central repository where all these documents could be posted for everyone to view and use? What if departments that have good criteria for promotion and tenure — criteria that allow for or even encourage public philosophy as scholarship — could post them on such a repository as resources for others?
  7. Leadership! Developing and maintaining these (and no doubt others I’ve missed) resources will require leadership, and maybe even money.

I’d be interested in thoughts on this list, including things you think should be added to it.

On Rubrics

Faculty Development

This semester I’m attending a series of Faculty Development Workshops at NJIT designed to assist new faculty with such essentials as teaching, grant writing, publishing, and tenure & promotion.

I’m posting here now in hopes of getting some feedback on a couple of rubrics I developed after attending the second such workshop.

I’m having students give group presentations in my course on Sports, Technology, and Society, and I was searching for ways to help ensure that all members contributed to the group presentation, as well as to differentiate among varying degrees of contribution. Last Tuesday’s workshop focused on assessment, with some treatment of the use of rubrics for both formative and summative assessment. I did a bit more research on my own, and here’s what I’ve come up with.

First, I developed a two-pronged approach. I want to be able to grade the presentation as a whole, as well as each individual’s contribution to that presentation. I decided to make the group presentation grade worth 60% and the individual contribution grade worth 40% of the overall presentation grade.

Second, I developed the group presentation rubric. For this, I owe a debt to several of the rubrics posted by the Eberly Center at Carnegie Mellon University; I found the rubrics for the philosophy paper and the oral presentation particularly helpful. I am thinking about using this rubric both for formative evaluation (to show the students what I expect) and for summative evaluation (actually grading the presentations).

Third, I developed the individual peer assessment rubric. I would actually have the students anonymously fill out one of these for each of their fellow group members. For this rubric, I found a publication from the University of New South Wales to be quite helpful (especially Table 2).
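To make the weighting concrete, here is a minimal sketch of how the two scores might be combined into a final presentation grade. The function and variable names are mine, and averaging the anonymous peer-assessment scores for the individual component is an assumption about how I would aggregate them, not something fixed by the rubrics themselves.

```python
def presentation_grade(group_score, peer_scores):
    """Combine rubric scores into one presentation grade (0-100 scale assumed).

    group_score: the instructor's rubric score for the group presentation as a whole.
    peer_scores: the anonymous rubric scores a student received from each fellow
                 group member (averaging them is my assumption, not the rubric's).
    """
    individual_score = sum(peer_scores) / len(peer_scores)
    # Weighting described above: 60% group presentation, 40% individual contribution.
    return 0.6 * group_score + 0.4 * individual_score

# Example: a strong group presentation with middling peer assessments.
print(presentation_grade(92.0, [80.0, 75.0, 85.0]))  # 0.6*92 + 0.4*80 = 87.2
```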

I’d be quite interested in constructive feedback on this approach.

Publishers withdraw more than 120 gibberish papers : Nature News & Comment

Publishers withdraw more than 120 gibberish papers : Nature News & Comment.

Thanks to one of my students — Addison Amiri — for pointing out this piece by @Richvn.

What a difference a day makes: How social media is transforming scientific debate (with tweets) · deevybee · Storify

This is definitely worth a look, whether you’re into the idea of post-publication peer review or not.

What a difference a day makes: How social media is transforming scientific debate (with tweets) · deevybee · Storify.

Apparently NSF Grant Applicants Still Allergic To Broader Impacts

Pasco Phronesis

The Consortium of Social Science Associations held its Annual Colloquium on Social And Behavioral Sciences and Public Policy earlier this week.  Amongst the speakers was Acting National Science Foundation (NSF) Director Cora Marrett.* As part of her remarks, she addressed how the Foundation was implementing the Coburn Amendment, which added additional criteria to funding political science research projects through NSF.

The first batch of reviews subject to these new requirements took place in early 2013.  In addition to the usual criteria of intellectual merit and broader impacts, the reviewers looked at the ‘most meritorious’ proposals and examined how they contribute to economic development and/or national security.  For the reviews scheduled for early 2014, all three ‘criteria’ will be reviewed at once.

Since researchers don’t like to be told what to do, they aren’t happy.  But Marrett asserts through her remarks that this additional review will not really affect the…
