On the “Myth” of Academic Freedom

In a recent post on the F1000 Blog, Rebecca Lawrence suggests that academic freedom is more myth than reality:

Academic freedom?

Other criticisms [of Plan S] focus on possible effects from the point of view of researchers as authors (rather than as readers and users of research) and the so called ‘academic freedom’ restrictions. But current ‘academic freedoms’ are somewhat of a myth, because the existing entrenched system of deciding on funding, promotions and tenure depends more on where you publish, than on what you publish and how your work has value to others. Hence, authors have to try to publish their work in the small subset of journals that are most likely to help their careers.

This scramble to publish the ‘best’ results in the ‘best’ journals causes many problems, including the high cost of such a selective process in these ‘high-impact’ journals, the repeated cost (both actual and time cost) of multiple resubmissions trying to find the ‘right place’ for the publication in the journal hierarchy, and the high opportunity cost.  This, combined with the high proportion of TA journals and the highly problematic growth of hybrid journals not only significantly increases cost, but compromises the goal of universal OA to research results – one of the greatest treasures the society can have and should expect.

We believe that if Plan S is implemented with the strong mandate it currently suggests, it will be a major step towards the goal of universal OA to research results and can greatly reduce overall costs in the scholarly communication system – which will itself bring benefits to researchers as authors and as users of research and indeed increase academic freedom.

I agree that the focus on where we publish rather than what we publish is detrimental to academia in all sorts of ways. When it comes to judging fellow academics’ publication records, too many use the journal title (the linguistic proxy for its impact factor) as a sufficient indicator of the quality of the article. What we should do, instead, is actually read the article. We should also reward academics for publishing in venues that are most likely to reach and impact their intended audiences and for writing in ways that are clearly understandable to non-specialists, when those non-specialists are the intended audience. Instead, we are often too quick to dismiss such publications as non-rigorous.

However, that academics evaluate each other in very messed up ways doesn’t show that academic freedom is a myth. What it shows is that academics aren’t always as thoughtful as we should be about how we exercise our academic freedom.

You're doing it wrong

I’ve never suggested that academic freedom means anything goes (or that you get to publish wherever you want, regardless of what the peer reviewers and editors say). What it does mean, though, is that, to a very large extent, we academics give ourselves the rules under which we operate, at least in terms of research and teaching. Again, I am not suggesting that anything goes. We still have to answer to laws about nepotism, corruption, sexual harassment, or murder. We’re not supposed to speed when we drive, ride our bicycles on the sidewalk, or lie on our taxes. I’m not even suggesting we are very wise about the rules we impose on ourselves.

In fact, I agree with Rebecca that the ways we evaluate each other are riddled with errors. But academic freedom means we have autonomy — give ourselves the law — when it comes to teaching and research. This freedom also comes with responsibilities: we need to teach on the topic of the course, for instance, not spend class time campaigning for our favorite politicians; we shouldn’t plagiarize or fabricate data; I even think we have a duty to try to ensure that our research has an impact on society.

Public funding bodies can obviously place restrictions on us about how we spend those funds. Maybe we’re not allowed to use grant funds to buy alcohol or take our significant others with us on research trips. Public funding bodies can decline to fund our research proposals. Academic freedom doesn’t say I’m entitled to a grant or that I get to spend the money on whatever I want when I get one.

But for public funding bodies to say that I have to publish my research under a CC-BY or equivalent license would cross the line and impinge on academic freedom. Telling me where and how to publish is something I let other academics do, because that’s consistent with academic freedom. I don’t always agree with their decisions. But the decisions of other academics are decisions we academics have to live with — or find a way to change. I want academics to change the rules about how we evaluate each other. Although it seems perfectly reasonable for funding bodies to lay out the rules for determining who gets grants and how money can be spent, I don’t want funding bodies dictating the rules about how we evaluate each other as part of the academic reward system, decisions about promotion, and such. Mandating a CC-BY license crosses that line into heteronomy.

What’s ‘unethical’ about Plan S?

In a recent blog post, my co-authors and I refer to Plan S as ‘unethical’. Doing so has upset Marc Schiltz, President of Science Europe.

Schiltz claims that disagreeing with some, or even many, aspects of Plan S does not in itself justify calling Plan S ‘unethical’. I completely agree. To justify calling Plan S ‘unethical’ would require more than simply disagreeing with some aspect of Plan S.

What more would be required? Calling Plan S ‘unethical’ would require an argument that shows that Plan S has violated some sort of ethical norm or crossed some sort of ethical line. Insofar as Plan S impinges on academic freedom, it has done just that.

Academic freedom is a contentious topic in and of itself, but particularly so when engaging in discussions about Open Access (OA). Part of the reason for the heightened tension surrounding academic freedom and OA is the perception that for-profit publishers have appealed to academic freedom to pummel OA advocates, portraying them as invaders of academics’ territory and themselves as defenders of academic freedom. As a result, anyone who appeals to academic freedom in an OA discussion runs the risk of being dismissed by OA advocates as an enemy in league with the publishers.

It’s also the case that academic freedom means different things in different contexts. In some countries, such as the UK and Germany, academic freedom is written into law. In the US, the AAUP is the main source people use to define academic freedom. I’m a philosopher and an ethicist, not a lawyer. I’m also an American working at an American university, so my own conception of academic freedom is influenced by — but not exactly the same as — the AAUP definition. In short, I approach academic freedom as expressing an ethical norm of academia, rather than in terms of a legal framework. No doubt there are good reasons for such laws in different contexts; but academic freedom would be a thing — an ethical thing — even if there were no laws about it.

I won’t rehash the whole argument from our original post here. I direct interested parties to the sections of the blog under the sub-heading, “The problem of violating academic freedom.” If I had it to do over again, I would suggest to my coauthors altering some of the language in that section; but the bottom line remains the same — Plan S violates academic freedom. Insofar as Plan S violates academic freedom, it violates an ethical norm of academia. Hence, Plan S is unethical.

This is not to say that OA is unethical or necessarily violates academic freedom. I have argued in the past that OA need not violate academic freedom. In the recent flurry of discussion of Plan S on Twitter, Peter Suber pointed me to the carefully crafted Harvard OA policy’s answer to the academic freedom question. That policy meticulously avoids violating academic freedom (and would therefore count, for me, as an ethical OA policy).

To say that Plan S is unethical is simply to say that some aspects of it violate academic freedom. Some are easy to fix. Take, for instance, Principle #1.

Authors retain copyright of their publication with no
restrictions. All publications must be published under
an open license, preferably the Creative Commons
Attribution Licence CC BY. In all cases, the license
applied should fulfil the requirements defined by the
Berlin Declaration;

The violation of academic freedom in Principle #1 is contained in the last clause: “In all cases, the license applied should fulfil [sic] the requirements defined by the Berlin Declaration.” Because the Berlin Declaration actually requires an equivalent of the CC-BY license, that clause totally undermines the “preferably” in the previous clause. If Plan S merely expressed a strong preference for CC-BY or the equivalent, but allowed researchers to choose from among more restrictive licenses on a case-by-case basis, Principle #1 would not violate academic freedom. The simple fix is to remove the last clause of Principle #1.

Other issues are less easily fixed. In particular, I have in mind Schiltz’s Preamble to Plan S. There, Schiltz argues as follows.

We recognise that researchers need to be given a maximum
of freedom to choose the proper venue for publishing
their results and that in some jurisdictions this freedom
may be covered by a legal or constitutional protection.
However, our collective duty of care is for the science system
as a whole, and researchers must realise that they are
doing a gross disservice to the institution of science if they
continue to report their outcomes in publications that will
be locked behind paywalls.

I won’t rehash here the same argument my co-authors and I put forth in our initial blog post. Instead, I have a couple of other things to say about Schiltz’s position, as expressed in this quote.

First, I have absolutely no objection on academic freedom grounds to making all of my research freely available (gratis) and removing paywalls. I agree that researchers have a duty to make their work freely available, if possible. Insofar as Plan S allows researchers to retain their copyrights and enables gratis OA, it’s a good thing, even an enhancer of academic freedom. The sticking point is mandating a CC-BY or equivalent license, which unethically limits the freedom of academics to choose from a broad range of possible licenses (libre is not a single license, but a range of possible ones). Fix Principle #1, and this particular violation of academic freedom disappears.

Second, there’s a trickier issue concerning individual freedom and group obligations. I discussed the issue in greater detail here. But the crux of the matter is that Schiltz here displays a marked preference for the rights of the group (or even of the impersonal “science system as a whole”) over the rights of individual members of the group. That position may be ethically defensible, but Schiltz simply asserts that the duty to science overrides concerns for academic freedom. Simply asserting that one duty trumps another communicates where someone stands on the issue; it provides no support for that position.

Insofar as Plan S is designed on the basis of an undefended assertion that our collective duty to the science system as a whole outweighs our right as individuals to academic freedom, Plan S impinges on academic freedom. In doing so, Plan S violates an ethical norm of academia. Therefore, Plan S, as written, is unethical.

Thoughts from the Public Philosophy Network 2018 Conference

First, I’ve been away from my own blog for far too long. My apologies. Second, no more ‘Press This’?! Ugh. So, here is a LINK to the full program of PPN 2018.

Most of these thoughts were generated during Paul Thompson’s day 1 workshop, ‘Evaluating Public Philosophy as Academic Scholarship’. This issue is important for everyone who would like to see public philosophy succeed; but it is vitally important for those of us on the tenure track, since not being able to evaluate public philosophy as academic scholarship often means that it is reduced to a ‘service’ activity. Service, of course, is seen as even less important than teaching, which is often seen as less important than research. This hierarchy may be altered at small liberal arts colleges or others that put special emphasis on teaching. Generally speaking, though, one’s research rules in tenure decisions. I’ve never heard, or even heard of, advice along the lines of ‘Do more teaching and publish less’ or ‘Make sure you get on more committees or peer review more journal manuscripts’. ‘Just publish more’, by contrast, is something I hear frequently.

So, it’s vitally important to be able to evaluate public philosophy as academic scholarship.

I want to add that, although many of these ideas were not my own and came from group discussion, I am solely responsible for the way I put them here. I may mess up, but no one else should be blamed for my mistakes. What follows isn’t quite the ‘Survival Guide’ that Michael O’Rourke suggested developing. Instead, it is a list of things I (and perhaps others) would like to see coming from PPN. (This may change what PPN is, of course. Can a network that meets once in a while provide these things?)

We need:

  1. A statement on the value of public philosophy as academic scholarship. [EDIT: The expression of this need came up at the workshop, but no one there mentioned that such a statement already exists HERE from the APA.  Thanks to Jonathan Ellis and Kelly Parker for help in finding it! Apologies to APA for my ignorance.]
  2. A list of scholarly journals that are public philosophy friendly (i.e., where one can submit and publish work that includes public philosophy). The list would need to be curated so that new journals can be added and old ones removed when they fit or don’t fit the bill.
  3. A list of tools for making the case for the value of public philosophy. I have in mind things like altmetrics (see HERE or HERE or HERE), but it could also include building capacity among a set of potential peers who could serve as reviewers for public philosophy scholarship.
  4. Of course, developing a cohort of peers will mean developing a set of community standards for what counts as good public philosophy. I wouldn’t want that imposed from above (somewhere?) and think this will arise naturally if we are able to foster the development of the community.
  5. Some sort of infrastructure for networking. It’s supposedly a network, right? Is there anywhere people can post profiles?
  6. A repository of documents related to promotion and tenure in public philosophy. Katie Plaisance described how she developed a memorandum of understanding detailing the fact that her remarkably collaborative work deserved full credit as research, despite the fact that she works in a field that seems to value sole-authorship to the detriment of collaborative research. Katie was awesome and said she would share that document with me. But what if she, and everyone else who does smart and cool things like this to help guarantee their ability to do public philosophy, had a central repository where all these documents could be posted for everyone to view and use? What if departments that have good criteria for promotion and tenure — criteria that allow for or even encourage public philosophy as scholarship — could post them on such a repository as resources for others?
  7. Leadership! Developing and maintaining these (and no doubt others I’ve missed) resources will require leadership, and maybe even money.

I’d be interested in thoughts on this list, including things you think should be added to it.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy

The tracking of the use of research has become central to the measurement of research impact. While historically this tracking has meant using citations to published papers, the results are old, biased, and inaccessible – and stakeholders need current data to make funding decisions. We can do much better. Today’s users of research interact with that research online. This leaves an unprecedented data trail that can provide detailed data on the attention that specific research outputs, institutions, or domains receive.

However, while the promise of real-time information is tantalizing, the collection of this data is outstripping our knowledge of how best to use it, our understanding of its utility across differing research domains, and our ability to address the privacy and confidentiality issues. This is particularly true in the Humanities and Social Sciences, which have historically been underrepresented in the collection of scientific corpora of citations, and which are now underrepresented by the tools and analysis approaches being developed to track the use and attention received by STM research outputs.

We will convene a meeting that combines a discussion of the state of the art in one way in which research impact can be measured – article-level metrics and altmetrics – with a critical analysis of current gaps and identification of ways to address them in the context of the Humanities and Social Sciences.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy.

Publishers withdraw more than 120 gibberish papers : Nature News & Comment

Publishers withdraw more than 120 gibberish papers : Nature News & Comment.

Thanks to one of my students — Addison Amiri — for pointing out this piece by @Richvn.

Feature: The REF – how was it for you? | Features | Times Higher Education

Feature: The REF – how was it for you? | Features | Times Higher Education.

How journals like Nature, Cell and Science are damaging science | Randy Schekman | Comment is free | The Guardian

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called “impact factor” – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

via How journals like Nature, Cell and Science are damaging science | Randy Schekman | Comment is free | The Guardian.

Thanks to my colleague Diana Hicks for pointing this out to me.

The last line of the quotation strikes me as the most interesting point, one that deserves further development. The steering effect of metrics is well known (Weingart 2005). There’s growing resistance to the Journal Impact Factor. Although the persuasive comparison between researchers and bankers is itself over the top, the last line suggests — at least to me — a better way to critique the reliance on the Journal Impact Factor, as well as other attempts to measure research. It’s a sort of reverse Kant with an Illichian flavor, which I will formulate as a principle here, provided that everyone promises to keep in mind my attitude toward principles.

Here is one formulation of the principle: Measure researchers only in ways that recognize them as autonomous agents, never merely as means to other ends.

Here is another: Never treat measures as ends in themselves.

Once measures, which are instruments to the core, take on a life of their own, we have crossed the line that Illich calls the second watershed. That the Journal Impact Factor has in fact crossed that line is the claim made in the quote, above, though not using Illich’s language. The question we should be asking is how researchers can manage measures, rather than how we can measure researchers in order to manage them.
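Since the Journal Impact Factor does so much steering work in this discussion, it may help to see how little is behind it. Here is a minimal sketch of the standard two-year calculation; the function name and citation counts are invented for illustration, not drawn from any real journal:

```python
# Two-year Journal Impact Factor for year Y:
# citations received in Y to items the journal published in Y-1 and Y-2,
# divided by the number of citable items it published in Y-1 and Y-2.

def two_year_impact_factor(citations_to_prior_two_years: int,
                           citable_items_prior_two_years: int) -> float:
    """Return the two-year JIF as a simple ratio."""
    if citable_items_prior_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_to_prior_two_years / citable_items_prior_two_years

# Invented example: 400 citations in 2014 to the journal's 2012-2013
# papers, against 100 citable items published in 2012-2013.
print(two_year_impact_factor(400, 100))  # → 4.0
```

A single ratio of this kind says nothing about how citations are distributed across individual papers, which is part of why treating it as an end in itself is so distorting.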
_______________________________________________________

Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62(1), 117–131.