I haven’t watched this yet, but I feel it’s important to put it here to provide context.
Today, Stephen Curry published a piece on his blog on “Academic freedom and responsibility: why Plan S is not unethical,” and I want to offer a response to some of his arguments here.
The first thing to say is that I think Curry and I agree on quite a few points. We especially agree that to speak of academic freedom means we should also speak of academic responsibility. For six years (2012-2018), I was a member of the American Association for the Advancement of Science (AAAS) Committee on Scientific Freedom and Responsibility. I fully support the AAAS Statement on Scientific Freedom and Responsibility, which the Committee co-authored:
Scientific freedom and scientific responsibility are essential to the advancement of human knowledge for the benefit of all. Scientific freedom is the freedom to engage in scientific inquiry, pursue and apply knowledge, and communicate openly. This freedom is inextricably linked to and must be exercised in accordance with scientific responsibility. Scientific responsibility is the duty to conduct and apply science with integrity, in the interest of humanity, in a spirit of stewardship for the environment, and with respect for human rights.
In particular, the Statement clearly expresses the key point that freedom and responsibility are inextricably linked, such that freedom must be exercised in accordance with responsibility. The same applies to academic freedom more generally. I am very much opposed to the idea that, under the rubric of academic freedom, anything goes!
Another point of agreement with Curry is that Plan S presents an opportunity for us to discuss our academic freedom and responsibility. I thank him for not simply dismissing concerns about academic freedom and for engaging in conversation! I only wish more of our fellow academics would be so willing to engage.
I think we also agree that the main point of disagreement between us is how best to balance our academic freedom and academic responsibility. I think it would be fair to say that, prima facie, I favor limiting academic freedom less than Curry does; or, perhaps, that he favors a more restrictive scope for academic freedom than I do; or maybe that he would draw the line between freedom and responsibility differently from how I would. So, the locus of the discussion is here.
Now, to turn to the details of Curry’s post. Based on conversations on Twitter today, I don’t think it’s necessary to spend too much time on this first point; but we can always revisit it, if I’m wrong. Curry’s initial argument against the academic freedom arguments made by my colleagues and me is that they rest on shaky foundations. In particular, Curry homes in on the claim that the freedom to publish in venues of our choice is fundamental to academic freedom, writing:
If we are to properly debate the question of whether choice of publication venue is a “basic tenet” of academic freedom, we need an evidence base of some sort.
Noting that we failed to provide a citation for this claim, Curry seeks evidence in various statements on academic freedom. He finds some evidence in a publisher’s statement, but then notes that a publisher has a vested interest. He finds no evidence in the Wikipedia entry on academic freedom. He finds some evidence in the AAUP 1940 Statement on Academic Freedom, which calls for “full freedom in research and in the publication of results;” but he argues that this call is vague and does not specifically mention freedom of venue in which to publish. (As for the bit about “pecuniary return,” I’m pretty sure that applies to patents or to publications that might produce royalties — so, if I do some research to make money, not if I get a grant, from which I don’t profit financially.) Curry then proceeds to search for evidence in a 1997 UNESCO statement on academic freedom that says: “Higher-education teaching personnel should be free to publish the results of research and scholarship in books, journals and databases of their own choice.” Curry then goes on to examine the UNESCO document more fully and concludes (his emphasis):
The preamble and principles put a clear emphasis on academic freedom as a freedom from undue political interference in the questions that academics may ask and write about, and it is this concern that seems uppermost in their minds when they write about the freedom to publish.
This is important, since the discussion today on Twitter between Curry and Richard Poynder turned on precisely whether UNESCO had a negative view of academic freedom (freedom from interference, as Curry argued) or a positive view (freedom to publish in the venue of one’s choice, as Poynder argued). I discuss this distinction between negative and positive views of academic freedom in greater detail here; but I think this difference underlies a lot of the disagreements about the topic.
The bottom line of Curry’s attempts to find “evidence” for the claim that the right to choose where to publish is fundamental to academic freedom is that he could not find any that would provide unequivocal support. For that reason, he concludes that the claim rests on shaky foundations.
My response may sound odd, since it is a claim I was a party to that is under attack. But I hold that, even if all the sources Curry explored agreed explicitly with the claim that choice of publication venue is vital to academic freedom, that would not provide unequivocal support for the claim. UNESCO’s recommendations don’t have the force of international law; AAUP cannot impose its definition on anyone; Wikipedia is good, but it’s not that good; and, yes, the publishers have a vested interest.
However, that these organizations don’t provide unequivocal support for the claim doesn’t show that the claim rests on shaky foundations; these sources were never meant to serve as foundations for the claim. As I said in an earlier post,
Academic freedom would be a thing — an ethical thing — even if there were no laws about it.
So, the first point of disagreement between Curry and me concerns what would constitute evidence for the claim that choice of venue of publication is fundamental to academic freedom. I think the fact that academics normally expect to be able to choose the venue of publication for their research supports the claim that choice of venue is a fundamental aspect of academic freedom better than any of these definitions examined by Curry (even had they provided unequivocal statements in favor of choice of venue). Michael J. Barany tweeted something today that I haven’t had a chance to read, yet, that may force me to reexamine this claim:
Until I do read it, though, I think that the academic norms I’m familiar with regarding venue of publication support the claim that being able to submit a manuscript to the venue of your choice is a normal expectation for an academic. Moreover, I think that academic norms provide at least prima facie support for claims about academic freedom in general. Can academic norms be questioned? Of course! I’ve argued for years now that academics receiving public funding for their research have a duty to the public to try to ensure that their research will have broader societal impacts. For quite a while, academics wanted to insist they didn’t have such a duty. So, my approach was to try to engage academics in a discussion on the topic. This, I take it, is also the approach Curry is taking with regard to Plan S.
So, based on my knowledge (which is, of course, limited) of academic norms, I would say academics have the following expectations that all fall under the rubric of academic freedom:
This isn’t an exhaustive list, but it’s one I think most academics would look at and say, yeah, that’s about right.
Note that the list already includes some limitations that would also fall under academic responsibility (to the department and to the university). I think there are others, including:
I expect less agreement on this list of responsibilities. But I do agree with Curry that discussing our freedoms and responsibilities is a really good way to continue the discussion.
In a recent post on the F1000 Blog, Rebecca Lawrence suggests that academic freedom is more myth than reality:
Academic freedom?
Other criticisms [of Plan S] focus on possible effects from the point of view of researchers as authors (rather than as readers and users of research) and the so called ‘academic freedom’ restrictions. But current ‘academic freedoms’ are somewhat of a myth, because the existing entrenched system of deciding on funding, promotions and tenure depends more on where you publish, than on what you publish and how your work has value to others. Hence, authors have to try to publish their work in the small subset of journals that are most likely to help their careers.
This scramble to publish the ‘best’ results in the ‘best’ journals causes many problems, including the high cost of such a selective process in these ‘high-impact’ journals, the repeated cost (both actual and time cost) of multiple resubmissions trying to find the ‘right place’ for the publication in the journal hierarchy, and the high opportunity cost. This, combined with the high proportion of TA journals and the highly problematic growth of hybrid journals not only significantly increases cost, but compromises the goal of universal OA to research results – one of the greatest treasures the society can have and should expect.
We believe that if Plan S is implemented with the strong mandate it currently suggests, it will be a major step towards the goal of universal OA to research results and can greatly reduce overall costs in the scholarly communication system – which will itself bring benefits to researchers as authors and as users of research and indeed increase academic freedom.
I agree that the focus on where we publish rather than what we publish is detrimental to academia in all sorts of ways. When it comes to judging fellow academics’ publication records, too many use the journal title (the linguistic proxy for its impact factor) as a sufficient indicator of the quality of the article. What we should do, instead, is actually read the article. We should also reward academics for publishing in venues that are most likely to reach and impact their intended audiences and for writing in ways that are clearly understandable to non-specialists, when those non-specialists are the intended audience. Instead, we are often too quick to dismiss such publications as non-rigorous.
However, that academics evaluate each other in very messed up ways doesn’t show that academic freedom is a myth. What it shows is that academics aren’t always as thoughtful as we should be about how we exercise our academic freedom.
I’ve never suggested that academic freedom means anything goes (or that you get to publish wherever you want, regardless of what the peer reviewers and editors say). What it does mean, though, is that, to a very large extent, we academics give ourselves the rules under which we operate, at least in terms of research and teaching. Again, I am not suggesting that anything goes. We still have to answer to laws about nepotism, corruption, sexual harassment, or murder. We’re not supposed to speed when we drive, ride our bicycles on the sidewalk, or lie on our taxes. I’m not even suggesting we are very wise about the rules we impose on ourselves.
In fact, I agree with Rebecca that the ways we evaluate each other are riddled with errors. But academic freedom means we have autonomy — give ourselves the law — when it comes to teaching and research. This freedom also comes with responsibilities: we need to teach on the topic of the course, for instance, not spend class time campaigning for our favorite politicians; we shouldn’t plagiarize or fabricate data; I even think we have a duty to try to ensure that our research has an impact on society.
Public funding bodies can obviously place restrictions on us about how we spend those funds. Maybe we’re not allowed to use grant funds to buy alcohol or take our significant others with us on research trips. Public funding bodies can decline to fund our research proposals. Academic freedom doesn’t say I’m entitled to a grant or that I get to spend the money on whatever I want when I get one.
But for public funding bodies to say that I have to publish my research under a CC-BY or equivalent license would cross the line and impinge on academic freedom. Telling me where and how to publish is something I let other academics do, because that’s consistent with academic freedom. I don’t always agree with their decisions. But the decisions of other academics are decisions we academics have to live with — or find a way to change. I want academics to change the rules about how we evaluate each other. Although it seems perfectly reasonable for funding bodies to lay out the rules for determining who gets grants and how money can be spent, I don’t want funding bodies dictating the rules about how we evaluate each other as part of the academic reward system, decisions about promotion, and such. Mandating a CC-BY license crosses that line into heteronomy.
In a recent blog post, my co-authors and I refer to Plan S as ‘unethical’. Doing so has upset Marc Schiltz, President of Science Europe.
Schiltz claims that disagreeing with some, or even many, aspects of Plan S does not in itself justify calling Plan S ‘unethical’. I completely agree. To justify calling Plan S ‘unethical’ would require more than simply disagreeing with some aspect of Plan S.
What more would be required? Calling Plan S ‘unethical’ would require an argument that shows that Plan S has violated some sort of ethical norm or crossed some sort of ethical line. Insofar as Plan S impinges on academic freedom, it has done just that.
Academic freedom is a contentious topic in and of itself, but particularly so when engaging in discussions about Open Access (OA). Part of the reason for the heightened tension surrounding academic freedom and OA is the perception that for-profit publishers have appealed to academic freedom to pummel OA advocates, portraying them as invaders of academics’ territory and themselves as defenders of academic freedom. As a result, anyone who appeals to academic freedom in an OA discussion runs the risk of being dismissed by OA advocates as an enemy in league with the publishers.
It’s also the case that academic freedom means different things in different contexts. In some countries, such as the UK and Germany, academic freedom is written into laws. In the US, the AAUP is the main source people use to define academic freedom. I’m a philosopher and an ethicist, not a lawyer. I’m also an American working at an American university, so my own conception of academic freedom is influenced by — but not exactly the same as — the AAUP definition. In short, I approach academic freedom as expressing an ethical norm of academia, rather than in terms of a legal framework. No doubt there are good reasons for such laws in different contexts; but academic freedom would be a thing — an ethical thing — even if there were no laws about it.
I won’t rehash the whole argument from our original post here. I direct interested parties to the sections of the blog under the sub-heading, “The problem of violating academic freedom.” If I had it to do over again, I would suggest to my coauthors altering some of the language in that section; but the bottom line remains the same — Plan S violates academic freedom. Insofar as Plan S violates academic freedom, it violates an ethical norm of academia. Hence, Plan S is unethical.
This is not to say that OA is unethical or necessarily violates academic freedom. I have argued in the past that OA need not violate academic freedom. In the recent flurry of discussion of Plan S on Twitter, Peter Suber pointed me to the carefully crafted Harvard OA policy’s answer to the academic freedom question. That policy meticulously avoids violating academic freedom (and would therefore count, for me, as an ethical OA policy).
To say that Plan S is unethical is simply to say that some aspects of it violate academic freedom. Some are an easy fix. Take, for instance, Principle #1.
Authors retain copyright of their publication with no restrictions. All publications must be published under an open license, preferably the Creative Commons Attribution Licence CC BY. In all cases, the license applied should fulfil the requirements defined by the Berlin Declaration;
The violation of academic freedom in Principle #1 is contained in the last clause: “In all cases, the license applied should fulfil [sic] the requirements defined by the Berlin Declaration.” Because the Berlin Declaration actually requires an equivalent of the CC-BY license, that clause totally undermines the “preferably” in the previous clause. If Plan S merely expressed a strong preference for CC-BY or the equivalent, but allowed researchers to choose from among more restrictive licenses on a case-by-case basis, Principle #1 would not violate academic freedom. The simple fix is to remove the last clause of Principle #1.
Other issues are less easily fixed. In particular, I have in mind Schiltz’s Preamble to Plan S. There, Schiltz argues as follows.
We recognise that researchers need to be given a maximum of freedom to choose the proper venue for publishing their results and that in some jurisdictions this freedom may be covered by a legal or constitutional protection. However, our collective duty of care is for the science system as a whole, and researchers must realise that they are doing a gross disservice to the institution of science if they continue to report their outcomes in publications that will be locked behind paywalls.
I won’t rehash here the same argument my co-authors and I put forth in our initial blog post. Instead, I have a couple of other things to say here about Schiltz’s position, as expressed in this quote.
First, I have absolutely no objection on academic freedom grounds to making all of my research freely available (gratis) and removing paywalls. I agree that researchers have a duty to make their work freely available, if possible. Insofar as Plan S allows researchers to retain their copyrights and enables gratis OA, it’s a good thing, even an enhancer of academic freedom. The sticking point is mandating a CC-BY or equivalent license, which unethically limits the freedom of academics to choose from a broad range of possible licenses (libre is not a single license, but a range of possible ones). Fix Principle #1, and this particular violation of academic freedom disappears.
Second, there’s a trickier issue concerning individual freedom and group obligations. I discussed the issue in greater detail here. But the crux of the matter is that Schiltz here displays a marked preference for the rights of the group (or even of the impersonal “science system as a whole”) over the rights of individual members of the group. That position may be ethically defensible, but Schiltz here simply asserts that the duty to science overrides concerns for academic freedom. Simply asserting that one duty trumps another does a good job of communicating where someone stands on the issue. However, it provides absolutely no support for their position.
Insofar as Plan S is designed on the basis of an undefended assertion that our collective duty to the science system as a whole outweighs our right as individuals to academic freedom, Plan S impinges on academic freedom. In doing so, Plan S violates an ethical norm of academia. Therefore, Plan S, as written, is unethical.
This looks like fun!
I really want to post a pre-print of my recently published article in the Journal of Responsible Innovation: “Designing Responsible Research and Innovation as a tool to encourage serendipity could enhance the broader societal impacts of research.” Here’s a link to the published version. One thing about this article that would be obvious if one were to compare the pre-print to the final published version is just how much the latter was improved by peer review and input from the journal editor.
Since I still don’t have an institutional repository at NJIT, I could post it at Humanities Commons. Before I do that, I want to make sure I don’t get sideways with Taylor and Francis. So, the prudent thing to do is to check with SHERPA/RoMEO to see what the journal policies are. The problem, however, is that SHERPA/RoMEO hasn’t yet ‘graded’ JRI, so they don’t tell me what the policies are. This is all sort of understandable, since JRI is still a relatively new journal. Searching an older journal put out by the same publisher, Social Epistemology, tells me that I could post both pre-prints and post-prints — that is, my version, but not the actual publisher’s PDF, of the article after it went through peer review — of articles I published there. So, maybe I could go ahead, assuming that Taylor and Francis policy is consistent across all their journals. Instead, I requested that SHERPA/RoMEO grade JRI.
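The decision logic I was following can be sketched in a few lines. To be clear, this is a hypothetical illustration and not the actual SHERPA/RoMEO API: the record shape, field names, and function name below are my own assumptions, and the real service reports journal policies in a much richer format.

```python
# Hypothetical sketch of the self-archiving check described above.
# The policy-record shape is an assumption for illustration only;
# SHERPA/RoMEO reports journal policies in a richer format.

def may_post(policy: dict, version: str) -> bool:
    """Return True if the given manuscript version ('preprint',
    'postprint', or 'publisher_pdf') may be self-archived under
    this (simplified) journal policy."""
    return version in policy.get("allowed_versions", [])

# Example: the policy reported for Social Epistemology allows the
# author's own versions, but not the publisher's PDF.
social_epistemology = {
    "journal": "Social Epistemology",
    "publisher": "Taylor & Francis",
    "allowed_versions": ["preprint", "postprint"],
}

print(may_post(social_epistemology, "postprint"))      # True
print(may_post(social_epistemology, "publisher_pdf"))  # False
```

The open question in my case was simply whether a policy record like this one carries over from Social Epistemology to JRI, which is why I asked SHERPA/RoMEO to grade the journal rather than assume it does.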
I can wait a while to post the pre-print, and I want to gauge how long it takes to get a grade. I’m also waiting to find out how long it takes for JRI to show up in Scopus (their main ‘about‘ page says they are indexed in Scopus, but the journal hasn’t appeared there yet). I’ve also been told that NJIT is getting bepress soon.
All of these — Humanities Commons, SHERPA/RoMEO, bepress — are tools for serendipity in the sense in which I outline the term in this article. As soon as I can let everyone see it, I will!
This semester I’m attending a series of Faculty Development Workshops at NJIT designed to assist new faculty with such essentials as teaching, grant writing, publishing, and tenure & promotion.
I’m posting here now in hopes of getting some feedback on a couple of rubrics I developed after attending the second such workshop.
I’m having students give group presentations in my course on Sports, Technology, and Society, and I was searching for ways to help ensure that all members contributed to the group presentation, as well as to differentiate among varying degrees of contribution. Last Tuesday’s workshop focused on assessment, with some treatment of the use of rubrics for both formative and summative assessment. I did a bit more research on my own, and here’s what I’ve come up with.
First, I developed a two-pronged approach. I want to be able to grade the presentation as a whole, as well as each individual’s contribution to that presentation. I decided to make the group presentation grade worth 60% and the individual contribution grade worth 40% of the overall presentation grade.
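The weighting just described amounts to a simple calculation, sketched below. The function name and the 0-100 scoring scale are my own assumptions for illustration; they aren’t part of the rubrics themselves.

```python
# A minimal sketch of the two-pronged grading scheme described above:
# the overall presentation grade is a 60/40 weighted combination of
# the group grade and the individual-contribution grade.

def presentation_grade(group_score: float, individual_score: float,
                       group_weight: int = 60) -> float:
    """Combine a group presentation score and an individual
    contribution score (both on a 0-100 scale) into one grade.
    group_weight is the group component's share as a percentage."""
    individual_weight = 100 - group_weight
    return (group_weight * group_score
            + individual_weight * individual_score) / 100

# Example: strong group presentation, weaker individual contribution.
print(presentation_grade(90, 75))  # (60*90 + 40*75) / 100 = 84.0
```

One design note: keeping the weight as a parameter makes it easy to experiment with the 60/40 split after seeing how the peer assessments come in.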
Second, I developed the group presentation rubric. For this, I owe a debt to several of the rubrics posted by the Eberly Center at Carnegie Mellon University. I found the rubrics for the philosophy paper and the oral presentation particularly helpful. I am thinking about using this rubric both for formative evaluation (to show the students what I expect), as well as for summative evaluation (actually grading the presentations).
Third, I developed the individual peer assessment rubric. I would actually have the students anonymously fill out one of these for each of their fellow group members. For this rubric, I found a publication from the University of New South Wales to be quite helpful (especially Table 2).
I’d be quite interested in constructive feedback on this approach.
The tracking of the use of research has become central to the measurement of research impact. While historically this tracking has meant using citations to published papers, the results are old, biased, and inaccessible – and stakeholders need current data to make funding decisions. We can do much better. Today’s users of research interact with that research online. This leaves an unprecedented data trail that can provide detailed data on the attention that specific research outputs, institutions, or domains receive.
However, while the promise of real time information is tantalizing, the collection of this data is outstripping our knowledge of how best to use it, our understanding of its utility across differing research domains and our ability to address the privacy and confidentiality issues. This is particularly true in the field of Humanities and Social Sciences, which have historically been under represented in the collection of scientific corpora of citations, and which are now under represented by the tools and analysis approaches being developed to track the use and attention received by STM research outputs.
We will convene a meeting that combines a discussion of the state of the art in one way in which research impact can be measured – Article Level and Altmetrics – with a critical analysis of current gaps and identification of ways to address them in the context of Humanities and Social Sciences.
Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy.
Cherry A. Murray delivered the Carey Lecture last night at this year’s AAAS Forum on S&T Policy. I want to address one aspect of her talk here — the question of transdisciplinarity (TD, which I will also use for the adjective ‘transdisciplinary’) and its necessity to address the ‘big’ questions facing us.
As far as I could tell, Murray was working with her own definitions of disciplinary (D), multidisciplinary (MD), interdisciplinary (ID), and TD. In brief, according to Murray, D refers to single-discipline approaches to a problem, ID refers to two disciplines working together on the same problem, MD refers to more than two disciplines focused on the same problem from their own disciplinary perspectives, and TD refers to more than two disciplines working together on the same problem. Murray also used the term cross-disciplinary, which she did not define (to my recollection).
All these definitions are cogent. But do we really need a different term for two disciplines working on a problem together (ID) and more than two disciplines working on a problem together (TD)? Wouldn’t it be simpler just to use ID for more than one discipline?
I grant that there is no universally agreed upon definition of these terms (D, MD, ID, and TD). But basically no one who writes about these issues uses the definitions Murray proposed. And there is something like a rough consensus on what these terms mean, despite the lack of universal agreement. I discuss this consensus, and what these definitions mean for the issue of communication (and, by extension, cooperation) between and among disciplines here: 10.1007/s11229-012-0179-7.
I tend to agree that TD is a better approach to solving complex problems. But in saying this, I mean something more than getting multiple disciplines to work together; I mean involving non-academic, and hence non-disciplinary, actors in the process. It’s actually closer to the sort of design thinking that Bob Schwartz discussed in the second Science + Art session yesterday afternoon.
One might ask whether this discussion of terms is a distraction from Murray’s main point — that we need to think about solutions to the ‘big problems’ we face. I concede the point. But that is all the more reason to get our terms right, or at least to co-construct a new language for talking about what sort of cooperation is needed. There is a literature out there on ID/TD, and Murray failed to engage it. To point out that failure is not to make a disciplinary criticism of Murray (as if there might be a discipline of ID/TD, a topic I discuss here). It is to suggest, however, that inventing new terms on one’s own is not conducive to the sort of communication necessary to tackle the ‘big’ questions.
Wow.
We would like to announce that Altmetric have begun tracking mentions of academic articles on Chinese microblogging site Sina Weibo, and the data will shortly be fully integrated into existing Altmetric tools.
The mentions collated will be visible to users via the Altmetric Explorer, a web-based application that allows users to browse the online mentions of any academic article, and, where appropriately licensed, via the article metrics data on publisher platforms.
Launched in 2009, Sina Weibo has become one of the largest social media sites in China, and is most often likened to Twitter. Integrating this data means that Altmetric users will now be able to see a much more global view of the attention an article has received. Altmetric is currently the only article level metrics provider to offer this data.
via Altmetric Begin Tracking Mentions Of Articles On Sina Weibo | STM Publishing.