On the “Myth” of Academic Freedom

In a recent post on the F1000 blog, Rebecca Lawrence suggests that academic freedom is more myth than reality:

Academic freedom?

Other criticisms [of Plan S] focus on possible effects from the point of view of researchers as authors (rather than as readers and users of research) and the so called ‘academic freedom’ restrictions. But current ‘academic freedoms’ are somewhat of a myth, because the existing entrenched system of deciding on funding, promotions and tenure depends more on where you publish, than on what you publish and how your work has value to others. Hence, authors have to try to publish their work in the small subset of journals that are most likely to help their careers.

This scramble to publish the ‘best’ results in the ‘best’ journals causes many problems, including the high cost of such a selective process in these ‘high-impact’ journals, the repeated cost (both actual and time cost) of multiple resubmissions trying to find the ‘right place’ for the publication in the journal hierarchy, and the high opportunity cost.  This, combined with the high proportion of TA journals and the highly problematic growth of hybrid journals not only significantly increases cost, but compromises the goal of universal OA to research results – one of the greatest treasures the society can have and should expect.

We believe that if Plan S is implemented with the strong mandate it currently suggests, it will be a major step towards the goal of universal OA to research results and can greatly reduce overall costs in the scholarly communication system – which will itself bring benefits to researchers as authors and as users of research and indeed increase academic freedom.

I agree that the focus on where we publish rather than what we publish is detrimental to academia in all sorts of ways. When it comes to judging fellow academics’ publication records, too many use the journal title (the linguistic proxy for its impact factor) as a sufficient indicator of the quality of the article. What we should do, instead, is actually read the article. We should also reward academics for publishing in venues that are most likely to reach and impact their intended audiences and for writing in ways that are clearly understandable to non-specialists, when those non-specialists are the intended audience. Instead, we are often too quick to dismiss such publications as non-rigorous.

However, that academics evaluate each other in very messed up ways doesn’t show that academic freedom is a myth. What it shows is that academics aren’t always as thoughtful as we should be about how we exercise our academic freedom.

You're doing it wrong

I’ve never suggested that academic freedom means anything goes (or that you get to publish wherever you want, regardless of what the peer reviewers and editors say). What it does mean, though, is that, to a very large extent, we academics give ourselves the rules under which we operate, at least in terms of research and teaching. Again, I am not suggesting that anything goes. We still have to answer to laws about nepotism, corruption, sexual harassment, or murder. We’re not supposed to speed when we drive, ride our bicycles on the sidewalk, or lie on our taxes. I’m not even suggesting we are very wise about the rules we impose on ourselves.

In fact, I agree with Rebecca that the ways we evaluate each other are riddled with errors. But academic freedom means we have autonomy — give ourselves the law — when it comes to teaching and research. This freedom also comes with responsibilities: we need to teach on the topic of the course, for instance, not spend class time campaigning for our favorite politicians; we shouldn’t plagiarize or fabricate data; I even think we have a duty to try to ensure that our research has an impact on society.

Public funding bodies can obviously place restrictions on us about how we spend those funds. Maybe we’re not allowed to use grant funds to buy alcohol or take our significant others with us on research trips. Public funding bodies can decline to fund our research proposals. Academic freedom doesn’t say I’m entitled to a grant or that I get to spend the money on whatever I want when I get one.

But for public funding bodies to say that I have to publish my research under a CC-BY or equivalent license would cross the line and impinge on academic freedom. Telling me where and how to publish is something I let other academics do, because that’s consistent with academic freedom. I don’t always agree with their decisions. But the decisions of other academics are decisions we academics have to live with — or find a way to change. I want academics to change the rules about how we evaluate each other. Although it seems perfectly reasonable for funding bodies to lay out the rules for determining who gets grants and how money can be spent, I don’t want funding bodies dictating the rules about how we evaluate each other as part of the academic reward system, decisions about promotion, and such. Mandating a CC-BY license crosses that line into heteronomy.


Camp Engineering Education AfterNext

This looks like fun!

Tools for Serendipity: SHERPA/RoMEO

I really want to post a pre-print of my recently published article in the Journal of Responsible Innovation: “Designing Responsible Research and Innovation as a tool to encourage serendipity could enhance the broader societal impacts of research.” Here’s a link to the published version. One thing that would be obvious to anyone comparing the pre-print with the final published version is just how much the latter was improved by peer review and input from the journal editor.

Since I still don’t have an institutional repository at NJIT, I could post it at Humanities Commons. Before I do that, I want to make sure I don’t run afoul of Taylor & Francis. So, the prudent thing to do is to check SHERPA/RoMEO to see what the journal’s policies are. The problem, however, is that SHERPA/RoMEO hasn’t yet ‘graded’ JRI, so it can’t tell me what the policies are. This is understandable, since JRI is still a relatively new journal. Searching an older journal put out by the same publisher, Social Epistemology, tells me that I could post both pre-prints and post-prints of articles I published there (a post-print being my own version of the article after peer review, not the actual publisher’s PDF). So, maybe I could go ahead, assuming that Taylor & Francis policy is consistent across all their journals. Instead, I requested that SHERPA/RoMEO grade JRI.
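Incidentally, this kind of lookup can be scripted. Below is a minimal sketch in Python against what I take to be SHERPA/RoMEO’s legacy XML API (api29); the endpoint, parameters, and element names are assumptions drawn from their public documentation, so double-check them before relying on this.

```python
# A sketch only: the api29 endpoint, parameters, and XML element names are
# assumptions based on SHERPA/RoMEO's public documentation.
import requests
import xml.etree.ElementTree as ET

def romeo_grade(journal_title):
    """Print SHERPA/RoMEO's colour grade and archiving permissions for a journal."""
    resp = requests.get(
        "http://www.sherpa.ac.uk/romeo/api29.php",
        params={"jtitle": journal_title, "qtype": "exact"},
        timeout=10,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for pub in root.iter("publisher"):
        print("Publisher:", pub.findtext("name"))
        print("RoMEO colour:", pub.findtext("romeocolour"))
        print("Pre-print archiving:", pub.findtext("preprints/prearchiving"))
        print("Post-print archiving:", pub.findtext("postprints/postarchiving"))

romeo_grade("Social Epistemology")  # swap in another title to repeat the check
```

Running the same query for JRI now and again would be an easy way to check whether its grade has appeared.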

I can wait a while to post the pre-print, and I want to gauge how long it takes to get a grade. I’m also waiting to find out how long it takes for JRI to show up in Scopus (their main ‘about‘ page says they are indexed in Scopus, but the journal hasn’t appeared there yet). I’ve also been told that NJIT is getting bepress soon.

All of these — Humanities Commons, SHERPA/RoMEO, bepress — are tools for serendipity in the sense in which I outline the term in this article. As soon as I can let everyone see it, I will!


Thoughts from the Public Philosophy Network 2018 Conference

First, I’ve been away from my own blog for far too long. My apologies. Second, no more ‘Press This’?! Ugh. So, here is a LINK to the full program of PPN 2018.

Most of these thoughts were generated during the workshop Paul Thompson ran on day 1, ‘Evaluating Public Philosophy as Academic Scholarship’. This issue is important for everyone who would like to see public philosophy succeed; but it is vitally important for those of us on the tenure track, since the inability to evaluate public philosophy as academic scholarship often means it is reduced to a ‘service’ activity. Service, of course, is seen as even less important than teaching, which is in turn often seen as less important than research. This hierarchy may be altered at small liberal arts colleges or other institutions that put special emphasis on teaching. Generally speaking, though, one’s research rules in tenure decisions. I’ve never heard, or even heard of, any advice along the lines of ‘Do more teaching and publish less’ or ‘Make sure you get on more committees or peer review more journal manuscripts’, whereas ‘Just publish more’ is something I hear frequently.

So, it’s vitally important to be able to evaluate public philosophy as academic scholarship.

I want to add that, although many of these ideas were not my own and came from group discussion, I am solely responsible for the way I put them here. I may mess up, but no one else should be blamed for my mistakes. What follows isn’t quite the ‘Survival Guide’ that Michael O’Rourke suggested developing. Instead, it is a list of things I (and perhaps others) would like to see coming from PPN. (This may change what PPN is, of course. Can a network that meets once in a while provide these things?)

We need:

  1. A statement on the value of public philosophy as academic scholarship. [EDIT: The expression of this need came up at the workshop, but no one there mentioned that such a statement already exists HERE from the APA.  Thanks to Jonathan Ellis and Kelly Parker for help in finding it! Apologies to APA for my ignorance.]
  2. A list of scholarly journals that are public philosophy friendly (i.e., where one can submit and publish work that includes public philosophy). The list would need to be curated so that new journals can be added and old ones removed when they fit or don’t fit the bill.
  3. A list of tools for making the case for the value of public philosophy. I have in mind things like altmetrics (see HERE or HERE or HERE; for a sense of how easy such data are to pull, see the sketch just after this list), but it could also include building capacity among a set of potential peers who could serve as reviewers for public philosophy scholarship.
  4. Of course, developing a cohort of peers will mean developing a set of community standards for what counts as good public philosophy. I wouldn’t want that imposed from above (somewhere?) and think this will arise naturally if we are able to foster the development of the community.
  5. Some sort of infrastructure for networking. It’s supposedly a network, right? Is there anywhere people can post profiles?
  6. A repository of documents related to promotion and tenure in public philosophy. Katie Plaisance described how she developed a memorandum of understanding establishing that her remarkably collaborative work deserved full credit as research, despite working in a field that seems to value sole authorship to the detriment of collaborative research. Katie was awesome and said she would share that document with me. But what if everyone who did smart and cool things like this to help guarantee their ability to do public philosophy had a central repository where those documents could be posted for all to view and use? What if departments with good criteria for promotion and tenure (criteria that allow for or even encourage public philosophy as scholarship) could post them there as resources for others?
  7. Leadership! Developing and maintaining these resources (and no doubt others I’ve missed) will require leadership, and maybe even money.
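On item 3: pulling altmetrics data for a single article is already easy to script. Here is a minimal sketch in Python against Altmetric.com’s free v1 API; the endpoint and response field names are my reading of their public documentation, and the DOI is a hypothetical placeholder, so treat this as a sketch rather than a definitive recipe.

```python
# A sketch only: the endpoint and field names are assumptions from Altmetric's
# public v1 API docs; the DOI below is a hypothetical placeholder.
import requests

def altmetric_attention(doi):
    """Return Altmetric's attention record for a DOI, or None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # Altmetric has no record of this article
        return None
    resp.raise_for_status()
    return resp.json()

record = altmetric_attention("10.1234/example-doi")  # hypothetical DOI
if record:
    print("Altmetric score:", record.get("score"))
    print("Tweeters:", record.get("cited_by_tweeters_count", 0))
else:
    print("Not tracked by Altmetric (yet).")
```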

I’d be interested in thoughts on this list, including things you think should be added to it.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy

The tracking of the use of research has become central to the measurement of research impact. While historically this tracking has meant using citations to published papers, the results are old, biased, and inaccessible – and stakeholders need current data to make funding decisions. We can do much better. Today’s users of research interact with that research online. This leaves an unprecedented data trail that can provide detailed data on the attention that specific research outputs, institutions, or domains receive.

However, while the promise of real time information is tantalizing, the collection of this data is outstripping our knowledge of how best to use it, our understanding of its utility across differing research domains and our ability to address the privacy and confidentiality issues. This is particularly true in the field of Humanities and Social Sciences, which have historically been under represented in the collection of scientific corpora of citations, and which are now under represented by the tools and analysis approaches being developed to track the use and attention received by STM research outputs.

We will convene a meeting that combines a discussion of the state of the art in one way in which research impact can be measured – Article Level and Altmetrics – with a critical analysis of current gaps and identification of ways to address them in the context of Humanities and Social Sciences.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy.

Altmetric.com Tracking Mentions On Sina Weibo | STM Publishing

Wow.

We would like to announce that Altmetric have begun tracking mentions of academic articles on Chinese microblogging site Sina Weibo, and the data will shortly be fully integrated into existing Altmetric tools.

The mentions collated will be visible to users via the Altmetric Explorer, a web-based application that allows users to browse the online mentions of any academic article, and, where appropriately licensed, via the article metrics data on publisher platforms.

Launched in 2009, Sina Weibo has become one of the largest social media sites in China, and is most often likened to Twitter. Integrating this data means that Altmetric users will now be able to see a much more global view of the attention an article has received. Altmetric is currently the only article level metrics provider to offer this data.

via Altmetric Begin Tracking Mentions Of Articles On Sina Weibo | STM Publishing.

Measuring the Impacts of Science | AAAS Forum on Science and Technology Policy

I’m looking forward to moderating a panel on day 1 of the AAAS Forum on Science and Technology Policy.

2:00 Current Issues in S&T Policy (Breakout Sessions) 
 
(A) Measuring the Impacts of Science
• What are the policy-relevant challenges, tools, and approaches to measuring the social impact of scientific research?
• How can improved indicators capture change in science, technology, and innovation?
• Are altmetrics the solution to measuring social impacts?
  
Moderator: J. Britt Holbrook, Visiting Assistant Professor, School of Public Policy, Georgia Institute of Technology; and Member, AAAS Committee on Scientific Freedom and Responsibility
 
Kaye Husbands Fealing, Professor, Center for Science, Technology and Environmental Policy, Humphrey School of Public Affairs, University of Minnesota; Senior Study Director, National Academy of Sciences, Committee on National Statistics; and Member, AAAS Committee on Science, Engineering, and Public Policy
 
Gil Omenn, Director, Center for Computational Medicine and Bioinformatics, University of Michigan
 
Mike Taylor, Research Specialist, Elsevier Labs