Philosophy and Science Policy: A Report from the Field I

I’m actually going to give a series of reports from the field, including a chapter in a book on Field Philosophy that I’m revising now in light of editor/reviewer comments. In the chapter, I discuss our Comparative Assessment of Peer Review project. For a brief account of Field Philosophy, see the preprint of a manuscript I co-authored with Diana Hicks. That’s also being revised now.

Today, however, I will be focusing on more pressing current events having to do with Plan S. So, I will give a talk at the NJIT Department of Humanities Fall Colloquium Series to try to let my colleagues know what I’ve been up to recently. Here are the slides.

As a philosopher, my approach is not simply to offer an objective ‘report’. Instead, I will be offering an argument.

If we want to encourage academic flourishing, then we need new ways of evaluating academic research. We want to encourage academic flourishing. Therefore, we need new ways of evaluating academic research.

Of course, the argument also refers to my own activities. I want my department to understand and value my forays into the field of science policy. But that will mean rethinking the way I am currently evaluated (which follows fairly standard lines).

Camp Engineering Education AfterNext

This looks like fun!

Reflections on the 2014 Carey Lecture at the AAAS Forum on S&T Policy

Cherry A. Murray delivered the Carey Lecture last night at this year’s AAAS Forum on S&T Policy. I want to address one aspect of her talk here — the question of transdisciplinarity (TD, which I will also use for the adjective ‘transdisciplinary’) and its necessity to address the ‘big’ questions facing us.

As far as I could tell, Murray was working with her own definitions of disciplinary (D), multidisciplinary (MD), interdisciplinary (ID), and TD. In brief, according to Murray, D refers to single-discipline approaches to a problem, ID refers to two disciplines working together on the same problem, MD refers to more than two disciplines focused on the same problem from their own disciplinary perspectives, and TD refers to more than two disciplines working together on the same problem. Murray also used the term cross-disciplinary, which she did not define (to my recollection).

All these definitions are cogent. But do we really need a different term for two disciplines working on a problem together (ID) and more than two disciplines working on a problem together (TD)? Wouldn’t it be simpler just to use ID for more than one discipline?

I grant that there is no universally agreed upon definition of these terms (D, MD, ID, and TD). But basically no one who writes about these issues uses the definitions Murray proposed. And there is something like a rough consensus on what these terms mean, despite the lack of universal agreement. I discuss this consensus, and what these definitions mean for the issue of communication (and, by extension, cooperation) between and among disciplines, here: doi:10.1007/s11229-012-0179-7

I tend to agree that TD is a better approach to solving complex problems. But in saying this, I mean more than involving more than two disciplines. I mean involving non-academic, and hence non-disciplinary, actors in the process. It’s actually closer to the sort of design thinking that Bob Schwartz discussed in the second Science + Art session yesterday afternoon.

One might ask whether this discussion of terms is a distraction from Murray’s main point — that we need to think about solutions to the ‘big problems’ we face. I concede the point. But that is all the more reason to get our terms right, or at least to co-construct a new language for talking about what sort of cooperation is needed. There is a literature out there on ID/TD, and Murray failed to engage it. To point out that failure is not to make a disciplinary criticism of Murray (as if there might be a discipline of ID/TD, a topic I discuss here). It is to suggest, however, that inventing new terms on one’s own is not conducive to the sort of communication necessary to tackle the ‘big’ questions.

PLOS Biology: Expert Failure: Re-evaluating Research Assessment

Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.

via PLOS Biology: Expert Failure: Re-evaluating Research Assessment.

You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.

I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010). “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press: 321-32; and J. Britt Holbrook (2012). “Re-assessing the science – society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997 – 2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds. Beijing: People’s Publishing House: 328-62.

Developing Metrics for the Evaluation of Individual Researchers – Should Bibliometricians Be Left to Their Own Devices?

So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.

That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!

The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.

I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:

Bibliometrics are increasingly being used in ways that go beyond their design, and bibliometricians are increasingly asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since the beginning of the development of bibliometrics. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.

This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are also aware of this phenomenon, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics” is one indication of such awareness. Recent sessions at ISSI in Vienna and STI2013 in Berlin, on which Wouters et al. report, are other indicators that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.

That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.

It’s not so much that I want to blame them for not having thought through other possible uses of their metrics — holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us into this mess in the first place.

Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.

Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.

With the development of individual-level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world, so that they end up steering us rather than us driving them. We don’t want what happened with the car to happen with bibliometrics. What we want is to stop at the point where metrics for individual researchers maximize the power and creativity of those researchers. Once we standardize metrics, it becomes that much easier to institutionalize them.

It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Evaluating Research beyond Scientific Impact: How to Include Criteria for Productive Interactions and Impact on Practice and Society

New, Open Access article just published.

Authors: Wolf, Birge; Lindenthal, Thomas; Szerencsits, Manfred; Holbrook, J. Britt; Heß, Jürgen

Source: GAIA – Ecological Perspectives for Science and Society, Volume 22, Number 2, June 2013, pp. 104-114(11)

Abstract:

Currently, established research evaluation focuses on scientific impact – that is, the impact of research on science itself. We discuss extending research evaluation to cover productive interactions and the impact of research on practice and society. The results are based on interviews with scientists from (organic) agriculture and a review of the literature on broader/social/societal impact assessment and the evaluation of interdisciplinary and transdisciplinary research. There is broad agreement about what activities and impacts of research are relevant for such an evaluation. However, the extension of research evaluation is hampered by a lack of easily usable data. To reduce the effort involved in data collection, the usability of existing documentation procedures (e.g., proposals and reports for research funding) needs to be increased. We propose a structured database for the evaluation of scientists, projects, programmes and institutions, one that will require little additional effort beyond existing reporting requirements.

Peer Evaluation: Evaluating Research beyond Scientific Impact: How to Include Criteria for Productive Interactions and Impact on Practice and Society.