Philosophy and Science Policy: A Report from the Field I

I’m actually going to give a series of reports from the field, including a chapter in a book on Field Philosophy that I’m revising now in light of editor/reviewer comments. In the chapter, I discuss our Comparative Assessment of Peer Review project. For a brief account of Field Philosophy, see the preprint of a manuscript I co-authored with Diana Hicks. That’s also being revised now.

Today, however, I will be focusing on more pressing current events having to do with Plan S. So, I will give a talk at the NJIT Department of Humanities Fall Colloquium Series to try to let my colleagues know what I’ve been up to recently. Here are the slides.

As a philosopher, my approach is not simply to offer an objective ‘report’. Instead, I will be offering an argument.

If we want to encourage academic flourishing, then we need new ways of evaluating academic research. We want to encourage academic flourishing. Therefore, we need new ways of evaluating academic research.
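For those who want the form made explicit, this is a simple modus ponens, so its validity is not in question; the work of the talk is defending the premises. A minimal rendering, with F for 'we want to encourage academic flourishing' and E for 'we need new ways of evaluating academic research':

```latex
% Modus ponens form of the argument above:
%   F = we want to encourage academic flourishing
%   E = we need new ways of evaluating academic research
\[
  F \rightarrow E, \qquad F \qquad \therefore \; E
\]
```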

Of course, the argument also refers to my own activities. I want my department to understand and value my forays into the field of science policy. But that will mean revaluing the way I am currently evaluated (which is along fairly standard lines).

Camp Engineering Education AfterNext

This looks like fun!

Reflections on the 2014 Carey Lecture at the AAAS Forum on S&T Policy

Cherry A. Murray delivered the Carey Lecture last night at this year’s AAAS Forum on S&T Policy. I want to address one aspect of her talk here — the question of transdisciplinarity (TD, which I will also use for the adjective ‘transdisciplinary’) and its necessity to address the ‘big’ questions facing us.

As far as I could tell, Murray was working with her own definitions of disciplinary (D), multidisciplinary (MD), interdisciplinary (ID), and TD. In brief, according to Murray:

– D refers to single-discipline approaches to a problem;

– ID refers to two disciplines working together on the same problem;

– MD refers to more than two disciplines focused on the same problem from their own disciplinary perspectives;

– TD refers to more than two disciplines working together on the same problem.

Murray also used the term cross-disciplinary, which she did not define (to my recollection).

All these definitions are cogent. But do we really need a different term for two disciplines working on a problem together (ID) and more than two disciplines working on a problem together (TD)? Wouldn’t it be simpler just to use ID for more than one discipline?

I grant that there is no universally agreed-upon definition of these terms (D, MD, ID, and TD). But basically no one who writes about these issues uses the definitions Murray proposed. And there is something like a rough consensus on what these terms mean, despite the lack of universal agreement. I discuss this consensus, and what these definitions mean for the issue of communication (and, by extension, cooperation) between and among disciplines, here: https://doi.org/10.1007/s11229-012-0179-7

I tend to agree that TD is a better approach to solving complex problems. But in saying this, I mean more than involving more than two disciplines. I mean involving non-academic, and hence non-disciplinary, actors in the process. It’s actually closer to the sort of design thinking that Bob Schwartz discussed in the second Science + Art session yesterday afternoon.

One might ask whether this discussion of terms is a distraction from Murray’s main point — that we need to think about solutions to the ‘big problems’ we face. I concede the point. But that is all the more reason to get our terms right, or at least to co-construct a new language for talking about what sort of cooperation is needed. There is a literature out there on ID/TD, and Murray failed to engage it. To point out that failure is not to make a disciplinary criticism of Murray (as if there might be a discipline of ID/TD, a topic I discuss here). It is to suggest, however, that inventing new terms on one’s own is not conducive to the sort of communication necessary to tackle the ‘big’ questions.

PLOS Biology: Expert Failure: Re-evaluating Research Assessment

Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.

via PLOS Biology: Expert Failure: Re-evaluating Research Assessment.

You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.

I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010). “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press, pp. 321-32; and J. Britt Holbrook (2012). “Re-assessing the science-society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997-2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds. Beijing: People’s Publishing House, pp. 328-62.

Developing Metrics for the Evaluation of Individual Researchers – Should Bibliometricians Be Left to Their Own Devices?

So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.

That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!

The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.

I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:

Bibliometrics are now increasingly being used in ways that go beyond their design. Bibliometricians are now increasingly asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since the beginning of the development of bibliometrics. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.

This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are also aware of this phenomenon, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics,” is one indication of such awareness. Recent sessions at ISSI in Vienna and STI2013 in Berlin on which Wouters et al. report are other indicators that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.

That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.

It’s not so much that I don’t want to blame them for not having thought through possible other uses of their metrics — holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us in this mess in the first place.

Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.

Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.

With the development of individual-level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world so that they end up steering us rather than us driving them. We don’t want what’s happened with the car to happen with bibliometrics. What we want is to stop at the level at which individual-level bibliometrics maximize the power and creativity of individual researchers. Once we standardize metrics, it makes it that much easier to institutionalize them.
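To make 'individual-level metrics' concrete, take the best-known example, Hirsch's h-index: the largest number h such that a researcher has h papers with at least h citations each. Here is a minimal sketch of the computation; the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have at least h citations each (Hirsch, 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for a hypothetical researcher's papers.
print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4
```

Notice how much the single number discards: which papers, which fields, which audiences. That is precisely the sort of detail that gets locked in once a metric is standardized and institutionalized.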

It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Evaluating Research beyond Scientific Impact: How to Include Criteria for Productive Interactions and Impact on Practice and Society

New, Open Access article just published.

Authors: Wolf, Birge; Lindenthal, Thomas; Szerencsits, Manfred; Holbrook, J. Britt; Heß, Jürgen

Source: GAIA – Ecological Perspectives for Science and Society, Volume 22, Number 2, June 2013, pp. 104-114.

Abstract:

Currently, established research evaluation focuses on scientific impact – that is, the impact of research on science itself. We discuss extending research evaluation to cover productive interactions and the impact of research on practice and society. The results are based on interviews with scientists from (organic) agriculture and a review of the literature on broader/social/societal impact assessment and the evaluation of interdisciplinary and transdisciplinary research. There is broad agreement about what activities and impacts of research are relevant for such an evaluation. However, the extension of research evaluation is hampered by a lack of easily usable data. To reduce the effort involved in data collection, the usability of existing documentation procedures (e.g., proposals and reports for research funding) needs to be increased. We propose a structured database for the evaluation of scientists, projects, programmes and institutions, one that will require little additional effort beyond existing reporting requirements.
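Here is a rough sketch of what a single record in such a structured database might capture; the field names are my own guesses for illustration, not the authors' schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationRecord:
    """Hypothetical record documenting research activities beyond
    scientific impact; fields are illustrative, not the authors' schema."""
    project_id: str
    researchers: list
    productive_interactions: list  # e.g., stakeholder workshops, advisory roles
    practice_outputs: list         # e.g., guidelines, decision-support tools
    societal_impacts: list = field(default_factory=list)

# Invented example entry.
record = EvaluationRecord(
    project_id="example-project-01",
    researchers=["A. Researcher"],
    productive_interactions=["workshop with organic farmers"],
    practice_outputs=["crop-rotation planning guideline"],
)
print(record.project_id)
```

The idea, as the abstract suggests, is that most of these fields could be populated from existing proposals and reports rather than from new data collection.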

Peer Evaluation: Evaluating Research beyond Scientific Impact: How to Include Criteria for Productive Interactions and Impact on Practice and Society.

Broader Impacts and Intellectual Merit: Paradigm Shift? | NOT UNTIL YOU CITE US!

On the one hand, this post on the VCU website is very cool.  It contains some interesting observations and what I think is some good advice for researchers submitting and reviewing NSF proposals.

Broader Impacts and Intellectual Merit: Paradigm Shift? | CHS Sponsored Programs.

On the other hand, this post also illustrates how researchers’ broader impacts go unnoticed.

One of my main areas of research is peer review at S&T funding agencies, such as NSF. I especially focus on the incorporation of societal impact criteria, such as NSF’s Broader Impacts Merit Review Criterion. In fact, I published the first scholarly article on broader impacts in 2005. My colleagues at CSID and I have published more than anyone else on this topic. Most of our research was sponsored by NSF.

I don’t just perform research on broader impacts, though. I take the idea that scholarly research should have some impact on the world seriously, and I try to put it into practice. One of the things I try to do is reach out to scientists, engineers, and research development professionals in an effort to help them improve the attention to broader impacts in the proposals they are working to submit to NSF. This past May, for instance, I traveled down to Austin to give a presentation at the National Organization for Research Development Professionals Conference (NORDP 2013). You can see a PDF version of my presentation at figshare.

If you look at the slides, you may recognize a point I made in a previous post earlier today: that ‘intellectual merit’ and ‘broader impact’ are simply different perspectives on research. I made this point at NORDP 2013, as well, as you can see from my slides. Notice how they put the point on the VCU site:

Broader Impacts are just another aspect of their research that needs to be communicated (as opposed to an additional thing that must be “tacked on”).

I couldn’t have said it better myself. Or perhaps I could. Or perhaps I did. At NORDP 2013.

Again, VCU says:

Presenters at both conferences [they refer to something called NCURA, with that hyperlink, and to NORDP, with no hyperlink] have encouraged faculty to take the new and improved criteria seriously, citing that Broader Impacts are designed to answer accountability demands.  If Broader Impacts are not carefully communicated so that they are clear to all (even non-scientific types!), a door could be opened for more prescriptive national research priorities in the future—a move that would limit what types of projects can receive federal funding, and would ultimately inhibit basic research.

Unless someone else is starting to sound a lot like us, THIS IS OUR MESSAGE!

My point is not to claim ownership over these ideas. If I were worried about intellectual property, I could trademark a broader impacts catch phrase or something. My point is that if researchers don’t get any credit for the broader impacts of their research, they’ll be disinclined to engage in activities that might have broader impacts. I’m happy to share these ideas. How else could I expect to have a broader impact? I’ll continue to share them, even without attribution. That’s part of the code.

To clarify: I’m not mad. In fact, I’m happy to see these ideas on the VCU site (or elsewhere …). But would it kill them to add a hyperlink or two? Or a name? Or something? I’d be really impressed if they added a link to this post.

I’m also claiming this as evidence of the broader impacts of my research. I don’t have to contact any lawyers for that, do I?

UPDATE: BRIGITTE PFISTER, AUTHOR OF THE POST TO WHICH I DIRECTED MY DIATRIBE, ABOVE, HAS RESPONDED HERE. I APPRECIATE THAT A LOT. I ALSO LEFT A COMMENT APOLOGIZING FOR MY TONE IN THE ABOVE POST. IT’S AWAITING MODERATION; BUT I HOPE IT’S ACCEPTED AS IT’S MEANT — AS AN APOLOGY AND AS A SIGN OF RESPECT.

Nigel Warburton’s negative vision of what philosophy isn’t

Philosopher Nigel Warburton, of Philosophy Bites fame, has just resigned his academic post at the Open University to pursue other opportunities. The Philosopher’s Magazine conducts an extended interview with Warburton here. Much of what he reveals in this interview is both entertaining and, in my opinion, true.

But one aspect of the interview especially caught my attention. After offering several criticisms of academic philosophy today with which I’m in total agreement (in particular the tendency of hiring committees to hire clones of themselves rather than enhancing the diversity of the department), Warburton offers what he seems to view as the ultimate takedown of academic philosophy. I quote this section in full, below. If you’ve been paying any attention to this blog or our posts at CSID, you’ll understand why, immediately.

He reserves particular venom for the REF, the Research Excellence Framework, a system of expert review which assesses research undertaken in UK higher education, which is then used to allocate future rounds of funding. A lot of it turns on the importance of research having a social, economic or cultural impact. It’s not exactly the sort of thing that philosophical reflection on, say, the nature of being qua being is likely to have. He leans into my recorder to make sure I get every word:

“One of the most disturbing things about academic philosophy today is the way that so many supposed gadflies and rebels in philosophy have just rolled over in the face of the REF – particularly by going along with the idea of measuring and quantifying impact,” he says, making inverted commas with his fingers, “a technical notion which was constructed for completely different disciplines. I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that? To think that funding in higher education in philosophy is going to be determined partly by people’s creative writing about how they have impact with their work. Just by entering into this you’ve compromised yourself as a philosopher. It’s not the kind of thing that Socrates did or that Hume did or that John Locke did. Locke may have had patrons, but he seemed to write what he thought rather than kowtowing to forces which are pushing on to us a certain vision, a certain view of what philosophical activities should be. Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”

Please tell us how you really feel, Dr. Warburton.

In the US, we are not subject to the REF. But we are subject to many, many managerial requirements, including, if we seek grant funding, the requirement that we account for the impact of our research. We are, of course, ‘free’ to opt out of this sort of requirement simply by not seeking grant funding. Universities in the UK, however, are not ‘free’ to opt out of the REF. So, are the only choices open to ‘real’ philosophers worthy of the name either resistance or, as Warburton has chosen, removing oneself from the university?

I think not. My colleagues and I recently published an article in which we present a positive vision of academic philosophy today. A key aspect of our position is that the question of impact is itself a philosophical, not merely a technical, problem. Philosophers, in particular, should own impact rather than allowing impact to be imposed on us by outside authorities. The question of impact is a case study in whether the sort of account of freedom as non-domination offered by Pettit can be instantiated in a policy context, in addition to being posited in political philosophy.

Being able to see impact as a philosophical question rests on being able to question the idea that the only sort of freedom worth having is freedom from interference. If philosophy matters to more than isolated individuals — even if connected by social media — then we have to realize that any philosophically rich conception of liberty must also include responsibility to others. Our notion of autonomy need not be reduced to the sort of non-interference that can only be guaranteed by separation (of the university from society, as Humboldt advocated, or of the philosopher from the university, as Warburton now suggests). Autonomy must be linked to accountability — and we philosophers should be able to tackle this problem without being called out as non-philosophers by someone who has chosen to opt out of this struggle.

A call for the philosopher librarian

This is a reblog of something I originally posted here. Thinking of the philosopher-technologist today brought it to mind.

Librarian Dave Puplett discusses the role of the librarian.

Academics must be applauded for making a stand by boycotting Elsevier. It’s time for librarians to join the conversation on the future of dissemination, but not join the boycott. | Impact of Social Sciences.

Interesting to view the librarian as midwife — very Socratic. At the Center for the Study of Interdisciplinarity (CSID), we’ve discussed the possibility of the philosopher bureaucrat before, along with what constitutes ‘real’ philosophy. What about the philosopher librarian?

A librarian should be well positioned to affect scholarly communication — for instance, she may well be involved with Open Access policies, such as the one we recently adopted at UNT, or be an advocate for them at her institution.

In the latter situation, the librarian will have to convince the university community that an Open Access policy is in the university’s interest. In the former situation, unless the existing policy is mandatory, it will be up to the librarian not only to disseminate information about the policy to the researchers at the institution, but also to make a case that those researchers ought to participate. In other words, the librarian will have to be able to construct an effective argument — the classic skill of the philosopher. Either the librarian will have to become a philosopher, or a philosopher will have to become the librarian.

For our other posts on Open Access, click here.