This looks like fun!
Cherry A. Murray delivered the Carey Lecture last night at this year’s AAAS Forum on S&T Policy. I want to address one aspect of her talk here: the question of transdisciplinarity (TD, which I will also use for the adjective ‘transdisciplinary’) and whether it is necessary for addressing the ‘big’ questions facing us.
As far as I could tell, Murray was working with her own definitions of disciplinary (D), multidisciplinary (MD), interdisciplinary (ID), and TD. In brief, according to Murray, D refers to single-discipline approaches to a problem, ID refers to two disciplines working together on the same problem, MD refers to more than two disciplines focused on the same problem from their own disciplinary perspectives, and TD refers to more than two disciplines working together on the same problem. Murray also used the term cross-disciplinary, which she did not define (to my recollection).
All these definitions are cogent. But do we really need a different term for two disciplines working on a problem together (ID) and more than two disciplines working on a problem together (TD)? Wouldn’t it be simpler just to use ID for more than one discipline?
I grant that there is no universally agreed upon definition of these terms (D, MD, ID, and TD). But basically no one who writes about these issues uses the definitions Murray proposed. And there is something like a rough consensus on what these terms mean, despite the lack of universal agreement. I discuss this consensus, and what these definitions mean for the issue of communication (and, by extension, cooperation) between and among disciplines, here: 10.1007/s11229-012-0179-7.
I tend to agree that TD is a better approach to solving complex problems. But by TD I mean more than simply involving more than two disciplines: I mean involving non-academic, and hence non-disciplinary, actors in the process. It’s actually closer to the sort of design thinking that Bob Schwartz discussed in the second Science + Art session yesterday afternoon.
One might ask whether this discussion of terms is a distraction from Murray’s main point — that we need to think about solutions to the ‘big problems’ we face. I concede the point. But that is all the more reason to get our terms right, or at least to co-construct a new language for talking about what sort of cooperation is needed. There is a literature out there on ID/TD, and Murray failed to engage it. To point out that failure is not to make a disciplinary criticism of Murray (as if there might be a discipline of ID/TD, a topic I discuss here). It is to suggest, however, that inventing new terms on one’s own is not conducive to the sort of communication necessary to tackle the ‘big’ questions.
Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.
You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.
I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010). “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press: 321-32; and J. Britt Holbrook (2012). “Re-assessing the science-society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997-2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds. Beijing: People’s Publishing House: 328-62.
So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.
That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!
The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.
I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:
Bibliometrics are increasingly being used in ways that go beyond their design. Bibliometricians, in turn, are increasingly asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since the beginning of the development of bibliometrics. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.
This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are also aware of this phenomenon, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics,” is one indication of such awareness. Recent sessions at ISSI in Vienna and STI2013 in Berlin on which Wouters et al. report are other indicators that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.
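To make “metrics of individual researchers” concrete for readers outside bibliometrics: the best-known example is probably Hirsch’s h-index, which reduces a researcher’s citation record to a single number. Here is a minimal sketch of the calculation (my own illustration, not any official implementation, and of course the debate above is precisely about whether such a number should be standardized at all):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper's citations still meet or exceed its rank
        else:
            break
    return h

# Five papers with 10, 8, 5, 4, and 3 citations: four papers have
# at least 4 citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The simplicity is the point: a number this easy to compute is also easy for a dean or provost to repurpose, which is exactly the worry raised below.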
That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.
It’s not so much that I don’t want to blame them for not having thought through possible other uses of their metrics – holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us in this mess in the first place.
Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.
Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.
With the development of individual-level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world, so that they end up steering us rather than us driving them. We don’t want what’s happened with the car to happen with bibliometrics. What we want is to stop at the level at which metrics of individual researchers maximize those researchers’ power and creativity. Once we standardize metrics, it becomes that much easier to institutionalize them.
It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.
New, Open Access article just published.
Authors: Wolf, Birge; Lindenthal, Thomas; Szerencsits, Manfred; Holbrook, J. Britt; Heß, Jürgen
Source: GAIA – Ecological Perspectives for Science and Society, Volume 22, Number 2, June 2013 , pp. 104-114(11)
Currently, established research evaluation focuses on scientific impact – that is, the impact of research on science itself. We discuss extending research evaluation to cover productive interactions and the impact of research on practice and society. The results are based on interviews with scientists from (organic) agriculture and a review of the literature on broader/social/societal impact assessment and the evaluation of interdisciplinary and transdisciplinary research. There is broad agreement about what activities and impacts of research are relevant for such an evaluation. However, the extension of research evaluation is hampered by a lack of easily usable data. To reduce the effort involved in data collection, the usability of existing documentation procedures (e.g., proposals and reports for research funding) needs to be increased. We propose a structured database for the evaluation of scientists, projects, programmes and institutions, one that will require little additional effort beyond existing reporting requirements.
Keywords: DATA ASSESSMENT; DOCUMENTATION; INTERDISCIPLINARITY; ORGANIC AGRICULTURE; PRACTICE; PRODUCTIVE INTERACTIONS; RESEARCH EVALUATION; SOCIAL/SOCIETAL IMPACT; SUSTAINABILITY; TRANSDISCIPLINARITY
Document Type: Research article
Publication date: 2013-06-01
On the one hand, this post on the VCU website is very cool. It contains some interesting observations and what I think is some good advice for researchers submitting and reviewing NSF proposals.
On the other hand, this post also illustrates how researchers’ broader impacts go unnoticed.
One of my main areas of research is peer review at S&T funding agencies, such as NSF. I especially focus on the incorporation of societal impact criteria, such as NSF’s Broader Impacts Merit Review Criterion. In fact, I published the first scholarly article on broader impacts in 2005. My colleagues at CSID and I have published more than anyone else on this topic. Most of our research was sponsored by NSF.
I don’t just perform research on broader impacts, though. I take the idea that scholarly research should have some impact on the world seriously, and I try to put it into practice. One of the things I try to do is reach out to scientists, engineers, and research development professionals in an effort to help them improve the attention to broader impacts in the proposals they are working to submit to NSF. This past May, for instance, I traveled down to Austin to give a presentation at the National Organization for Research Development Professionals Conference (NORDP 2013). You can see a PDF version of my presentation at figshare.
If you look at the slides, you may recognize a point I made in a previous post today: that ‘intellectual merit’ and ‘broader impact’ are simply different perspectives on research. I made this point at NORDP 2013 as well, as you can see from my slides. Notice how they put the point on the VCU site:
Broader Impacts are just another aspect of their research that needs to be communicated (as opposed to an additional thing that must be “tacked on”).
I couldn’t have said it better myself. Or perhaps I could. Or perhaps I did. At NORDP 2013.
Again, VCU says:
Presenters at both conferences [they refer to something called NCURA, with that hyperlink, and to NORDP, with no hyperlink] have encouraged faculty to take the new and improved criteria seriously, citing that Broader Impacts are designed to answer accountability demands. If Broader Impacts are not carefully communicated so that they are clear to all (even non-scientific types!), a door could be opened for more prescriptive national research priorities in the future—a move that would limit what types of projects can receive federal funding, and would ultimately inhibit basic research.
My point is not to claim ownership over these ideas. If I were worried about intellectual property, I could trademark a broader impacts catch phrase or something. My point is that if researchers don’t get any credit for the broader impacts of their research, they’ll be disinclined to engage in activities that might have broader impacts. I’m happy to share these ideas. How else could I expect to have a broader impact? I’ll continue to share them, even without attribution. That’s part of the code.
To clarify: I’m not mad. In fact, I’m happy to see these ideas on the VCU site (or elsewhere …). But would it kill them to add a hyperlink or two? Or a name? Or something? I’d be really impressed if they added a link to this post.
I’m also claiming this as evidence of the broader impacts of my research. I don’t have to contact any lawyers for that, do I?
UPDATE: Brigitte Pfister, author of the post to which I directed my diatribe above, has responded here. I appreciate that a lot. I also left a comment apologizing for my tone in the above post. It’s awaiting moderation; but I hope it’s accepted as it’s meant: as an apology and as a sign of respect.