Here is the postprint of a forthcoming article (part of a special section) in Journal of Responsible Innovation.
First, I’ve been away from my own blog for far too long. My apologies. Second, no more ‘Press This’?! Ugh. So, here is a LINK to the full program of PPN 2018.
Most of these thoughts were generated during the workshop run by Paul Thompson on day 1 on ‘Evaluating Public Philosophy as Academic Scholarship’. This issue is important for everyone who would like to see public philosophy succeed; but it is vitally important for those of us on the tenure track, since not being able to evaluate public philosophy as academic scholarship often means that it is reduced to a ‘service’ activity. Service, of course, is seen as even less important than teaching, which is often seen as less important than research. This hierarchy may be altered at small liberal arts colleges or others that put special emphasis on teaching. Generally speaking, though, one’s research rules in tenure decisions. I’ve never heard, or even heard of, any advice along the lines of ‘Do more teaching and publish less’ or ‘make sure you get on more committees or peer review more journal manuscripts’. Whereas ‘Just publish more’ is something I hear frequently.
So, it’s vitally important to be able to evaluate public philosophy as academic scholarship.
I want to add that, although many of these ideas were not my own and came from group discussion, I am solely responsible for the way I put them here. I may mess up, but no one else should be blamed for my mistakes. What follows isn’t quite the ‘Survival Guide’ that Michael O’Rourke suggested developing. Instead, it is a list of things I (and perhaps others) would like to see coming from PPN. (This may change what PPN is, of course. Can a network that meets once in a while provide these things?)
- A statement on the value of public philosophy as academic scholarship. [EDIT: The expression of this need came up at the workshop, but no one there mentioned that such a statement already exists HERE from the APA. Thanks to Jonathan Ellis and Kelly Parker for help in finding it! Apologies to APA for my ignorance.]
- A list of scholarly journals that are public philosophy friendly (i.e., where one can submit and publish work that includes public philosophy). The list would need to be curated so that new journals can be added and existing ones removed as they come to fit, or cease to fit, the bill.
- A list of tools for making the case for the value of public philosophy. I have in mind things like altmetrics (see HERE or HERE or HERE), but it could also include building capacity among a set of potential peers who could serve as reviewers for public philosophy scholarship.
- Of course, developing a cohort of peers will mean developing a set of community standards for what counts as good public philosophy. I wouldn’t want that imposed from above (somewhere?) and think this will arise naturally if we are able to foster the development of the community.
- Some sort of infrastructure for networking. It’s supposedly a network, right? Is there anywhere people can post profiles?
- A repository of documents related to promotion and tenure in public philosophy. Katie Plaisance described how she developed a memorandum of understanding detailing the fact that her remarkably collaborative work deserved full credit as research, despite the fact that she works in a field that seems to value sole authorship to the detriment of collaborative research. Katie was awesome and said she would share that document with me. But what if everyone who did smart and cool things like this to help guarantee their ability to do public philosophy had a central repository where all these documents could be posted for everyone to view and use? What if departments that have good criteria for promotion and tenure — criteria that allow for or even encourage public philosophy as scholarship — could post them on such a repository as resources for others?
- Leadership! Developing and maintaining these (and no doubt others I’ve missed) resources will require leadership, and maybe even money.
I’d be interested in thoughts on this list, including things you think should be added to it.
The tracking of the use of research has become central to the measurement of research impact. While historically this tracking has meant using citations to published papers, the results are old, biased, and inaccessible – and stakeholders need current data to make funding decisions. We can do much better. Today’s users of research interact with that research online. This leaves an unprecedented data trail that can provide detailed data on the attention that specific research outputs, institutions, or domains receive.
However, while the promise of real-time information is tantalizing, the collection of this data is outstripping our knowledge of how best to use it, our understanding of its utility across differing research domains, and our ability to address the privacy and confidentiality issues it raises. This is particularly true in the Humanities and Social Sciences, which have historically been underrepresented in the collection of scientific corpora of citations, and which are now underrepresented by the tools and analysis approaches being developed to track the use of, and attention received by, STM research outputs.
We will convene a meeting that combines a discussion of the state of the art in one way in which research impact can be measured – Article Level and Altmetrics – with a critical analysis of current gaps and identification of ways to address them in the context of Humanities and Social Sciences.
We would like to announce that Altmetric has begun tracking mentions of academic articles on the Chinese microblogging site Sina Weibo, and the data will shortly be fully integrated into existing Altmetric tools.
The mentions collated will be visible to users via the Altmetric Explorer, a web-based application that allows users to browse the online mentions of any academic article, and, where appropriately licensed, via the article metrics data on publisher platforms.
Launched in 2009, Sina Weibo has become one of the largest social media sites in China, and is most often likened to Twitter. Integrating this data means that Altmetric users will now be able to see a much more global view of the attention an article has received. Altmetric is currently the only article level metrics provider to offer this data.
I’m looking forward to moderating a panel on day 1 of the AAAS Forum on Science and Technology Policy.
2:00 Current Issues in S&T Policy (Breakout Sessions), Session A: Measuring the Impacts of Science

- What are the policy-relevant challenges, tools, and approaches to measuring the social impact of scientific research?
- How can improved indicators capture change in science, technology, and innovation?
- Are altmetrics the solution to measuring social impacts?

Moderator: J. Britt Holbrook, Visiting Assistant Professor, School of Public Policy, Georgia Institute of Technology; and Member, AAAS Committee on Scientific Freedom and Responsibility

Speakers:
- Kaye Husbands Fealing, Professor, Center for Science, Technology and Environmental Policy, Humphrey School of Public Affairs, University of Minnesota; Senior Study Director, National Academy of Sciences, Committee on National Statistics; and Member, AAAS Committee on Science, Engineering, and Public Policy
- Gil Omenn, Director, Center for Computational Medicine and Bioinformatics, University of Michigan
- Mike Taylor, Research Specialist, Elsevier Labs
Thanks to one of my students — Addison Amiri — for pointing out this piece by @Richvn.
To what degree is quantity being substituted for quality in today’s research assessment exercises? This strikes me as a symptom of the overvaluation of efficiency.
Higgs said he became “an embarrassment to the department when they did research assessment exercises”. A message would go around the department saying: “Please give a list of your recent publications.” Higgs said: “I would send back a statement: ‘None.’”
Thanks to Lance Weihmuller for pointing me to the article.
This is definitely worth a look, whether you’re into the idea of post-publication peer review or not.
Impact Story is one of the two altmetrics tools that allow individual researchers to find out something about the social media buzz surrounding their activities; the other is Altmetric.com. Although other developers exist, I can’t seem to figure out how I, as an individual, can use their tools (I’m looking at you, Plum Analytics).
There are a few major differences between Impact Story and Altmetric.com from a user standpoint. First, Impact Story is not for profit, while Altmetric.com is a business. Second, Impact Story steers one to create a collection of products that together tell a story of one’s impact. Altmetric.com, on the other hand, steers one to generate figures for the impact of individual products. Third, Impact Story allows for a range of products, including those tagged with URLs as well as DOIs; Altmetric.com only works with DOIs. This means that Impact Story can gather info on things like blog posts, while Altmetric.com is focused on scholarly articles. Finally, and this is a big difference, Impact Story deemphasizes numbers, while Altmetric.com assigns a number, the Altmetric score, to each product.
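Because Altmetric.com keys everything to DOIs, its data is also straightforward to pull programmatically. Here is a minimal Python sketch, assuming Altmetric’s free public v1 API (the `api.altmetric.com/v1/doi/<DOI>` endpoint and its `score` field are taken from Altmetric’s public documentation, not from anything above); treat it as illustrative rather than official.

```python
import json
import urllib.error
import urllib.request


def altmetric_url(doi: str) -> str:
    """Build the (assumed) Altmetric v1 API endpoint for a given DOI."""
    return f"https://api.altmetric.com/v1/doi/{doi}"


def fetch_attention(doi: str):
    """Return the attention record for a DOI as a dict, or None if
    Altmetric has never seen the DOI (the API answers 404 in that case)."""
    try:
        with urllib.request.urlopen(altmetric_url(doi)) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None


# Example usage (requires network access):
# record = fetch_attention("10.1038/480462a")
# if record is not None:
#     print(record["score"])  # the Altmetric attention score
```

Note that a URL-tagged product like a blog post has no DOI to plug into a lookup like this, which is exactly the gap Impact Story fills.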
Here is my latest Impact Story.
It’s interesting to see how Impact Story and Altmetric.com differ, both in their approaches and in terms of what they find on the same products.
I think the implications of these tools are enormous. I’d be interested to hear your thoughts!
Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.
You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.
I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010). “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press: 321–32; and J. Britt Holbrook (2012). “Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997–2011),” in Peer Review, Research Integrity, and the Governance of Science: Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds. Beijing: People’s Publishing House: 328–62.