This should be read along with Paul Wouters’s post (below). Lots of confusion surrounding the Journal Impact Factor, I think.
Category Archives: Autonomy and Accountability
The evidence on the Journal Impact Factor
With the release of the new Journal Impact Factors, everyone should read this blog post by Paul Wouters at “The Citation Culture.”
The San Francisco Declaration on Research Assessment (DORA; see our most recent blog post) focuses on the Journal Impact Factor, published in the Web of Science by Thomson Reuters. It is a strong plea to base research assessments of individual researchers, research groups and submitted grant proposals not on journal metrics but on article-based metrics combined with peer review. DORA cites a few scientometric studies to bolster this argument. So what is the evidence we have about the JIF?
In the 1990s, the Norwegian researcher Per Seglen, based at our sister institute, the Institute for Studies in Higher Education and Research (NIFU) in Oslo, and a number of CWTS researchers (in particular Henk Moed and Thed van Leeuwen) developed a systematic critique of the JIF, addressing both its validity and the way it is calculated (Moed & Van Leeuwen, 1996; Moed & Van Leeuwen, 1995; Seglen, 1997). This line of research…
View original post 1,366 more words
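For readers who haven’t looked under the hood: the standard two-year JIF that these critiques target is just a ratio of citations to citable items. A minimal sketch in Python, with hypothetical numbers rather than real journal data:

```python
# Sketch of the standard two-year Journal Impact Factor (JIF).
# The year-Y JIF of a journal is the number of year-Y citations to
# items the journal published in years Y-1 and Y-2, divided by the
# number of "citable items" (articles and reviews) from those years.

def journal_impact_factor(citations_to_prior_two_years: int,
                          citable_items_prior_two_years: int) -> float:
    """Two-year JIF: a plain ratio, i.e. a mean citation rate."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 420 citations in 2013 to its 2011-2012 items,
# and 150 citable items published across 2011 and 2012.
print(journal_impact_factor(420, 150))  # 2.8
```

Part of the classic critique (Seglen’s in particular) is visible even in this sketch: the JIF is a single mean computed over a highly skewed citation distribution, so it says very little about any individual article in the journal.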
Other infrequently asked questions about impact
Here are some other infrequently asked questions about impact that didn’t make it into the final cut of my piece at the LSE Impact of Social Sciences Blog.
Why conflate impact with benefit?
Put differently, why assume that all impacts are positive or benefits to society? Obviously, no one wants publicly supported research not to benefit the public. It’s even less palatable to consider that some publicly supported research may actually harm the public. But it’s wishful thinking to assume that all impacts are beneficial. Some impacts that initially appear beneficial may have negative consequences. And seemingly negative indicators might actually show that one is having an impact – even a positive one. I discuss this point with reference to Jeffrey Beall, recently threatened with a $1 billion lawsuit, here.
The question of impact is an opportunity to discuss such issues, rather than retreating into the shelter of imagined value-neutrality or objectivity. It was to spark this discussion that we generated a CSID-specific list – it is purposely idiosyncratic.
How can we maximize our impact?
I grant that ‘How can we maximize our impact?’ is a logistical question; but it incorporates a healthy dose of logos. Asking how to maximize our impacts should appeal to academics. We may be choosy about the sort of impact we desire and on whom; but no one wants to have minimal impact. We all desire to have as much impact as possible. Or, if we don’t, please get another job and let some of us who do want to make a difference have yours.
Wherefore impact?
For what reason are we concerned with the impact of scholarly communication? It’s the most fundamental question we should be asking and answering. We need to be mindful that whatever metrics we devise will have a steering effect on the course of scholarly communications. If we are going to steer scholarly communications, then we should discuss where we plan to go – and where others might steer us.
Developing indicators of the impact of scholarly communication is a massive technical challenge – but it’s also much simpler than that | Impact of Social Sciences
Postmodern Research Evaluation? | 1 of ?
This will be the first in a series of posts tagged ‘postmodern research evaluation’, a series meant to be critical and normative, expressing my own subjective opinions on the question.
Before I launch into any definitions, take a look at this on ‘Snowball Metrics’. Reading only the first few pages should help orient you to where I am coming from. It’s a place from which I hope to prevent such an approach to metrics from snowballing: a good place, I think, for a snowball fight.
Read the opening pages of the snowball report. If you cannot see this as totalizing — in a very bad way — then we see things very differently. Still, I hope you read on, my friend. Perhaps I still have a chance to prevent the avalanche.
Broader Impacts and Intellectual Merit: Paradigm Shift? | NOT UNTIL YOU CITE US!
On the one hand, this post on the VCU website is very cool. It contains some interesting observations and what I think is some good advice for researchers submitting and reviewing NSF proposals.
Broader Impacts and Intellectual Merit: Paradigm Shift? | CHS Sponsored Programs.
On the other hand, this post also illustrates how researchers’ broader impacts go unnoticed.
One of my main areas of research is peer review at S&T funding agencies, such as NSF. I especially focus on the incorporation of societal impact criteria, such as NSF’s Broader Impacts Merit Review Criterion. In fact, I published the first scholarly article on broader impacts in 2005. My colleagues at CSID and I have published more than anyone else on this topic. Most of our research was sponsored by NSF.
I don’t just perform research on broader impacts, though. I take the idea that scholarly research should have some impact on the world seriously, and I try to put it into practice. One of the things I try to do is reach out to scientists, engineers, and research development professionals in an effort to help them improve the attention to broader impacts in the proposals they are working to submit to NSF. This past May, for instance, I traveled down to Austin to give a presentation at the National Organization for Research Development Professionals Conference (NORDP 2013). You can see a PDF version of my presentation at figshare.
If you look at the slides, you may recognize a point I made in a previous post today: that ‘intellectual merit’ and ‘broader impact’ are simply different perspectives on research. I made this point at NORDP 2013 as well, as you can see from my slides. Notice how they put the point on the VCU site:
Broader Impacts are just another aspect of their research that needs to be communicated (as opposed to an additional thing that must be “tacked on”).
I couldn’t have said it better myself. Or perhaps I could. Or perhaps I did. At NORDP 2013.
Again, VCU says:
Presenters at both conferences [they refer to something called NCURA, with that hyperlink, and to NORDP, with no hyperlink] have encouraged faculty to take the new and improved criteria seriously, citing that Broader Impacts are designed to answer accountability demands. If Broader Impacts are not carefully communicated so that they are clear to all (even non-scientific types!), a door could be opened for more prescriptive national research priorities in the future—a move that would limit what types of projects can receive federal funding, and would ultimately inhibit basic research.
Unless someone else is starting to sound a lot like us, THIS IS OUR MESSAGE!
My point is not to claim ownership over these ideas. If I were worried about intellectual property, I could trademark a broader impacts catch phrase or something. My point is that if researchers don’t get any credit for the broader impacts of their research, they’ll be disinclined to engage in activities that might have broader impacts. I’m happy to share these ideas. How else could I expect to have a broader impact? I’ll continue to share them, even without attribution. That’s part of the code.
To clarify: I’m not mad. In fact, I’m happy to see these ideas on the VCU site (or elsewhere …). But would it kill them to add a hyperlink or two? Or a name? Or something? I’d be really impressed if they added a link to this post.
I’m also claiming this as evidence of the broader impacts of my research. I don’t have to contact any lawyers for that, do I?
UPDATE: BRIGITTE PFISTER, AUTHOR OF THE POST TO WHICH I DIRECTED MY DIATRIBE, ABOVE, HAS RESPONDED HERE. I APPRECIATE THAT A LOT. I ALSO LEFT A COMMENT APOLOGIZING FOR MY TONE IN THE ABOVE POST. IT’S AWAITING MODERATION; BUT I HOPE IT’S ACCEPTED AS IT’S MEANT — AS AN APOLOGY AND AS A SIGN OF RESPECT.
Open Access and Its Enemies
I was thrilled to be invited to participate as a speaker in the University of North Texas Open Access Symposium 2013. It’s ongoing, and it’s being recorded; video of the presentations will be available soon. In the meantime, I’ve posted slides from my presentation on figshare.
I thought I’d add some thoughts here expounding on my presentation a bit and relating it to the presentations given by my fellow panelists. I’m a proponent of open access, for several reasons. I think closed access, that is, encountering a paywall when one goes to download a piece of research one is interested in reading, is unjust as well as inconvenient. The case for this claim can best be made with reference to two main points revolving around the question of intellectual property rights. Generally, in the case of closed-access publications, authors are asked to sign away many, if not all, of their copyrights. Now, authors are free to negotiate terms with publishers, and we are free not to sign away our copyrights; but often the only choice many publishers leave us is simply to take our work and publish it somewhere else.
Many otherwise ‘closed’ publishers will allow authors to retain all their copyrights for a fee (which varies from publisher to publisher); this is known as the ‘author pays’ model of Gold OA (the latter term refers to OA publication in journals, as opposed to publications made OA via some sort of repository, which is known as Green OA). There is probably no better source for learning the terminology surrounding OA than Peter Suber’s website.
There is also the argument that when publicly funded research is published, the public should at least have free (gratis) access to the publication. Some publishers have argued against this on the grounds that they add value to the research by running the peer review process and formatting and archiving the article. They do perform these services, which do cost money (though peer review itself is done for free by academics). So, they argue that simply giving away their labor is unjust. If it is unjust to have the public pay again and unjust to ask publishers to give away the results of their labors, then, many argue, the ‘author pays’ model of OA makes the most sense. This, of course, ignores the fact of the free labor of academics in conducting peer review. (The labor of actually writing articles is arguably covered as part of an author’s base salary.) But even if authors are already paid to write the articles, it doesn’t follow that it’s just to ask them to pay again to have the articles made freely available once they are published.
Publishers, including Sage, are experimenting with different versions of the ‘author pays’ model of Gold OA. Jim Gilden was another member of my panel. He discussed Sage’s foray into OA, some of their innovations (including the interesting idea of having article-level editors who run the peer review process for individual articles, rather than for the journal as a whole), and some of the difficulties they have encountered. Among those difficulties is some sort of prejudice among potential authors — and members of promotion and tenure committees — against OA journals. This surprised me a little, but perhaps it should not have. One of the themes of my own talk is that ‘we’ academics are included among the enemies of open access. Our prejudice against OA publications is one indicator of this fact.
The other member of our panel was Jeffrey Beall, best known for Beall’s List of Predatory Open Access Publishers. Jeffrey talked about his list, including how and why it got started. That story is pretty simple: he started getting spam emails from publishers that didn’t quite feel right; as a cataloger, he did what came naturally and started keeping track; thus, Beall’s List. Things got more complicated after that. Many publishers appearing on Beall’s list are none too happy about it. Some have even threatened to sue Jeffrey — one for the sum of $1 billion! There are other, less publicized, sources of friction Jeffrey has encountered. He’s not too popular with his own university’s external/community relations folks. And he’s subject to a negative portrayal by many advocates of open access, who don’t appreciate the negative attention Beall’s list draws to the open access movement.
Criticism of Beall from publishers on his list is to be expected. In fact, it was serendipitous that I wrapped up the panel and ended my presentation with the slide of CSID’s list of ‘56 indicators of impact’, a list that includes negative indicators, such as provoking lawsuits. Jeffrey serves as a very good example of the sort of thing we are getting at with our list. The most important fact is that he has a narrative to account for why getting sued for $1 billion actually indicates that he’s having an impact. Unless a publisher were worried that Beall’s list would hurt their business, why would they threaten to sue?
Jeffrey and Jim were both excellent panel-mates for another reason. None of the three of us is exactly a full-fledged member of the open access enthusiasts’ club. Beall can’t be included, since his list can be interpreted as portraying not only specific publishers, but also the whole OA movement, in a negative light. Gilden can’t be included, since, well, he works for a for-profit publisher. Those folks tend to be seen as more or less evil by many members of the OA crowd. (It was interesting to me to see the folks at Mendeley trying to, and having to, defend themselves on Twitter after Mendeley was bought by Elsevier, the most evil of all evil publishers.) And I? Well, as I said at the beginning of this post, I am an advocate of open access. But I am not an uncritical advocate, and I argue that a greater critical spirit needs to be embraced by many OA enthusiasts.
This was, in essence, the point of my talk. The text parts are pretty clear, I think. So let me focus here mostly on the images, and especially on the ‘Images of Impact’ slides. First, I explained how I derived my title from Popper’s The Open Society and Its Enemies. This seemed fitting not only because of the play on words, but also because I have come to see much of the struggle surrounding open access in terms of different conceptions of liberty or freedom. Popper’s emphasis on individual liberty was something I wanted to expand on, and I also linked it with Isaiah Berlin’s account of positive and negative conceptions of liberty. I also think Popper has an ambiguous relation to Neoliberalism. Popper was an original member of the Mont Pèlerin thought collective that many credit with the development and dissemination of Neoliberalism.
That Popper’s relation to Neoliberalism is unclear is an important point — and it’s another reason I chose him to introduce my talk. Part of what I wanted to suggest was that much of the open access movement is susceptible to being subsumed under a neoliberal agenda. After all, both use similar vocabularies — references to openness, to crowds, and to efficiency abound in both movements.
I didn’t really dwell on this point for long, though, in deference to the Symposium’s keynote speaker’s views on ‘neoliberalism’. At the same time, I did want at least to reference Neoliberalism as one thing members of the open access movement need to be more aware of. I’m worried there’s something like a dogmatic enthusiasm creeping into the OA crowd. Many of the reactions from within the OA enthusiast club against Jeffrey Beall (or against Mendeley) seem to me to betray an uncritical (and I mean un-self-critical) attitude. Similarly, I think it would be better for OA enthusiasts to examine carefully and to think critically about the OA mandates and policies being considered now. Most, I fear, think only in terms like ‘any movement in the direction of more open access is good’. I just don’t believe that. In fact, I think it’s dangerous to think that way.
Sorry — on to my images of impact. I love altmetrics. I think that’s where you find many of the brightest advocates of open access. I also think the development of altmetrics is one of the areas most fraught with peril. After all, given the penchant of neoliberals for measurement-for-management-for-efficiency that goes by the name of ‘accountability’, it’s not difficult to see how numbers in general, and altmetrics in particular, might be co-opted by someone who wanted to do away with peer review and the protection that provides to the scholarly community. Talk of open, transparent, accountable government sounds great. But come on, folks, let’s please think about what that means. That drones are part of that plan ought to give us all pause. Altmetrics are the drones of the OA movement.
This is by no means to say that altmetrics are bad. I love altmetrics. I have said publicly that I think every journal should employ some form of article level metrics. They’re amazing. But they are also ripe for abuse — by publishers, by governments, and by academic administrators, among others. I just want altmetrics developers to recognize that possibility and to give it careful thought.
The development of altmetrics is not simply a technical issue. Nor are technologies morally or politically neutral. I suggested that we consider altmetrics (and perhaps OA in general) as a sociotechnical imaginary. I think the concept fits well here, especially linked to the idea of OA as a movement that entails an idea of positive freedom. There is a vision of the good associated with OA. Technology is supposed to help us along the road to achieving that good. Government policies are being enacted that may help. But we need to think critically about all of this rather than rushing forward in a burst of enthusiasm.
The great danger of positive freedom is that it can lead to coercion and even totalitarianism. The question is whether we can place a governor on our enthusiasm and limit our pursuit of positive freedom in a way that still allows for autonomy. I refer to Philip Pettit’s notion of non-domination as potentially useful in this context. I also suggest that narrative can play a governing role. I do think we need some sort of localized (not totalizing) metanarrative about the relationship between the university and society (this is what I referred to in my talk in terms of a ‘republic of knowledge’). But narrative must also serve a de-totalizing role in another sense. Narratives should be tied to articles and accompany article-level metrics. We need to put the ‘account’ back into accountability, rather than simply focusing on the idea of counting.
So, to sum it all up: OA is good, but not an unqualified good; altmetrics are great, but they need to be accompanied by narratives. The end.
Nigel Warburton’s negative vision of what philosophy isn’t
Philosopher Nigel Warburton, of Philosophy Bites fame, has just resigned his academic post at the Open University to pursue other opportunities. The Philosopher’s Magazine conducts an extended interview with Warburton here. Much of what he reveals in this interview is both entertaining and, in my opinion, true.
But one aspect of the interview especially caught my attention. After offering several criticisms of academic philosophy today with which I’m in total agreement (in particular the tendency of hiring committees to hire clones of themselves rather than enhancing the diversity of the department), Warburton offers what he seems to view as the ultimate take down of academic philosophy. I quote this section in full, below. If you’ve been paying any attention to this blog or our posts at CSID, you’ll understand why, immediately.
He reserves particular venom for the REF, the Research Excellence Framework, a system of expert review which assesses research undertaken in UK higher education, which is then used to allocate future rounds of funding. A lot of it turns on the importance of research having a social, economic or cultural impact. It’s not exactly the sort of thing that philosophical reflection on, say, the nature of being qua being is likely to have. He leans into my recorder to make sure I get every word:
“One of the most disturbing things about academic philosophy today is the way that so many supposed gadflies and rebels in philosophy have just rolled over in the face of the REF – particularly by going along with the idea of measuring and quantifying impact,” he says, making inverted commas with his fingers, “a technical notion which was constructed for completely different disciplines. I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that? To think that funding in higher education in philosophy is going to be determined partly by people’s creative writing about how they have impact with their work. Just by entering into this you’ve compromised yourself as a philosopher. It’s not the kind of thing that Socrates did or that Hume did or that John Locke did. Locke may have had patrons, but he seemed to write what he thought rather than kowtowing to forces which are pushing on to us a certain vision, a certain view of what philosophical activities should be. Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”
Please tell us how you really feel, Dr. Warburton.
In the US, we are not subject to the REF. But we are subject to many, many managerial requirements, including, if we seek grant funding, the requirement that we account for the impact of our research. We are, of course, ‘free’ to opt out of this sort of requirement simply by not seeking grant funding. Universities in the UK, however, are not ‘free’ to opt out of the REF. So, are the only choices open to ‘real’ philosophers worthy of the name resistance or removing oneself from the university, as Warburton has chosen?
I think not. My colleagues and I recently published an article in which we present a positive vision of academic philosophy today. A key aspect of our position is that the question of impact is itself a philosophical, not merely a technical, problem. Philosophers, in particular, should own impact rather than allowing it to be imposed on us by outside authorities. The question of impact is a case study in whether the sort of account of freedom as non-domination offered by Pettit can be instantiated in a policy context, in addition to being posited in political philosophy.
Being able to see impact as a philosophical question rests on being able to question the idea that the only sort of freedom worth having is freedom from interference. If philosophy matters to more than isolated individuals — even if connected by social media — then we have to realize that any philosophically rich conception of liberty must also include responsibility to others. Our notion of autonomy need not be reduced to the sort of non-interference that can only be guaranteed by separation (of the university from society, as Humboldt advocated, or of the philosopher from the university, as Warburton now suggests). Autonomy must be linked to accountability — and we philosophers should be able to tackle this problem without being called out as non-philosophers by someone who has chosen to opt out of this struggle.
Ross Mounce lays out easy steps towards open scholarship | Impact of Social Sciences
Excellent post with lots of good information here:
Easy steps towards open scholarship | Impact of Social Sciences.
There are some especially good thoughts about preprints.
Ross is right, I think, that using preprints is uncommon in the Humanities. For anyone interested in exploring the idea, I recommend the Social Epistemology Review and Reply Collective. Aside from being one of the few places to publish preprints in the Humanities, the SERRC preprints section also allows for extended responses to posted preprints, such as this one. The one major drawback (as Ross points out about sites such as Academia.edu) is that the SERRC doesn’t really archive preprints in the way that, say, a library would. Of course, if you happen to have an institutional repository, you can use that, as well.
Another site worth mentioning in this context is peerevaluation.org. I posted the same preprint on my page there. There are two interesting features of the peerevaluation.org site. One is that it uses interesting metrics, such as the ‘trust’ function. Similar to Facebook ‘likes’, but much richer, the ‘trust’ function allows users to build a visible reputation as a ‘trusted’ reviewer. What’s that, you ask? As a reviewer? Yes, and this is the second interesting feature of peerevaluation.org. It allows one to request reviews of posted papers. It also keeps track of who reviewed what. In theory, this could allow for something like ‘bottom-up’ peer review by genuine peers. One drawback of peerevaluation.org is that not enough people actually participate as reviewers. I encourage you to visit the site and serve as a reviewer to explore the possibilities.
As a humanist who would like to take advantage of preprints, both to improve my own work and for the citation advantage Ross notes, it’s difficult not to envy the situation in physics and related areas (with arXiv). But how does such a tradition start? There are places one can use to publish preprints in the humanities. We need to start using them.
On British higher education’s Hayek appreciation club | Stian Westlake | Science | guardian.co.uk
British higher education’s Hayek appreciation club | Stian Westlake | Science | guardian.co.uk.
I think Stian Westlake is on to something here, though I think the explanation goes deeper than British academics’ secret memberships in the Hayek Appreciation Club (HAC).
Before any academic is considered for membership in the HAC, she first must become a super secret member of the super secret Humboldt Alliance (SSHA, or just HA for short). It was Humboldt, after all, who argued not only that research and teaching should be integrated in the person of the professor (a claim I support), but also that the university must be autonomous from the state (a claim I question, to a degree).
Underlying Humboldt’s demand for autonomy is a view Isaiah Berlin termed negative liberty. Briefly, negative liberty entails freedom from constraint or interference. Positive liberty, on the other hand, allows some interference insofar as such interference may actually allow us to exercise our freedom on our own terms. For those who espouse negative liberty — including not only Humboldt and Hayek, but also Popper (mentored by Hayek) and Berlin himself — autonomy means laissez faire. For those who espouse positive liberty — including Kant, Hegel, and Marx — autonomy means self-determination.
Humboldt also held the view that the state will actually benefit more if it leaves the university alone than if it attempts to direct the course of research in any way. I discuss similarities with Vannevar Bush, the father of US science policy, here. But the same argument gets recycled every time any policy maker suggests any interest in the affairs of the university.
Before there was a Hayek Appreciation Club, Hayek was a member of the Super Secret Humboldt Alliance. I’m pretty sure that Humboldt was also a member of the Super Double Secret Lovers of Aristotle Foundation (LAF); but that’s difficult to prove. Nevertheless, it was Aristotle who laid the foundation for Humboldt in arguing that what is done for its own sake is higher than what is done for the sake of something else. Aristotle also thought that the life of contemplation (a.k.a. philosophy) was better for the philosopher than any other life. But he didn’t take it as far as Humboldt and argue that it was also better for society.
To me, there’s a relation to this post on the CSID blog, as well.