Ross Mounce lays out easy steps towards open scholarship | Impact of Social Sciences

Excellent post with lots of good information here:

Easy steps towards open scholarship | Impact of Social Sciences.

There are some especially good thoughts about preprints.

Ross is right, I think, that using preprints is uncommon in the Humanities. For anyone interested in exploring the idea, I recommend the Social Epistemology Review and Reply Collective. Aside from being one of the few places to publish preprints in the Humanities, the SERRC preprints section also allows for extended responses to posted preprints, such as this one. The one major drawback (as Ross points out about sites such as Academia.edu) is that the SERRC doesn’t really archive preprints in the way that, say, a library would. Of course, if you happen to have an institutional repository, you can use that, as well.

Another site worth mentioning in this context is peerevaluation.org. I posted the same preprint on my page there. Two features of the site stand out. The first is its metrics, such as the ‘trust’ function. Similar to Facebook ‘likes’, but much richer, the ‘trust’ function allows users to build a visible reputation as a ‘trusted’ reviewer. What’s that, you ask? As a reviewer? Yes, and that is the second feature: peerevaluation.org allows one to request reviews of posted papers. It also keeps track of who reviewed what. In theory, this could allow for something like ‘bottom-up’ peer review by genuine peers. One drawback of peerevaluation.org is that not enough people actually participate as reviewers. I encourage you to visit the site and serve as a reviewer to explore the possibilities.

As a humanist who would like to take advantage of preprints, both to improve my own work and for the citation advantage Ross notes, it’s difficult not to envy the situation in Physics and related areas (with arXiv). But how does such a tradition start? There are places one can use to publish preprints in the Humanities. We need to start using them.

On British higher education’s Hayek appreciation club | Stian Westlake | Science | guardian.co.uk

British higher education’s Hayek appreciation club | Stian Westlake | Science | guardian.co.uk.

I think Stian Westlake is on to something here, though I think the explanation goes deeper than British academics’ secret memberships in the Hayek Appreciation Club (HAC).

Before any academic is considered for membership in the HAC, she must first become a super secret member of the super secret Humboldt Alliance (SSHA, or just HA for short). It was Humboldt, after all, who argued not only that research and teaching should be integrated in the person of the professor (a claim I support), but also that the university must be autonomous from the state (a claim I question, to a degree).

Underlying Humboldt’s demand for autonomy is a view Isaiah Berlin termed negative liberty. Briefly, negative liberty entails freedom from constraint or interference. Positive liberty, on the other hand, allows some interference insofar as such interference may actually allow us to exercise our freedom on our own terms. For those who espouse negative liberty — including not only Humboldt and Hayek, but also Popper (mentored by Hayek) and Berlin himself — autonomy means laissez faire. For those who espouse positive liberty — including Kant, Hegel, and Marx — autonomy means self-determination.

Humboldt also held the view that the state will actually benefit more if it leaves the university alone than if it attempts to direct the course of research in any way. I discuss similarities with Vannevar Bush, the father of US science policy, here. But the same argument gets recycled every time any policy maker suggests any interest in the affairs of the university.

Before there was a Hayek Appreciation Club, Hayek was a member of the Super Secret Humboldt Alliance. I’m pretty sure that Humboldt was also a member of the Super Double Secret Lovers of Aristotle Foundation (LAF), but that’s difficult to prove. Nevertheless, it was Aristotle who laid the foundation for Humboldt in arguing that what is done for its own sake is higher than what is done for the sake of something else. Aristotle also thought that the life of contemplation (a.k.a. philosophy) was better for the philosopher than any other life. But he didn’t take it as far as Humboldt and argue that it was also better for society.

To me, there’s a relation to this post on the CSID blog, as well.

Quick thoughts on Challenges of Measuring Social Impact Using Altmetrics

As altmetric data can detect non-scholarly, non-traditional modes of research consumption, it seems likely that parties interested in social impact assessment via social reach may well start to develop altmetric-based analyses, to complement the existing approaches of case histories, and bibliometric analysis of citations within patent claims and published guidelines.

This and other claims worth discussing appear in this hot-off-the-presses (do we need another metaphor now?) article from Mike Taylor (@herrison):

The Challenges of Measuring Social Impact Using Altmetrics – Research Trends.

In response to the quote above, my own proposal would be to incorporate altmetrics into an overall narrative of impact. In other words, rather than something like a ‘separate’ altmetric report, I’d prefer a way of appealing to altmetrics as one form of empirical evidence to back up claims of impact.
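To make that concrete, here is a minimal sketch (in Python) of the kind of empirical evidence I have in mind. It queries Altmetric’s free v1 API for a DOI’s public mention counts; the endpoint and field names reflect that API as I understand it, and the DOI is a placeholder, so treat the details as assumptions rather than a finished tool.

    import json
    import urllib.error
    import urllib.parse
    import urllib.request

    def fetch_altmetric_record(doi):
        """Fetch the public Altmetric record for a DOI (None if no record)."""
        url = "https://api.altmetric.com/v1/doi/" + urllib.parse.quote(doi)
        try:
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:  # Altmetric has seen no mentions of this DOI
                return None
            raise

    def evidence_sentences(record):
        """Turn selected counts into sentences one could cite in a narrative."""
        if record is None:
            return ["No altmetric record found; absence of reach is evidence too."]
        labels = {
            "cited_by_posts_count": "public posts mention the work",
            "cited_by_tweeters_count": "distinct Twitter accounts shared it",
            "cited_by_feeds_count": "blogs discussed it",
        }
        return ["%d %s" % (record.get(key, 0), text) for key, text in labels.items()]

    if __name__ == "__main__":
        record = fetch_altmetric_record("10.1234/placeholder")  # placeholder DOI
        for sentence in evidence_sentences(record):
            print(sentence)

The point of the sketch is the last step: the counts come out as sentences a human weaves into a narrative, not as a score that settles anything.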

Although it is tempting to equate social reach (i.e., getting research into the hands of the public) with social impact, it is not the same as measuring social impact. At the moment, altmetrics provides us with a way of detecting when research is being passed on down the information chains – to be specific, altmetrics detects sharing, or propagation events. However, even though altmetrics offers us a much wider view of how scholarly research is being accessed and discussed than bibliometrics, at the moment the discipline lacks an approach towards understanding the wider context necessary to understand both the social reach and impact of scholarly work.

Good point about the difference between ‘social reach’ and ‘social impact’. My suggestion for developing an approach to understanding the link between social reach and social impact would be something like this: social reach provides evidence of a sort of interaction. What’s needed to demonstrate social impact, however, is evidence of behavior change. Even if one cannot establish a direct causal relation between sharing and behavior change, demonstrating that one’s research ‘reached’ someone who then changed her behavior in ways consistent with what one’s paper says would generate a plausible narrative of impact.
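As an illustration of that suggestion (the structure below is my own sketch, not an existing altmetrics feature), one could record each reach event alongside any independently documented behavior change, and let the pairing itself generate the narrative:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReachEvent:
        """A detected propagation event: someone shared or accessed the work."""
        who: str      # e.g., "a policy analyst's blog" (hypothetical)
        channel: str  # e.g., "tweet", "blog post", "download"
        date: str

    @dataclass
    class BehaviorChange:
        """An independently documented change consistent with the paper's claims."""
        description: str  # e.g., "the agency revised its review guidelines"
        source: str       # where the change is documented

    @dataclass
    class ImpactClaim:
        reach: ReachEvent
        change: Optional[BehaviorChange] = None  # reach alone is not impact

        def narrative(self) -> str:
            base = (f"{self.reach.who} engaged with the work via "
                    f"{self.reach.channel} on {self.reach.date}")
            if self.change is None:
                return base + " (reach only; no impact demonstrated)."
            return (base + f"; subsequently, {self.change.description} "
                    f"(see {self.change.source}).")

Even here, of course, the causal link is only plausible, never proven; the structure just keeps the two kinds of evidence honestly separated.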


Although altmetrics has the potential to be a valuable element in calculating social reach – with the hope this would provide insights into understanding social impact – there are a number of essential steps that are necessary to place this work on the same standing as bibliometrics and other forms of assessment.

My response to this may be predictable, but here goes anyway. I am all for improving the technology. Using Natural Language Processing, as Taylor suggests a bit later, sounds promising. But I think there’s a fundamental problem with comparing altmetrics to bibliometrics and trying to bring the former up to the standards of rigor of the latter. The problem is that this view privileges technology and technical rigor over judgment. Look, let’s make altmetrics as rigorous as we can. But please, let’s not make the mistake of thinking we’ve got the question of impact resolved once altmetrics have achieved the same sort of methodological rigor as bibliometrics! The question of impact can be answered better with help from technology. But to assume that technology can answer the question on its own (as if it existed independently of human beings, or we from it) is to fall into the trap of the technological fix.
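To illustrate what I mean by technology helping rather than answering (a toy heuristic of my own, not Taylor’s proposal), NLP-style processing might flag mentions whose wording suggests engagement rather than bare link-sharing, leaving the judgment about impact to a human reader:

    # Toy triage: separate mentions that merely pass a link along from
    # mentions whose wording hints at engagement. The cue phrases are
    # illustrative; real NLP, as Taylor suggests, would be far richer.
    ENGAGEMENT_CUES = ("because", "disagree", "convinced", "changed my",
                       "we should", "argues that", "in practice")

    def looks_engaged(mention: str) -> bool:
        text = mention.lower()
        return any(cue in text for cue in ENGAGEMENT_CUES)

    mentions = [  # hypothetical mention texts
        "Great paper, sharing: http://example.org/paper",
        "This convinced me we should rethink how we report negative results.",
    ]

    for m in mentions:
        tag = "engaged" if looks_engaged(m) else "shared"
        print(f"[{tag}] {m}")  # a human still decides what counts as impact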

On Learning from Peer Review | Extending ‘Peers’ to Include non-Academics

Absent from the many analyses and discussions of scientific peer review are two intangible but very important byproducts: 1) feedback to the applicant and 2) exposure of the reviewers to new hypotheses, techniques, and approaches. Both of these phenomena have a virtual mentoring effect that helps move science forward. Such learning can occur as a consequence of both manuscript review and grant application review, but the review of grant applications, by its very nature, is more iterative and impacts the direction in which research moves very early in the investigation.

Opinion: Learning from Peer Review | The Scientist Magazine®.

There are at least two funding agencies that recognize this phenomenon in the actual design of their peer review processes, so they deserve mention. The idea is to include non-academics as peer reviewers precisely to effect the sort of co-production of knowledge the article above suggests.

The first is STW, the Dutch Technology Foundation. I outline their peer review process in this article, available Open Access.

The second is the US Congressionally Directed Medical Research Program. Details of their peer review process are available on their website here.

Reinventing the Wheel, Again

What happens when someone who reads Nietzsche also reads science and technology policy documents? Click on the link to see one answer.

Negative Results – Not Just For Journal Articles?

David Bruggeman offers another twist on turning negatives into positives here. I’d like to add that it’s part of an ethos of not being afraid to make mistakes, even of valuing them. Some might refer to this as an entrepreneurial attitude.

Pasco Phronesis

There is a strong positive bias in how scientific knowledge is generated, written about, and measured.  It is easier to find research proving a hypothesis than replication studies that fail to confirm earlier findings.  It is easier to access explanations of why certain technologies came to be than studies about why we don’t have flying cars, or some other breakthrough promised to us through the magnificence of science and technology.  It’s an enormous hole in our understanding of the world, facilitated by the mores of the scientific reward system.

The same is true for metrics.  While the number of ways one can assess the impact of a particular paper is changing, many of the ‘alt’ metrics emerging are still thinking primarily in positive terms.  At least that’s the proposition of J. Britt Holbrook and some of his colleagues at the University of North Texas.  In a letter to Nature

View original post (185 more words)

San Francisco Declaration on Research Assessment — Well done, DORA

Anyone interested in research assessment should read this with care.

DORA.

It’s been presented in the media as an insurrection against the use of the Journal Impact Factor — and the Declaration certainly does … er … declare that the JIF shouldn’t be used to assess individual researchers or individual research articles. But this soundbite shouldn’t be used to characterize the totality of DORA, which is much broader than that.

Honestly, it took me a few days to go read it. After all, it’s uncontroversial in my mind that the JIF shouldn’t be used in this way. So, an insurrection against it didn’t strike me as all that interesting. I’m all for the use of altmetrics and — obviously, given our recent Nature correspondence (free to read here) — other inventive ways to tell the story of our impact.

But, and I cannot stress this enough, everyone should give DORA a careful read. I’m against jumping uncritically on the bandwagon in favor of Openness in all its forms. But I could find little reason not to sign, and myriad reasons to do so.

Well done, DORA.

‘Pure hype of pure research helps no one’ says Sarewitz; what this says about freedom

I tend to agree with a lot of what Dan Sarewitz argues here:

Pure hype of pure research helps no one : Nature News & Comment.

But I also want to suggest that there’s an argument to be made against the High Quality Research Act that goes beyond Sarewitz’s claim that it helps no one.

To be fair, that’s just the headline. Sarewitz also claims something I think is a bit more controversial — that the HQRA is really nothing to get too worried about. Not only does it help no one, but it also doesn’t hurt anyone.

This strikes me as mistaken. I’ll try to articulate why in terms of the distinction between negative and positive freedom I’ve been exploring. Here goes.

First, I agree that the HQRA helps no one.  But it’s not just that the HQRA is redundant — though this is certainly true. It’s also that it doesn’t allow us to do anything more to demonstrate our accountability, as I think the Broader Impacts Criterion does. In other words, it doesn’t increase anyone’s positive freedom.

Second, it actually decreases our negative freedom. By requiring NSF to re-certify what the merit review process already certifies (at least when it’s working as designed), this ‘added layer of accountability’ actually just increases the kind of bureaucratic red tape we should be trying to decrease if we’re interested in an efficient government. This makes about as much sense as the Florida Blue Ribbon Task Force’s suggestion to charge more for classes in majors that supposedly won’t result in better jobs for graduates. Majors that result in higher paying jobs actually should be in greater demand, and so should cost more, not less. But not according to the Blue Ribbon Task Force (see pp. 22-23).

Finally, I think the HQRA might be a case study in how to reconcile notions of positive and negative freedom — or at least how to think of both ideas of liberty as possibly working together. It’s sort of a test. Sometimes, a policy that might increase our positive freedom can be seen as decreasing our negative freedom. I think the NSF’s Broader Impacts Criterion is a case in point. Yes, it places an additional burden on researchers, and so in this sense it is a limitation on their negative freedom. But it also increases their positive freedom, so that a trade-off is possible. The HQRA, on the other hand, decreases our negative freedom without also increasing our positive freedom. Any policy that doesn’t increase either our positive or our negative freedom is highly questionable — or so I am suggesting.

Will The High Quality Research Act Diminish Our Collective Cognitive Dissonance?

I plan to follow up on David’s post and Dan’s argument in Nature soon. Until then, enjoy this!

Pasco Phronesis

The High Quality Research Act is a draft bill from Representative Lamar Smith, Chair of the House Science, Space and Technology Committee.  Still not officially introduced, it has prompted a fair amount of teeth gnashing and garment rending over what it might mean.  The bill would require the Director of the National Science Foundation (NSF) to certify that the research it funds would: serve the national interests, be of the highest quality, and is not duplicative of other research projects being funded by the federal government.  The bill would also prompt a study to see how such requirements could be implemented in other federal science agencies.

There’s a lot there to explore, including how the bill fits into recent inquiries about specific research grants made by the National Institutes of Health (NIH) and the NSF.  (One nice place to check on this is the AmericanScience team blog.)

But…

View original post (278 more words)

Altmetrics for the Nature correspondence on negative metrics of impact

Fascinating.

Article details.