On Rubrics

Faculty Development

This semester I’m attending a series of Faculty Development Workshops at NJIT designed to assist new faculty with such essentials as teaching, grant writing, publishing, and tenure & promotion.

I’m posting here now in hopes of getting some feedback on a couple of rubrics I developed after attending the second such workshop.

I’m having students give group presentations in my course on Sports, Technology, and Society, and I was searching for ways to help ensure that all members contributed to the group presentation, as well as to differentiate among varying degrees of contribution. Last Tuesday’s workshop focused on assessment, with some treatment of the use of rubrics for both formative and summative assessment. I did a bit more research on my own, and here’s what I’ve come up with.

First, I developed a two-pronged approach. I want to be able to grade the presentation as a whole, as well as each individual’s contribution to that presentation. I decided to make the group presentation grade worth 60% and the individual contribution grade worth 40% of the overall presentation grade.
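To make the weighting concrete, here’s a minimal sketch of the arithmetic in Python (the function and variable names are my own illustration, and it assumes both scores are on a 0–100 scale):

```python
GROUP_WEIGHT = 0.60       # group presentation grade counts for 60%
INDIVIDUAL_WEIGHT = 0.40  # individual contribution grade counts for 40%

def presentation_grade(group_score: float, individual_score: float) -> float:
    """Combine the group and individual scores (each 0-100) into
    the overall presentation grade."""
    return GROUP_WEIGHT * group_score + INDIVIDUAL_WEIGHT * individual_score

# Example: a strong group presentation (90) with a weaker individual
# contribution (70) yields 0.6 * 90 + 0.4 * 70 = 82.
print(presentation_grade(90, 70))  # 82.0
```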

Second, I developed the group presentation rubric. For this, I owe a debt to several of the rubrics posted by the Eberly Center at Carnegie Mellon University. I found the rubrics for the philosophy paper and the oral presentation particularly helpful. I am thinking about using this rubric both for formative evaluation (to show the students what I expect) and for summative evaluation (actually grading the presentations).

Third, I developed the individual peer assessment rubric. I would actually have the students anonymously fill out one of these for each of their fellow group members. For this rubric, I found a publication from the University of New South Wales to be quite helpful (especially Table 2).
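For what it’s worth, here’s one way the anonymous forms could be rolled up into the individual contribution score (again just a sketch; simple averaging is my own assumption, not something the UNSW publication prescribes):

```python
from statistics import mean

def individual_score(peer_ratings: list[float]) -> float:
    """Average the anonymous peer ratings (each on a 0-100 scale)
    that a student received from the other members of their group."""
    return mean(peer_ratings)

# Example: a student in a four-person group is rated by the three others.
print(individual_score([85, 90, 80]))  # 85
```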

I’d be quite interested in constructive feedback on this approach.

Publishers withdraw more than 120 gibberish papers : Nature News & Comment

Publishers withdraw more than 120 gibberish papers : Nature News & Comment.

Thanks to one of my students — Addison Amiri — for pointing out this piece by @Richvn.

What a difference a day makes: How social media is transforming scientific debate (with tweets) · deevybee · Storify

This is definitely worth a look, whether you’re into the idea of post-publication peer review or not.

What a difference a day makes: How social media is transforming scientific debate (with tweets) · deevybee · Storify.

Apparently NSF Grant Applicants Still Allergic To Broader Impacts

Pasco Phronesis

The Consortium of Social Science Associations held its Annual Colloquium on Social And Behavioral Sciences and Public Policy earlier this week.  Amongst the speakers was Acting National Science Foundation (NSF) Director Cora Marrett.* As part of her remarks, she addressed how the Foundation was implementing the Coburn Amendment, which added additional criteria to funding political science research projects through NSF.

The first batch of reviews subject to these new requirements took place in early 2013. In addition to the usual criteria of intellectual merit and broader impacts, the reviewers looked at the ‘most meritorious’ proposals and examined how they contribute to economic development and/or national security. For the reviews scheduled for early 2014, all three ‘criteria’ will be reviewed at once.

Since researchers don’t like to be told what to do, they aren’t happy. But Marrett asserted in her remarks that this additional review will not really affect the…


PLOS Biology: Expert Failure: Re-evaluating Research Assessment

Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.

via PLOS Biology: Expert Failure: Re-evaluating Research Assessment.

You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.

I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010), “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds., Oxford: Oxford University Press, 321–32; and J. Britt Holbrook (2012), “Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997–2011),” in Peer Review, Research Integrity, and the Governance of Science: Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds., Beijing: People’s Publishing House, 328–62.

Funny Stuff — but also Serious — from Michael Eisen on the Science OA Sting

This post really starts off well:

My sting exposed the seedy underside of “subscription-based” scholarly publishing, where some journals routinely lower their standards – in this case by sending the paper to reviewers they knew would be sympathetic – in order to pump up their impact factor and increase subscription revenue. Maybe there are journals out there who do subscription-based publishing right – but my experience should serve as a warning to people thinking about submitting their work to Science and other journals like it. – See more at: http://www.michaeleisen.org/blog/?p=1439

But I question what Eisen suggests is the take-home lesson of the Science sting:

But the real story is that a fair number of journals who actually carried out peer review still accepted the paper, and the lesson people should take home from this story is not that open access is bad, but that peer review is a joke. – See more at: http://www.michaeleisen.org/blog/?p=1439

I think that message is even more dangerous than the claim that open access journals are inherently lower quality than traditional journals.

Blue skies, impacts, and peer review | RT. A Journal on Research Policy and Evaluation

This paper describes the results of a survey regarding the incorporation of societal impacts considerations into the peer review of grant proposals submitted to public science funding bodies. The survey investigated perceptions regarding the use of scientific peers to judge not only the intrinsic scientific value of proposed research, but also its instrumental value to society. Members of the scientific community have expressed – some more stridently than others – resistance to the use of such societal impact considerations. We sought to understand why. Results of the survey suggest that such resistance may be due to a lack of desire rather than a lack of confidence where judging impacts is concerned. In other words, it may be less that scientists feel unable to judge broader societal impacts and more that they are unwilling to do so.

Blue skies, impacts, and peer review | Holbrook | RT. A Journal on Research Policy and Evaluation.