On Rubrics

Faculty Development

This semester I’m attending a series of Faculty Development Workshops at NJIT designed to assist new faculty with such essentials as teaching, grant writing, publishing, and tenure & promotion.

I’m posting here now in hopes of getting some feedback on a couple of rubrics I developed after attending the second such workshop.

I’m having students give group presentations in my course on Sports, Technology, and Society, and I was searching for ways to help ensure that all members contributed to the group presentation, as well as to differentiate among varying degrees of contribution. Last Tuesday’s workshop focused on assessment, with some treatment of the use of rubrics for both formative and summative assessment. I did a bit more research on my own, and here’s what I’ve come up with.

First, I developed a two-pronged approach. I want to be able to grade the presentation as a whole, as well as each individual’s contribution to that presentation. I decided to make the group presentation grade worth 60% and the individual contribution grade worth 40% of the overall presentation grade.
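The weighting above can be sketched in a few lines of code. This is a minimal illustration, not part of the rubric itself; the function name and the example scores are hypothetical:

```python
def presentation_grade(group_score: float, individual_score: float) -> float:
    """Combine the group presentation score (60%) and the individual
    contribution score (40%), each on a 0-100 scale."""
    return 0.60 * group_score + 0.40 * individual_score

# Hypothetical student: strong group presentation, weaker individual contribution.
print(presentation_grade(90, 75))  # 0.6*90 + 0.4*75 = 84.0
```

One consequence of this split worth noting: a student in a strong group cannot coast entirely, since 40% of the grade still tracks individual contribution.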

Second, I developed the group presentation rubric. For this, I owe a debt to several of the rubrics posted by the Eberly Center at Carnegie Mellon University. I found the rubrics for the philosophy paper and the oral presentation particularly helpful. I am thinking about using this rubric both for formative evaluation (to show the students what I expect), as well as for summative evaluation (actually grading the presentations).

Third, I developed the individual peer assessment rubric. I would actually have the students anonymously fill out one of these for each of their fellow group members. For this rubric, I found a publication from the University of New South Wales to be quite helpful (especially Table 2).

I’d be quite interested in constructive feedback on this approach.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy

The tracking of the use of research has become central to the measurement of research impact. While historically this tracking has meant using citations to published papers, the results are old, biased, and inaccessible – and stakeholders need current data to make funding decisions. We can do much better. Today’s users of research interact with that research online. This leaves an unprecedented data trail that can provide detailed data on the attention that specific research outputs, institutions, or domains receive.

However, while the promise of real-time information is tantalizing, the collection of this data is outstripping our knowledge of how best to use it, our understanding of its utility across differing research domains, and our ability to address the privacy and confidentiality issues. This is particularly true in the Humanities and Social Sciences, which have historically been underrepresented in the collection of scientific corpora of citations, and which are now underrepresented by the tools and analysis approaches being developed to track the use and attention received by STM research outputs.

We will convene a meeting that combines a discussion of the state of the art in one way in which research impact can be measured – Article Level and Altmetrics – with a critical analysis of current gaps and identification of ways to address them in the context of Humanities and Social Sciences.

Modernising Research Monitoring in Europe | Center for the Science of Science & Innovation Policy.

Reflections on the 2014 Carey Lecture at the AAAS Forum on S&T Policy

Cherry A. Murray delivered the Carey Lecture last night at this year’s AAAS Forum on S&T Policy. I want to address one aspect of her talk here — the question of transdisciplinarity (TD, which I will also use for the adjective ‘transdisciplinary’) and its necessity to address the ‘big’ questions facing us.

As far as I could tell, Murray was working with her own definitions of disciplinary (D), multidisciplinary (MD), interdisciplinary (ID), and TD. In brief, according to Murray, D refers to single-discipline approaches to a problem, ID refers to two disciplines working together on the same problem, MD refers to more than two disciplines focused on the same problem from their own disciplinary perspectives, and TD refers to more than two disciplines working together on the same problem. Murray also used the term cross-disciplinary, which she did not define (to my recollection).

All these definitions are cogent. But do we really need a different term for two disciplines working on a problem together (ID) and more than two disciplines working on a problem together (TD)? Wouldn’t it be simpler just to use ID for more than one discipline?

I grant that there is no universally agreed-upon definition of these terms (D, MD, ID, and TD). But basically no one who writes about these issues uses the definitions Murray proposed. And there is something like a rough consensus on what these terms mean, despite the lack of universal agreement. I discuss this consensus, and what these definitions mean for the issue of communication (and, by extension, cooperation) between and among disciplines, here: 10.1007/s11229-012-0179-7

I tend to agree that TD is a better approach to solving complex problems. But in saying this, I mean more than involving more than two disciplines. I mean involving non-academic, and hence non-disciplinary, actors in the process. It’s actually closer to the sort of design thinking that Bob Schwartz discussed in the second Science + Art session yesterday afternoon.

One might ask whether this discussion of terms is a distraction from Murray’s main point — that we need to think about solutions to the ‘big problems’ we face. I concede the point. But that is all the more reason to get our terms right, or at least to co-construct a new language for talking about what sort of cooperation is needed. There is a literature out there on ID/TD, and Murray failed to engage it. To point out that failure is not to make a disciplinary criticism of Murray (as if there might be a discipline of ID/TD, a topic I discuss here). It is to suggest, however, that inventing new terms on one’s own is not conducive to the sort of communication necessary to tackle the ‘big’ questions.

Altmetric.com Tracking Mentions On Sina Weibo | STM Publishing

Wow.

We would like to announce that Altmetric have begun tracking mentions of academic articles on Chinese microblogging site Sina Weibo, and the data will shortly be fully integrated into existing Altmetric tools.

The mentions collated will be visible to users via the Altmetric Explorer, a web-based application that allows users to browse the online mentions of any academic article, and, where appropriately licensed, via the article metrics data on publisher platforms.

Launched in 2009, Sina Weibo has become one of the largest social media sites in China, and is most often likened to Twitter. Integrating this data means that Altmetric users will now be able to see a much more global view of the attention an article has received. Altmetric is currently the only article level metrics provider to offer this data.

via Altmetric Begin Tracking Mentions Of Articles On Sina Weibo | STM Publishing.

Measuring the Impacts of Science | AAAS Forum on Science and Technology Policy

I’m looking forward to moderating a panel on day 1 of the AAAS Forum on Science and Technology Policy.

2:00 Current Issues in S&T Policy (Breakout Sessions)

(A) Measuring the Impacts of Science

• What are the policy-relevant challenges, tools, and approaches to measuring the social impact of scientific research?
• How can improved indicators capture change in science, technology, and innovation?
• Are altmetrics the solution to measuring social impacts?

Moderator: J. Britt Holbrook, Visiting Assistant Professor, School of Public Policy, Georgia Institute of Technology; and Member, AAAS Committee on Scientific Freedom and Responsibility

Kaye Husbands Fealing, Professor, Center for Science, Technology and Environmental Policy, Humphrey School of Public Affairs, University of Minnesota; Senior Study Director, National Academy of Sciences, Committee on National Statistics; and Member, AAAS Committee on Science, Engineering, and Public Policy

Gil Omenn, Director, Center for Computational Medicine and Bioinformatics, University of Michigan

Mike Taylor, Research Specialist, Elsevier Labs

Publishers withdraw more than 120 gibberish papers : Nature News & Comment

Publishers withdraw more than 120 gibberish papers : Nature News & Comment.

Thanks to one of my students — Addison Amiri — for pointing out this piece by @Richvn.

How journals like Nature, Cell and Science are damaging science | Randy Schekman | Comment is free | The Guardian

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called “impact factor” – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

via How journals like Nature, Cell and Science are damaging science | Randy Schekman | Comment is free | The Guardian.

Thanks to my colleague Diana Hicks for pointing this out to me.

The last line of the quotation strikes me as the most interesting point, and one that deserves further development. The steering effect of metrics is well known (Weingart 2005), and there is growing resistance to the Journal Impact Factor. Although the comparison between researchers and bankers, persuasive as it is, goes over the top, the last line suggests — at least to me — a better way to critique the reliance on the Journal Impact Factor, as well as other attempts to measure research. It’s a sort of reverse Kant with an Illichian flavor, which I will formulate as a principle here, provided that everyone promises to keep in mind my attitude toward principles.

Here is one formulation of the principle: Measure researchers only in ways that recognize them as autonomous agents, never merely as means to other ends.

Here is another: Never treat measures as ends in themselves.

Once measures, which are instruments to the core, take on a life of their own, we have crossed the line that Illich calls the second watershed. That the Journal Impact Factor has in fact crossed that line is the claim made in the quote, above, though not using Illich’s language. The question we should be asking is how researchers can manage measures, rather than how we can measure researchers in order to manage them.
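To see how instrumental the measure really is, it helps to recall how the score in the quoted passage is computed. The standard two-year Journal Impact Factor for year Y is the number of citations received in Y by items the journal published in Y−1 and Y−2, divided by the number of citable items it published in those two years. A minimal sketch, with hypothetical numbers:

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year Journal Impact Factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 500 citations in 2013 to its 2011-12 output,
# which comprised 200 citable items.
print(impact_factor(500, 200))  # 2.5
```

Note that this is a journal-level average over a highly skewed citation distribution; it was built as an instrument for comparing journals, not as a verdict on any individual paper or researcher — which is precisely why treating it as an end in itself crosses Illich’s second watershed.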
_______________________________________________________

Weingart, P. (2005). Impact of bibliometrics upon the science system: inadvertent consequences? Scientometrics, 62(1), 117–131.