PhyloPic Phryday Photo

Pycnogonida by Roule Jammes (vectorized by T. Michael Keesey)

What Student Researchers Don’t Know

This article on Inside Higher Ed is well worth a read for anyone interested in actually incorporating research into teaching (rather than just delivering some content for students to absorb — or not).

Tacit Knowledge and the Student Researcher | Inside Higher Ed.

Experimenting with RebelMouse

I just created this site last week, and it looks great: J Britt Holbrook.

I’m also trying to track the tweets on the SciTS conference this week. It seems like RebelMouse should do that (a kind of pictorial, automatic storification) — but so far, nothing has shown up on my RebelMouse page. Will check back with updates.

Meanwhile, would love to know what you think of the site.


PhyloPic Phryday Photo

Pongini by Gareth Monger

Snowflake Indicators | Postmodern Research Evaluation | Part 5 of ?

No two snowflakes are alike. No two people are the same.

— Horshack


Snowflakes by Juliancolton2 on flickr

Earlier posts in this series attempted to lay out the ways in which Snowball Metrics present themselves as a totalizing grand narrative of research evaluation. Along with attempting to establish a “recipe” that anyone can follow — or that everyone must follow? — in order to evaluate research, this grand narrative appeals to its basis in consensus to suggest that it is fair.

The contrast is between ‘us’ deciding on such a recipe ourselves and having such a recipe imposed on ‘us’ from the outside. ‘We’ decided on the Snowball Metrics recipe based on a consultative method. Everything is on the up and up. Something similar seems to be in the works regarding the use of altmetrics. Personally, I have my doubts about the advisability of standardizing altmetrics.

— But what’s the alternative to using a consultative method to arrive at agreed-upon standards for measuring research impact? I mean, it’s either that, or anarchy, or imposition from outside — right?! We don’t want to have standards imposed on us, and we can’t allow anarchy, so ….

Yes, yes, QED. I get it — really, I do. And I don’t have a problem with getting together to talk about things. But must that conversation be methodized? And do we have to reach a consensus?

— Without consensus, it’ll be anarchy!

I don’t think so. I think there’s another alternative we’re not considering. And no, it’s not imposition of standards on us from the ‘outside’ that I’m advocating, either. I think there’s a fourth alternative.

SNOWFLAKE INDICATORS

In contrast to Snowball Metrics, Snowflake Indicators are a delicate combination of science and art (as is cooking, for that matter — something that need not involve following a recipe, either! Just a hint for some of those chefs in The Scholarly Kitchen, which sometimes tends to resemble America’s Test Kitchen — a show I watch, along with others, but not so I can simply follow the recipes). Snowflake Indicators also respect individuality. The point is not to mash the snowflakes together — following the 6-step recipe, of course — to form the perfect snowball. Instead, the point is to let the individual researcher appear as such. In this sense, Snowflake Indicators provide answers to the question of researcher identity. ORCID gets this point, I think.

To say that Snowflake Indicators answer the question of researcher identity is not to suggest that researchers ought to be seen as isolated individuals, however. Who we are is revealed in communication with each other. I really like that Andy Miah’s CV includes a section that lists places in which his work is cited as “an indication of my peer community.” This would count as a Snowflake Indicator.

Altmetrics might also do the trick, depending on how they’re used. Personally, I find it useful to see who is paying attention to what I write or say. The sort of information provided by Altmetric.com at the article level is great. It gives some indication of the buzz surrounding an article, and provides another sort of indicator of one’s peer community. That helps an individual researcher learn more about her audience — something that helps communication, and thus helps a researcher establish her identity. Being able to use ImpactStory.org to craft a narrative of one’s impact — and it’s especially useful not to be tied down to a DOI sometimes — is also incredibly revealing. Used by an individual researcher to craft a narrative of her research, altmetrics also count as Snowflake Indicators.
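As a concrete illustration of the kind of article-level information I have in mind, here is a minimal sketch (mine, not anything official from Altmetric.com or ImpactStory.org) of how one might pull attention data for a single article from Altmetric's public v1 API. The endpoint and field names reflect my reading of that API's public documentation and should be treated as assumptions; the DOI is just a placeholder.

```python
# A minimal sketch of pulling article-level attention data for one DOI
# from the Altmetric public v1 API. The endpoint path and field names
# below are assumptions based on the API's public documentation; the
# DOI used in the example is a placeholder, not a real article.
import requests

def article_attention(doi: str) -> dict:
    """Return a small summary of attention data for a DOI, or {} if none found."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code != 200:  # a 404 typically means no tracked activity
        return {}
    data = resp.json()
    return {
        "title": data.get("title"),
        "score": data.get("score"),                       # composite attention score
        "posts": data.get("cited_by_posts_count"),        # total mentions tracked
        "tweeters": data.get("cited_by_tweeters_count"),  # distinct Twitter accounts
    }

if __name__ == "__main__":
    print(article_attention("10.1371/journal.pone.0000000"))  # placeholder DOI
```

Even a summary this small gives a researcher some sense of who is paying attention to a given article, which is exactly the sort of indicator I mean.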

So, what distinguishes a Snowflake Indicator from a Snowball Metric? It’s tempting to say that it’s the level of measurement. Snowball Metrics are intended for evaluation at the department or university level, or perhaps even at a higher level of aggregation, rather than for the evaluation of individual researchers. Snowflake Indicators, at least in the way I’ve described them above, seem to be aimed at the level of the individual researcher, or even at individual articles. I think there’s something to that, though I also think it might be possible to aggregate Snowflake Indicators in ways that respect idiosyncrasies but that would still allow for meaningful evaluation (more on that in a future post — but for a hint, contrast this advice on making snowballs, where humor and fun make a real difference, with the 6-step process linked above).

But I think that difference in scale misses the really important difference. Where Snowball Metrics aim to make us all comparable, Snowflake Indicators aim to point out the ways in which we are unique — or at least special. Research evaluation, in part, should be about making researchers aware of their own impacts. Research evaluation shouldn’t be punitive; it should be instructive — or at least an opportunity to learn. Research evaluation shouldn’t so much seek to steer research as empower researchers to drive their research along the road to impact. Although everyone likes big changes (as long as they’re positive), local impacts should be valued as world-changing, too. Diversity of approaches should also be valued. Any approach to research evaluation that insists we all need to do the same thing is way off track, in my opinion.

I apologize to anyone who was expecting a slick account that lays out the recipe for Snowflake Indicators. I’m not trying to establish rules here. Nor am I insisting that anything goes (there are no rules). If anything, I am engaged in rule-seeking — something as difficult to grasp and hold on to as a snowflake.

NISO to Develop Standards and Recommended Practices for Altmetrics – National Information Standards Organization

Can we talk about this? Or if I suggest standards are a double-edged sword, will no one listen?

“For altmetrics to move out of its current pilot and proof-of-concept phase, the community must begin coalescing around a suite of commonly understood definitions, calculations, and data sharing practices,” states Todd Carpenter, NISO Executive Director. “Organizations and researchers wanting to apply these metrics need to adequately understand them, ensure their consistent application and meaning across the community, and have methods for auditing their accuracy. We must agree on what gets measured, what the criteria are for assessing the quality of the measures, at what granularity these metrics are compiled and analyzed, how long a period the altmetrics should cover, the role of social media in altmetrics, the technical infrastructure necessary to exchange this data, and which new altmetrics will prove most valuable. The creation of altmetrics standards and best practices will facilitate the community trust in altmetrics, which will be a requirement for any broad-based acceptance, and will ensure that these altmetrics can be accurately compared and exchanged across publishers and platforms.”

“Sensible, community-informed, discipline-sensitive standards and practices are essential if altmetrics are to play a serious role in the evaluation of research,” says Joshua M. Greenberg, Director of the Alfred P. Sloan Foundation’s Digital Information Technology program. “With its long history of crafting just such standards, NISO is uniquely positioned to help take altmetrics to the next level.”

NISO to Develop Standards and Recommended Practices for Altmetrics – National Information Standards Organization.

The post on Snowflake Indicators is coming …

New record: 66 journals banned for boosting impact factor with self-citations : Nature News Blog

More on the Journal Impact Factor from Richard Van Noorden.

Since the journal’s publisher, PLoS, is a signatory of DORA, it probably does not mind [the fall of its Journal Impact Factor].

via New record: 66 journals banned for boosting impact factor with self-citations : Nature News Blog.

My earlier post on DORA is also relevant.