This article on Inside Higher Ed is well worth a read for anyone interested in actually incorporating research into teaching (rather than just delivering some content for students to absorb — or not).
Tacit Knowledge and the Student Researcher | Inside Higher Ed.
I just created this site last week, and it looks great: J Britt Holbrook.
I’m also trying to track the tweets on the SciTS conference this week. It seems like Rebel Mouse should do that (a kind of pictorial, automatic storification) — but so far, nothing has shown up on my Rebel Mouse page. Will check back with updates.
Meanwhile, would love to know what you think of the site.
No two snowflakes are alike. No two people are the same.
— Horshack
Snowflakes by Juliancolton2 on flickr
Earlier posts in this series attempted to lay out the ways in which Snowball Metrics present themselves as a totalizing grand narrative of research evaluation. Along with attempting to establish a “recipe” that anyone can follow — or that everyone must follow? — in order to evaluate research, this grand narrative appeals to the fact that it is based on consensus in order to indicate that it is actually fair.
The contrast is between ‘us’ deciding on such a recipe ourselves or having such a recipe imposed on ‘us’ from the outside. ‘We’ decided on the Snowball Metrics recipe based on a consultative method. Everything is on the up and up. Something similar seems to be in the works regarding the use of altmetrics. Personally, I have my doubts about the advisability of standardizing altmetrics.
— But what’s the alternative to using a consultative method to arrive at agreed upon standards for measuring research impact? I mean, it’s either that, or anarchy, or imposition from outside — right?! We don’t want to have standards imposed on us, and we can’t allow anarchy, so ….
Yes, yes, QED. I get it — really, I do. And I don’t have a problem with getting together to talk about things. But must that conversation be methodized? And do we have to reach a consensus?
— Without consensus, it’ll be anarchy!
I don’t think so. I think there’s another alternative we’re not considering. And no, it’s not imposition of standards on us from the ‘outside’ that I’m advocating, either. I think there’s a fourth alternative.
SNOWFLAKE INDICATORS
In contrast to Snowball Metrics, Snowflake Indicators are a delicate combination of science and art (as is cooking, for that matter — something that ought not necessarily involve following a recipe, either! Just a hint for some of those chefs in The Scholarly Kitchen, which sometimes has a tendency to resemble America’s Test Kitchen — a show I watch, along with others, but not so I can simply follow the recipes.). Snowflake Indicators also respect individuality. The point is not to mash the snowflakes together — following the 6-step recipe, of course — to form the perfect snowball. Instead, the point is to let the individual researcher appear as such. In this sense, Snowflake Indicators provide answers to the question of researcher identity. ORCID gets this point, I think.
To say that Snowflake Indicators answer the question of researcher identity is not to suggest that researchers ought to be seen as isolated individuals, however. Who we are is revealed in communication with each other. I really like that Andy Miah’s CV includes a section that lists places in which his work is cited as “an indication of my peer community.” This would count as a Snowflake Indicator.
Altmetrics might also do the trick, depending on how they’re used. Personally, I find it useful to see who is paying attention to what I write or say. The sort of information provided by Altmetric.com at the article level is great. It gives some indication of the buzz surrounding an article, and provides another sort of indicator of one’s peer community. That helps an individual researcher learn more about her audience — something that helps communication, and thus helps a researcher establish her identity. Being able to use ImpactStory.org to craft a narrative of one’s impact — and it’s especially useful not to be tied down to a DOI sometimes — is also incredibly revealing. Used by an individual researcher to craft a narrative of her research, altmetrics also count as Snowflake Indicators.
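To make the article-level idea a bit more concrete, here is a minimal sketch (mine, not anything from Altmetric.com’s or ImpactStory’s documentation) of pulling the attention summary for a single DOI from Altmetric’s free public API. The endpoint and the field names are assumptions based on the v1 DOI route, so treat the whole thing as illustrative rather than authoritative.

```python
# Minimal sketch: fetch article-level attention data for one DOI from the
# public Altmetric API. The endpoint (api.altmetric.com/v1/doi/<doi>) and the
# field names used below are assumptions; check the current API docs.
import json
import urllib.error
import urllib.request


def fetch_attention(doi: str) -> dict:
    """Return Altmetric's summary record for a DOI, or {} if none exists."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no attention recorded for this DOI
            return {}
        raise


if __name__ == "__main__":
    record = fetch_attention("10.1000/example-doi")  # placeholder DOI
    # "score" and "cited_by_posts_count" are assumed field names.
    print(record.get("score"), record.get("cited_by_posts_count"))
```

The point of looking at a record like this, in the spirit of the paragraph above, is less any single number it contains than the picture it gives of who is paying attention — the peer-community question again.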
So, what distinguishes a Snowflake Indicator from a Snowball Metric? It’s tempting to say that it’s the level of measurement. Snowball Metrics are intended for evaluation at a department or university-wide level, or perhaps even at a higher level of aggregation, rather than for the evaluation of individual researchers. Snowflake Indicators, at least in the way I’ve described them above, seem to be aimed at the level of the individual researcher, or even at individual articles. I think there’s something to that, though I also think it might be possible to aggregate Snowflake Indicators in ways that respect idiosyncrasies but that would still allow for meaningful evaluation (more on that in a future post — but for a hint, contrast this advice on making snowballs, where humor and fun make a real difference, with the 6-step process linked above).
But I think that difference in scale misses the really important difference. Where Snowball Metrics aim to make us all comparable, Snowflake Indicators aim to point out the ways in which we are unique — or at least special. Research evaluation, in part, should be about making researchers aware of their own impacts. Research evaluation shouldn’t be punitive, it should be instructive — or at least an opportunity to learn. Research evaluation shouldn’t so much seek to steer research as it should empower researchers to drive their research along the road to impact. Although everyone likes big changes (as long as they’re positive), local impacts should be valued as world-changing, too. Diversity of approaches should also be valued. Any approach to research evaluation that insists we all need to do the same thing is way off track, in my opinion.
I apologize to anyone who was expecting a slick account that lays out the recipe for Snowflake Indicators. I’m not trying to establish rules here. Nor am I insisting that anything goes (there are no rules). If anything, I am engaged in rule-seeking — something as difficult to grasp and hold on to as a snowflake.
Can we talk about this? Or if I suggest standards are a double-edged sword, will no one listen?
“For altmetrics to move out of its current pilot and proof-of-concept phase, the community must begin coalescing around a suite of commonly understood definitions, calculations, and data sharing practices,” states Todd Carpenter, NISO Executive Director. “Organizations and researchers wanting to apply these metrics need to adequately understand them, ensure their consistent application and meaning across the community, and have methods for auditing their accuracy. We must agree on what gets measured, what the criteria are for assessing the quality of the measures, at what granularity these metrics are compiled and analyzed, how long a period the altmetrics should cover, the role of social media in altmetrics, the technical infrastructure necessary to exchange this data, and which new altmetrics will prove most valuable. The creation of altmetrics standards and best practices will facilitate the community trust in altmetrics, which will be a requirement for any broad-based acceptance, and will ensure that these altmetrics can be accurately compared and exchanged across publishers and platforms.”
“Sensible, community-informed, discipline-sensitive standards and practices are essential if altmetrics are to play a serious role in the evaluation of research,” says Joshua M. Greenberg, Director of the Alfred P. Sloan Foundation’s Digital Information Technology program. “With its long history of crafting just such standards, NISO is uniquely positioned to help take altmetrics to the next level.”
The post on Snowflake Indicators is coming …
More on the Journal Impact Factor from Richard Van Noorden.
Since the journal’s publisher, PLoS, is a signatory of DORA, it probably does not mind [the fall of its Journal Impact Factor].
via New record: 66 journals banned for boosting impact factor with self-citations : Nature News Blog.
My earlier post on DORA is also relevant.
This should be read along with Paul Wouters’s post (below). Lots of confusion surrounding the Journal Impact Factor, I think.
With the release of the new Journal Impact Factors, everyone should read this post by Paul Wouters at “The Citation Culture.”
The San Francisco Declaration on Research Assessment (DORA; see our most recent blog post) focuses on the Journal Impact Factor, published in the Web of Science by Thomson Reuters. It is a strong plea to base research assessments of individual researchers, research groups and submitted grant proposals not on journal metrics but on article-based metrics combined with peer review. DORA cites a few scientometric studies to bolster this argument. So what is the evidence we have about the JIF?
In the 1990s, the Norwegian researcher Per Seglen, based at our sister institute, the Institute for Studies in Higher Education and Research (NIFU) in Oslo, and a number of CWTS researchers (in particular Henk Moed and Thed van Leeuwen) developed a systematic critique of the JIF, its validity as well as the way it is calculated (Moed & Van Leeuwen, 1996; Moed & Van Leeuwen, 1995; Seglen, 1997). This line of research…
New, Open Access article just published.
Authors: Wolf, Birge; Lindenthal, Thomas; Szerencsits, Manfred; Holbrook, J. Britt; Heß, Jürgen
Source: GAIA – Ecological Perspectives for Science and Society, Volume 22, Number 2, June 2013 , pp. 104-114(11)
Abstract:
Currently, established research evaluation focuses on scientific impact – that is, the impact of research on science itself. We discuss extending research evaluation to cover productive interactions and the impact of research on practice and society. The results are based on interviews with scientists from (organic) agriculture and a review of the literature on broader/social/societal impact assessment and the evaluation of interdisciplinary and transdisciplinary research. There is broad agreement about what activities and impacts of research are relevant for such an evaluation. However, the extension of research evaluation is hampered by a lack of easily usable data. To reduce the effort involved in data collection, the usability of existing documentation procedures (e.g., proposals and reports for research funding) needs to be increased. We propose a structured database for the evaluation of scientists, projects, programmes and institutions, one that will require little additional effort beyond existing reporting requirements.
Keywords: DATA ASSESSMENT; DOCUMENTATION; INTERDISCIPLINARITY; ORGANIC AGRICULTURE; PRACTICE; PRODUCTIVE INTERACTIONS; RESEARCH EVALUATION; SOCIAL/SOCIETAL IMPACT; SUSTAINABILITY; TRANSDISCIPLINARITY
Document Type: Research article
Publication date: 2013-06-01
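For readers wondering what a “structured database” of this kind might look like in practice, here is a purely illustrative sketch, not taken from the article: it assumes a small relational layout (the table and column names are my own) linking researchers, projects, and documented productive interactions, so that evaluation data can be drawn from reporting that already exists.

```python
# Illustrative only: a toy relational layout for documenting productive
# interactions alongside existing reporting. Table and column names are
# assumptions, not the schema proposed by Wolf et al.
import sqlite3

SCHEMA = """
CREATE TABLE researcher (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    orcid TEXT                      -- optional persistent identifier
);
CREATE TABLE project (
    id      INTEGER PRIMARY KEY,
    title   TEXT NOT NULL,
    funder  TEXT,
    report  TEXT                    -- pointer to an existing proposal/report
);
CREATE TABLE productive_interaction (
    id            INTEGER PRIMARY KEY,
    project_id    INTEGER REFERENCES project(id),
    researcher_id INTEGER REFERENCES researcher(id),
    partner       TEXT,             -- e.g. a farm, agency, or firm
    activity      TEXT,             -- e.g. advisory work, field trial, workshop
    documented_in TEXT              -- which existing document already records it
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute("INSERT INTO researcher (name) VALUES (?)", ("Example Researcher",))
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM researcher").fetchone()[0])
```

The design point, consistent with the abstract, is that each recorded interaction points back to a document that already had to be written, rather than demanding a new layer of reporting.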