Camp Engineering Education AfterNext

This looks like fun!

Tools for Serendipity: SHERPA/RoMEO

I really want to post a pre-print of my recently published article in the Journal of Responsible Innovation: “Designing Responsible Research and Innovation as a tool to encourage serendipity could enhance the broader societal impacts of research.” Here’s a link to the published version. Comparing the pre-print with the final published version would make obvious just how much the latter was improved by peer review and input from the journal editor.

Since I still don’t have an institutional repository at NJIT, I could post it at Humanities Commons. Before I do that, I want to make sure I don’t get sideways with Taylor and Francis. So, the prudent thing to do is to check SHERPA/RoMEO to see what the journal’s policies are. The problem, however, is that SHERPA/RoMEO hasn’t yet ‘graded’ JRI, so it doesn’t tell me what the policies are. This is understandable, since JRI is still a relatively new journal. Searching an older journal from the same publisher, Social Epistemology, tells me that I could post both pre-prints and post-prints of articles I published there (that is, my own version of the article after it went through peer review, but not the publisher’s PDF). So, maybe I could go ahead, assuming that Taylor and Francis policy is consistent across all their journals. Instead, I requested that SHERPA/RoMEO grade JRI.
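
(For the technically inclined: SHERPA/RoMEO also exposes a machine-readable API, so this kind of policy check can be scripted. The sketch below is only illustrative; the endpoint, parameter names, and response field names are my assumptions about the v2 API, the API key is a placeholder, and the service’s own documentation should be the final word.)

```python
# Minimal sketch: look up a journal's self-archiving policy via the
# SHERPA/RoMEO API. Endpoint, parameters, and response field names are
# assumptions to verify against the official documentation; a free API
# key is required.
import json
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
BASE_URL = "https://v2.sherpa.ac.uk/cgi/retrieve"

params = {
    "item-type": "publication",
    "api-key": API_KEY,
    "format": "Json",
    # Filtering by title; an ISSN filter would be more precise if known.
    "filter": json.dumps([["title", "equals", "Journal of Responsible Innovation"]]),
}

resp = requests.get(BASE_URL, params=params, timeout=30)
resp.raise_for_status()

# Walk the (assumed) response structure and print which article versions
# (submitted, accepted, published) may be archived.
for item in resp.json().get("items", []):
    for policy in item.get("publisher_policy", []):
        for oa in policy.get("permitted_oa", []):
            versions = ", ".join(oa.get("article_version", []))
            print("May archive:", versions)
```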

I can wait a while to post the pre-print, and I want to gauge how long it takes to get a grade. I’m also waiting to find out how long it takes for JRI to show up in Scopus (their main ‘about’ page says they are indexed in Scopus, but it hasn’t shown up there yet). I’ve also been told that NJIT is getting bepress soon.

All of these — Humanities Commons, SHERPA/RoMEO, bepress — are tools for serendipity in the sense in which I outline the term in this article. As soon as I can let everyone see it, I will!

Ahead of the Curve // John J. Reilly Center // University of Notre Dame

Ahead Of The Curve: Anticipating Ethical, Legal, and Societal Issues Posed by Emerging Weapons Technologies

April 22-23, 2014

University of Notre Dame

“Ahead of the Curve” will provide a forum to discuss the “action-oriented” chapters of the soon-to-be-released National Academy of Sciences report, “Emerging and Readily Available Technologies and National Security.” The report was commissioned by the Defense Advanced Research Projects Agency (DARPA) in order to begin a discussion about the conduct and applications of research on military technology, as well as their unforeseen and inadvertent consequences. Speakers will include members of the NAS committee that wrote the report, along with distinguished experts on the ethics, law, and social impacts of new weapons technologies and representatives of agencies and organizations that are home to cutting-edge weapons research. Presentations will address the ethical, legal, and societal issues that policy makers, researchers, and industries need to anticipate as new technologies arise, specifically in fields such as robotics, autonomous systems, prosthetics and human enhancement, cyber weapons, information warfare technologies, synthetic biology, and nanotechnology. Our primary goal is to help government agencies, institutions, and researchers grow the expertise necessary for early and continuing engagement with the ethical, legal, and societal implications of new weapons technologies as they are planned and developed. We also aim to generate a broad public audience for the NAS report, this being an area in which public education is necessary, as is elevating the level of factually well-informed public discourse.

via Ahead of the Curve // John J. Reilly Center // University of Notre Dame.

Publishers withdraw more than 120 gibberish papers : Nature News & Comment

Publishers withdraw more than 120 gibberish papers : Nature News & Comment.

Thanks to one of my students — Addison Amiri — for pointing out this piece by @Richvn.

Lethal Autonomous Robots (“Killer Robots”) | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA

Lethal Autonomous Robots ("Killer Robots")

Monday, 18 November 2013 05:00 pm to 07:00 pm EST

Location: 

Global Learning Center (in Tech Square), room 129

WATCH the simultaneously streamed WEBCAST at: 

http://proed.pe.gatech.edu/gtpe/pelive/tech_debate_111813/

Debate and Q&A for both

Lethal Autonomous Robots (LARs) are machines that can decide to kill. Such a technology has the potential to revolutionize modern warfare and more. Understanding LARs is essential to deciding whether their development and possible deployment should be regulated or banned. Are LARs ethical?

via Lethal Autonomous Robots ("Killer Robots") | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA.

‘Big Data’ Is Bunk, Obama Campaign’s Tech Guru Tells University Leaders – Wired Campus – The Chronicle of Higher Education

“The ‘big’ there is purely marketing,” Mr. Reed said. “This is all fear … This is about you buying big expensive servers and whatnot.”

via 'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders – Wired Campus – The Chronicle of Higher Education.

Also funny what he says about his own education ….

What does it mean to prepare for life in ‘Humanity 2.0’?

Francis Remedios has organized a session at the 4S Annual Meeting in which he, David Budtz Pedersen, and I will serve as critics of Steve Fuller’s book Preparing for Life in Humanity 2.0. We’ll be live tweeting as much as possible during the session, using the hashtag #humanity2 for those who want to follow. There is also a more general #4s2013 hashtag that should be interesting to follow for the next few days.

Here are the abstracts for our talks:

Humanity 2.0, Synthetic Biology, and Risk Assessment

Francis Remedios, Social Epistemology Editorial Board member

As a follow-up to Fuller’s Humanity 2.0, which is concerned with the impact of the biosciences and nanosciences on humanity, Preparing for Life in Humanity 2.0 provides a more detailed analysis. The possible futures discussed are the ecological, the biomedical, and the cybernetic. In The Proactionary Imperative, Fuller and Lipinska aver that the proactionary principle, which embraces risk taking, is an essential part of the human condition and should be favored over the precautionary principle, which counsels risk aversion. In terms of policy and ethics, which version of risk assessment should be used for synthetic biology, a branch of biotechnology? With synthetic biology, life is created from inanimate material; it has been dubbed life 2.0. Should one principle be favored over the other?

The Political Epistemology of Humanity 2.0

David Budtz Pedersen, Center for Semiotics, Aarhus University

In this paper I confront Fuller’s conception of Humanity 2.0 with the techno-democratic theories of Fukuyama (2003) and Rawls (1999). What happens to democratic values such as inclusion, rule of law, equality, and fairness in an age of technology-intensive, output-based policymaking? Traditional models of input democracy are based on the moral intuition that the unintended consequences of natural selection are undeserved and call for social redress and compensation. In humanity 2.0, however, these unintended consequences become intended ones as an effect of bioengineering and biomedical intervention. This, I argue, leads to an erosion of the natural-luck paradigm on which standard theories of distributive justice rest. Hence, people can no longer be expected to recognize each other as natural equals. Now compare this claim to Fuller’s idea that the welfare state needs to ensure the collectivization of the burdens and benefits of radical scientific experimentation. Even if this might energize the welfare system and deliver new momentum to the welfare state in an age of demographic change, it is not clear on what basis this political disposition for collectivizing such scientific benefits rests. In short, it seems implausible that the new techno-elites, who have translated the unintended consequences of natural selection into intended ones, will be convinced to distribute the benefits of scientific experiments to the wider society. If the biosubstrate of the political elite is radically different in terms of intelligence, life expectancy, bodily performance, etc. from that of the disabled, it is no longer clear what the basis of redistribution and fairness should be. Hence, I argue that important elements of traditional democracy are still robust and necessary to vouch for the legitimacy of humanity 2.0.
Fuller’s Categorical Imperative: The Will to Proaction

J. Britt Holbrook, Georgia Institute of Technology

Two 19th century philosophers – William James and Friedrich Nietzsche – and one on the border of the 18th and 19th centuries – Immanuel Kant – underlie Fuller’s support for the proactionary imperative as a guide to life in ‘Humanity 2.0’. I make reference to the thought of these thinkers (James’s will to believe, Nietzsche’s will to power, and Kant’s categorical imperative) in my critique of Fuller’s will to proaction. First, I argue that, despite a superficial resemblance, James’s view about the risk of uncertainty does not map well onto the proactionary principle. Second, however, I argue that James’s notion that our epistemological preferences reveal something about our ‘passional nature’ connects with Nietzsche’s idea of the will to power in a way that allows us to diagnose Fuller’s ‘moral entrepreneur’ as revelatory of Fuller’s own ‘categorical imperative’. But my larger critique rests on the connection between Fuller’s thinking and that of Wilhelm von Humboldt. I argue that Fuller accepts not only Humboldt’s ideas about the integration of research and education, but also – and this is the main weakness of Fuller’s position – Humboldt’s lesser recognized thesis about the relation between knowledge and society. Humboldt defends the pursuit of knowledge for its own sake on the grounds that this is necessary to benefit society. I criticize this view and argue that Fuller’s account of the public intellectual as an agent of distributive justice is inadequate to escape the critique of the pursuit of knowledge for its own sake.

Developing Metrics for the Evaluation of Individual Researchers – Should Bibliometricians Be Left to Their Own Devices?

So, I am sorry to have missed most of the Atlanta Conference on Science and Innovation Policy. On the other hand, I wouldn’t trade my involvement with the AAAS Committee on Scientific Freedom and Responsibility for any other academic opportunity. I love the CSFR meetings, and I think we may even be able to make a difference occasionally. I always leave the meetings energized and thinking about what I can do next.

That said, I am really happy to be on my way back to the ATL to participate in the last day of the Atlanta Conference. Ismael Rafols asked me to participate in a roundtable discussion with Cassidy Sugimoto and him (to be chaired by Diana Hicks). Like I’d say ‘no’ to that invitation!

The topic will be the recent discussions among bibliometricians of the development of metrics for individual researchers. That sounds like a great conversation to me! Of course, when I indicated to Ismael that I was basically against the idea of bibliometricians coming up with standards for individual-level metrics, Ismael laughed and said the conversation should be interesting.

I’m not going to present a paper; just some thoughts. But I did start writing on the plane. Here’s what I have so far:

Bibliometrics are increasingly being used in ways that go beyond their design, and bibliometricians are increasingly asking how they should react to such unintended uses of the tools they developed. The issue of unintended consequences – especially of technologies designed with one purpose in mind, but which can be repurposed – is not new, of course. And bibliometricians have been asking questions – ethical questions, but also policy questions – essentially since the field began. If anyone is sensitive to the fact that numbers are not neutral, it is surely the bibliometricians.

This sensitivity to numbers, however, especially when combined with great technical skill and large data sets, can also be a weakness. Bibliometricians are also aware of this phenomenon, though perhaps to a lesser degree than one might like. There are exceptions. The discussion by Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols regarding this “urgent debate in bibliometrics” is one indication of such awareness. Recent sessions at ISSI in Vienna and STI2013 in Berlin, on which Wouters et al. report, are other indicators that the bibliometrics community feels a sense of urgency, especially with regard to the question of measuring the performance of individual researchers.

That such questions are being raised and discussed by bibliometricians is certainly a positive development. One cannot fault bibliometricians for wanting to take responsibility for the unintended consequences of their own inventions. But one – I would prefer to say ‘we’ – cannot allow this responsibility to be assumed only by members of the bibliometrics community.

It’s not so much that I don’t want to blame them for not having thought through possible other uses of their metrics — holding them to what Carl Mitcham calls a duty plus respicere: to take more into account than the purpose for which something was initially designed. It’s that I don’t want to leave it to them to try to fix things. Bibliometricians, after all, are a disciplinary community. They have standards; but I worry they also think their standards ought to be the standards. That’s the same sort of naivety that got us in this mess in the first place.

Look, if you’re going to invent a device someone else can command (deans and provosts with research evaluation metrics are like teenagers driving cars), you ought at least to have thought about how those others might use it in ways you didn’t intend. But since you didn’t, don’t try to come in now with your standards as if you know best.

Bibliometrics are not the province of bibliometricians anymore. They’re part of academe. And we academics need to take ownership of them. We shouldn’t let administrators drive in our neighborhoods without some sort of oversight. We should learn to drive ourselves so we can determine the rules of the road. If the bibliometricians want to help, that’s cool. But I am not going to let the Fordists figure out academe for me.

With the development of individual level bibliometrics, we now have the ability — and the interest — to own our own metrics. What we want to avoid at all costs is having metrics take over our world so that they end up steering us rather than us driving them. We don’t want what’s happened with the car to happen with bibliometrics. What we want is to stop at the level at which bibliometrics of individual researchers maximize the power and creativity of individual researchers. Once we standardize metrics, it makes it that much easier to institutionalize them.
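
For readers who haven’t worked with individual-level metrics, it may help to see how little machinery the best known of them involves. Below is a minimal sketch of the h-index calculation; the citation counts are invented purely for illustration, and a real implementation would of course pull them from a source like Web of Science or Scopus.

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that the researcher has
    at least h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts, purely for illustration.
papers = [42, 18, 11, 9, 6, 5, 3, 1, 0]
print(h_index(papers))  # -> 5
```

That simplicity is precisely what makes such numbers so easy for administrators to pick up and repurpose, which is why the question of who sets the standards matters.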

It’s not metrics themselves that we academics should resist. ‘Impact’ is a great opportunity, if we own it. But by all means, we should resist the institutionalization of standardized metrics. A first step is to resist their standardization.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Andy Stirling on why the precautionary principle matters | Science | guardian.co.uk

SPRU Professor Andy Stirling is beginning a series in The Guardian on the precautionary principle. Stirling’s first article paints an optimistic picture:

Far from the pessimistic caricature, precaution actually celebrates the full depth and potential for human agency in knowledge and innovation. Blinkered risk assessment ignores both positive and negative implications of uncertainty. Though politically inconvenient for some, precaution simply acknowledges this scope and choice. So, while mistaken rhetorical rejections of precaution add further poison to current political tensions around technology, precaution itself offers an antidote – one that is in the best traditions of rationality. By upholding both scientific rigour and democratic accountability under uncertainty, precaution offers a means to help reconcile these increasingly sundered Enlightenment cultures.

via Why the precautionary principle matters | Andy Stirling | Science | guardian.co.uk.

Stirling’s work on the precautionary principle is some of the best out there, and Adam Briggle and I cite him in our working paper on the topic. I look forward to reading the rest of Stirling’s series. Although I’m a critic of the Enlightenment, I don’t reject it wholesale. In fact, I think rational engagement with the thinkers of the Enlightenment — and some of its most interesting heirs, including Stirling and Steve Fuller, who’s a proponent of proaction over precaution — is important. So, stay tuned for more!