Where are the senior women in STEM? | Dawn Bazely

So, here’s the thing: I’m a female Biology professor, and when I was an undergraduate (1977-81 UofT), the split between male and female students in my classes was more or less 50:50. This bottom-up input of women into Biology has been happening for decades. So, thirty years on, where are the other female Full Professors? In fact, where are the senior women in government, industry, and even in Biology-related NGOs?

via Where are the senior women in STEM? | Dawn Bazely.

Lethal Autonomous Robots (“Killer Robots”) | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA

Lethal Autonomous Robots (“Killer Robots”)

Monday, 18 November 2013 05:00 pm to 07:00 pm EST

Location: 

Global Learning Center (in Tech Square), room 129

WATCH the simultaneously streamed WEBCAST at: 

http://proed.pe.gatech.edu/gtpe/pelive/tech_debate_111813/

Debate and Q&A for both

Lethal Autonomous Robots (LARs) are machines that can decide to kill. Such a technology has the potential to revolutionize modern warfare and more. The need for understanding LARs is essential to decide whether their development and possible deployment should be regulated or banned. Are LARs ethical?

via Lethal Autonomous Robots ("Killer Robots") | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA.

What does it mean to prepare for life in ‘Humanity 2.0’?

Francis Remedios has organized a session at the 4S Annual Meeting in which he, David Budtz Pedersen, and I will serve as critics of Steve Fuller’s book Preparing for Life in Humanity 2.0. We’ll be live tweeting as much as possible during the session, using the hashtag #humanity2 for those who want to follow. There is also a more general #4s2013 that should be interesting to follow for the next few days.

Here are the abstracts for our talks:

Humanity 2.0, Synthetic Biology, and Risk Assessment

Francis Remedios, Social Epistemology Editorial Board member

As a follow-up to Fuller’s Humanity 2.0, which is concerned with the impact of the biosciences and nanosciences on humanity, Preparing for Life in Humanity 2.0 provides a more detailed analysis. The possible futures discussed are the ecological, the biomedical, and the cybernetic. In The Proactionary Imperative, Fuller and Lipinska aver that the proactionary principle, which embraces risk taking as essential to the human condition, should be favored over the precautionary principle, which counsels risk aversion. In terms of policy and ethics, which version of risk assessment should be used for synthetic biology, a branch of biotechnology? With synthetic biology, life is created from inanimate material; indeed, synthetic biology has been dubbed life 2.0. Should one principle be favored over the other?

The Political Epistemology of Humanity 2.0
David Budtz Pedersen, Center for Semiotics, Aarhus University
In this paper I confront Fuller’s conception of Humanity 2.0 with the techno-democratic theories of Fukuyama (2003) and Rawls (1999). What happens to democratic values such as inclusion, rule of law, equality, and fairness in an age of technology-intensive, output-based policymaking? Traditional models of input democracy are based on the moral intuition that the unintended consequences of natural selection are undeserved and call for social redress and compensation. However, in Humanity 2.0 these unintended consequences are turned into intended ones as an effect of bioengineering and biomedical intervention. This, I argue, leads to an erosion of the natural luck paradigm on which standard theories of distributive justice rest. Hence, people can no longer be expected to recognize each other as natural equals. Now compare this claim to Fuller’s idea that the welfare state needs to ensure the collectivization of the burdens and benefits of radical scientific experimentation. Even if this might energize the welfare system and deliver a new momentum to the welfare state in an age of demographic change, it is not clear on what basis this political disposition for collectivizing such scientific benefits rests. In short, it seems implausible that the new techno-elites, which have translated the unintended consequences of natural selection into intended ones, will be convinced to distribute the benefits of scientific experiments to the wider society. If the biosubstrate of the political elite is radically different, in terms of intelligence, life expectancy, bodily performance, etc., from that of the disabled, it is no longer clear what the basis of redistribution and fairness should be. Hence, I argue that important elements of traditional democracy are still robust and necessary to vouch for the legitimacy of Humanity 2.0.
Fuller’s Categorical Imperative: The Will to Proaction
J. Britt Holbrook, Georgia Institute of Technology
The thought of two 19th-century philosophers – William James and Friedrich Nietzsche – and of one on the border of the 18th and 19th centuries – Immanuel Kant – underlies Fuller’s support for the proactionary imperative as a guide to life in ‘Humanity 2.0’. I make reference to these thinkers (James’s will to believe, Nietzsche’s will to power, and Kant’s categorical imperative) in my critique of Fuller’s will to proaction. First, I argue that, despite a superficial resemblance, James’s view about the risk of uncertainty does not map well onto the proactionary principle. Second, however, I argue that James’s notion that our epistemological preferences reveal something about our ‘passional nature’ connects with Nietzsche’s idea of the will to power in a way that allows us to diagnose Fuller’s ‘moral entrepreneur’ as revelatory of Fuller’s own ‘categorical imperative’. But my larger critique rests on the connection between Fuller’s thinking and that of Wilhelm von Humboldt. I argue that Fuller accepts not only Humboldt’s ideas about the integration of research and education, but also – and this is the main weakness of Fuller’s position – Humboldt’s lesser-recognized thesis about the relation between knowledge and society. Humboldt defends the pursuit of knowledge for its own sake on the grounds that this is necessary to benefit society. I criticize this view and argue that Fuller’s account of the public intellectual as an agent of distributive justice is inadequate to escape the critique of the pursuit of knowledge for its own sake.

PLOS Biology: Expert Failure: Re-evaluating Research Assessment

Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.

via PLOS Biology: Expert Failure: Re-evaluating Research Assessment.

You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.

I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010). “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, Carl Mitcham, eds. Oxford: Oxford University Press: 321–32; and J. Britt Holbrook (2012). “Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997–2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions. Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan. Beijing: People’s Publishing House: 328–62.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Two Watersheds for Open Access?

This past week I taught “Two Watersheds,” a chapter from Ivan Illich’s Tools for Conviviality. I got some interesting reactions from my students, most of whom are budding engineers. But that’s not what this post is about.

I do want to talk a bit about Illich’s notion of the two watersheds, however. Illich illustrates the idea with reference to medicine. He claims that 1913 marks the first watershed in medicine, because in that year one finally had a greater than 50% chance that someone educated in medical school (i.e., a doctor) would be able to prescribe an effective treatment for one’s ailment. At that point, modern medicine had caught up with shamans and witch doctors. It rapidly began to outperform them, however. And people became healthier as a result.

By the mid-1950s, however, something had changed. Medicine had begun to treat people as patients, and more and more resources were devoted to extending unhealthy life than to keeping people healthy or restoring health. Medicine became an institutionalized bureaucracy rather than a calling. Illich picks (admittedly arbitrarily) 1955 to mark this second watershed.

Illich’s account of the two watersheds in medicine is applicable to other technological developments as well.

A couple of weeks ago, Richard Van Noorden published a piece in Nature the headline of which reads “Half of 2011 papers now free to read.” Van Noorden does a good job of laying out the complexities of this claim (‘free’ is not necessarily equivalent to ‘open access’, the robot used to gather the data may not be accurate, and so on), which was made in a report to the European Commission. But the most interesting question raised in the piece is whether the 50% figure represents a “tipping point” for open access.

The report, which was not peer reviewed, calls the 50% figure for 2011 a “tipping point”, a rhetorical flourish that [Peter] Suber is not sure is justified. “The real tipping point is not a number, but whether scientists make open access a habit,” he says.

I’m guessing that Illich might agree both with the report and with Suber’s criticism, but that he might also disagree with both. But let’s not kid ourselves, here. I’m talking more about myself than I am about Illich — just using his idea of the two watersheds to make a point.

The report simply defines the tipping point as more than 50% of papers available for free. This is close enough to the way Illich defines the first watershed in medicine. So, let’s suppose, for the sake of argument, that what the report claims is true. Then we can say that 2011 marks the first watershed of open access publishing.

What should we expect? There’s a lot of hand-wringing from traditional scholarly publishers about what open access will do to their business model (blow it up, basically). But many of the outcomes that the strongest advocates of open access promise, in urging us to make it a habit, will likely come to pass. Research will become more efficient. Non-researchers will be able to read the research without restriction (no subscription required, no paywall encountered). If they can’t understand a piece of research, they’ll be able to sign up for a MOOC offered by Harvard or MIT or Stanford and figure it out. Openness in general will increase, along with scientific and technological (and maybe even artistic and philosophical) literacy.

Yes, for-profit scholarly publishers and most colleges and universities will end up in the same boat as the shamans and witch doctors did once medicine took over in 1913. But aren’t we better off now than when one had only folk remedies and faith to rely on in sickness?

Perhaps during this time, after the first watershed and before the second, open access can become a habit for researchers, much like getting regular exercise and eating right became habits after medicine’s first watershed. Illich’s claim is that the good times following the first watershed really are good for most of us … for a while.

Of course, there are exceptions. Shamans and witch doctors had their business models disrupted. Open access is likely to do the same for scholarly publishers. MOOCs may do the same for many universities. But universities and publishers will not go away overnight. In fact, we still have witch doctors these days.

The real question is not whether a number or a behavior marks the tipping point — crossing the first watershed. Nor is the question what scholarly publishers and universities will do if 2011 indeed marks the first watershed of openness. The real question is whether we can design policies for openness that prevent us from reaching the second watershed, when openness goes beyond a healthy habit and becomes a bane. Because once openness becomes an institutionalized bureaucracy, we won’t be talking only about peer reviewed journal articles being openly, easily, and freely accessible to anyone for use and reuse.

SPARC Innovator Award | SPARC

There is something very appealing about the simplicity of using a single number to indicate the worth of a scientific paper.

But a growing group of scientists, publishers, funders, and research organizations is increasingly opposed to the broad use of the Journal Impact Factor (JIF) as the sole measure used to assess research and researchers.


SPARC Innovator Award | Sparc.