Plan S and the Democratization of Knowledge

Issues in Science and Technology just published a piece I penned a while back on Plan S. The point of the piece is to question the extent to which Plan S is in line with the Open Science ideal of democratizing knowledge production and use.

Despite the steady progress that has been made over the decades, many OA [Open Access] advocates have become frustrated by its glacial pace and pin the blame for the delay on scholarly publishers. They argue that although technology has dramatically reduced the cost of dissemination, scholarly publishers continue to insist on both the value of traditional publications and the high cost of “quality” publishing. Publishers have also remained a step ahead of policy-makers by inventing new ways to take advantage of the push for OA. For instance, publishers developed a hybrid model that allowed the same journal to provide access to articles via the traditional subscription route, as well as via article processing charges (APCs) that would, if paid by the authors, make certain articles in the journal available OA. This hybrid model essentially enables publishers to double-dip, charging the subscriber and the author for OA articles. Policy-makers are now trying to turn the tables on publishers by putting funding agencies in charge.

To spur the pace of progress, in September 2018 a partnership of 15 European and one US-based research funding agencies formed cOAlition S and developed Plan S to make all research funded by their agencies immediately available for free for anyone to read and reuse. Slated to go into effect by January 2020, Plan S could be a game-changer. But in order for it to succeed, funders beyond Europe—especially those from China and the United States—will have to join cOAlition S. China has announced its “support” for the plan but has not officially joined the coalition. In February 2019, India announced its intention to join, and Plan S architects are actively recruiting more members.

Plan S has set a lofty goal and a frenetic pace, but we would do well to remember that open access to the literature is not the ultimate aim. We should keep our eyes on the real prize—the democratization of knowledge pushed for by the champions of Open Science. OA alone is insufficient to change the practice of science to make it more responsive to society’s needs.

Comments on the piece are welcome.

Concerning CC BY mandates, part 2

In my last post, I addressed those who fail to understand why anyone would object to funders mandating CC BY licenses for all publications that result from research they support. There, I made a distinction between those who support CC BY mandates because they fail to understand objections to CC BY and those who fully understand the objections but support funder mandates of CC BY regardless.

I hope that the vast majority of those currently willing to impose a CC BY mandate as part of Plan S suffer only from a failure to understand why someone might sometimes object to a CC BY license. I fear that some, however, fully understand why someone might sometimes opt for a different license and yet feel fully justified in mandating CC BY for everyone, always. Those who understand the objections to a CC BY license mandate, yet fail to take those objections into account, are guilty of embracing domination.

I also promised a second post in which I address those who understand objections to funder CC BY mandates, yet support them nonetheless. This is that post (hence, ‘part 2’ in the title).

ON DOMINATION AND NON-DOMINATION

The language of ‘domination’ is not mine. I take it from Philip Pettit, the leading proponent of what’s sometimes called neo-republicanism. Neo-republicanism is a political theory that proposes the notion of non-domination as a sort of regulative ideal. For those familiar with Isaiah Berlin’s “Two Concepts of Liberty,” with its ultimate defense of ‘negative’ over ‘positive’ liberty, I take Pettit’s notion of non-domination as further thinking along the same lines. Freedom as non-domination is a version of positive liberty that allows us to overcome Berlin’s argument in favor of negative liberty (or freedom as non-interference).

I issue some caveats.

First, I am not a Pettit scholar. I am not offering here the authoritative reading of Pettit. I don’t know whether Pettit would agree with my reading of him. I don’t know of any places where Pettit discusses academic freedom (but if anyone does, please let me know). I am taking Pettit’s ideas, offering my own interpretation of them, and applying them to the ongoing discussion of Plan S. There are some obvious disanalogies between being a citizen of a republic and a member of the scholarly community. So, insofar as I warp Pettit’s views, please blame me, not Pettit.

Second, I don’t want this post to turn into a philosophic tome. I will endeavor to offer the simplest reading I can of the notion of freedom as non-domination and how it applies to the discussion of Plan S. Again, any resulting distortion of Pettit’s views is my fault.

Third, I am specifically addressing the CC BY mandate of Plan S. I am not arguing against all funder mandates, nor am I arguing against open access to research (or even against Plan S). I am not arguing that the current publishing system is great. I am not arguing that no one should ever publish under a CC BY license. I am arguing that the CC BY mandate should be removed from Plan S. I also think that mandates, in general, should not dominate the population to which they are applied.

Here’s a short video of Pettit himself discussing the idea of freedom as non-domination.

The basic idea is that we lack freedom insofar as we are subject to another’s will. A slave is not free, even if the slave is subject to a master who never interferes with the slave’s choices or actions. Simply being subject to the will of a master (dominus) renders the slave subject to domination, even if the master never exercises that power. According to the neo-republican conception of freedom as non-domination, then, a slave is not free, no matter how little their master interferes in their life.

This neo-republican account of freedom as non-domination is a version of positive liberty, which Berlin ultimately rejected in favor of his notion of negative liberty, or freedom as non-interference. According to a negative liberty view, as long as the master didn’t interfere with the slave, the slave would be free. According to Pettit’s view — and my own — some measure of interference is actually compatible with freedom as non-domination.

NON-DOMINATION AS A LIMIT ON INTERFERENCE

Berlin argued in favor of negative liberty because he saw a danger in positive liberty — it can be co-opted by totalitarian impulses.

Since, as Berlin saw, positive liberty allows for interference, I can be interfered with under the guise of that interference being an actualization of my freedom. If I want to make Great Britain great again, I have to support Brexit, since that is the general will of the people (we voted to leave, so we have to get on board; and not doing so would make me an enemy of freedom — including my own!). The only possible antidote to this sort of totalitarian rationalization, according to Berlin, was to reject the notion of positive liberty as ‘freedom to _____’ (where the blank may be filled in as the general will decides) and embrace the notion of negative liberty as freedom from interference.

Pettit actually redescribes the notion of positive liberty that Berlin rejects in negative terms. Yes, Pettit suggests, freedom is a negative ideal; but instead of freedom from interference, it should be understood as freedom from domination. This allows Pettit to maintain the classical (positive liberty) republican idea of freedom as self-determination, without abandoning us tout court to the general will (which, after all, as Berlin clearly saw, can easily be co-opted).

According to Pettit’s notion of freedom as non-domination, then, I can tolerate interference as long as I’m not being dominated, or subjected to another’s will. Non-domination thus marks the limit of acceptable interference. Advocates of negative liberty also have to admit that a certain amount of interference is necessary — just enough to prevent me from interfering with others, so that the limits of my freedom can be found where my freedom limits the freedom of others. Of course, to an advocate of negative liberty, necessary interference is a necessary evil. A neo-republican, on the other hand, can tolerate quite a bit of interference, as long as it doesn’t rise to the level of domination.

In short, I have to have a say in the laws to which I am subject, such that the imposition of those laws does not dominate me. This doesn’t mean I have carte blanche to reject or disobey laws with which I don’t agree. If I am a member of a constitutional republic (which, as a US citizen, thankfully, I am), some decisions will go against me (alas). But Donald Trump is still my president, even if I voted for a different candidate. The results of that election don’t amount to domination, because I don’t feel I’ve been disenfranchised. Likewise, I pay taxes, both because it’s the law and because I feel I should, even though I don’t like it — again, no domination.

I accept the result of decisions or policies and follow laws, even those that go against my own wishes, just insofar as I don’t feel dominated. As long as I am not dominated, I tolerate the interference. This leaves open the possibility that one could justifiably reject decisions under which one did feel dominated (if I felt my vote weren’t counted, if I were part of a group that had been systematically disenfranchised, if government failed to provide equal protection under the law, if people in government were not subject to and didn’t follow the law, and so on). But votes and policies that go against my wishes without dominating me are simply what Pettit terms ‘tough luck’.

That I feel dominated is a sign of possible domination, but it’s not the only factor. I could be mistaken and shown to be mistaken. Maybe I feel dominated, but I should really accept the result as tough luck. We should be able to determine, in principle, whether a decision we don’t like is a matter of domination or just tough luck. How one goes about that process will depend in many ways on the context. But whether one is dominated is a matter of fact, not simply a matter of opinion.

THE PLAN S CC BY MANDATE AS DOMINATION

There are several ways in which the Plan S CC BY mandate makes me feel dominated.

First, I don’t have a vote. I grant that membership in the scholarly community is disanalogous to citizenship in a state in myriad ways. So, I shouldn’t necessarily expect the same sort of representation I have in government from my membership in the scholarly community. Public funding agencies are strange, half-government-half-academic beasts. But there are rules and norms — and even in some countries, laws — governing the relationship between the state and the academy. For the sake of brevity, let’s group all of these under the rubric of academic freedom.

Setting aside the specifics, that there exists academic freedom in general signals a special sort of relationship between government and academic institutions. Some think academic freedom means non-interference. I never have.

There are myriad ways in which the state can interfere with academic institutions and academics that do not constitute violations of academic freedom. The state can make laws or policies about how research money can or cannot be spent (a prohibition of spending public funds on alcohol or bribes or baubles, for instance); the state can make laws or policies about responsible conduct of research (requiring research on human subjects to undergo IRB review, for instance); the state (or in the US, the states) can make decisions about how much funding to put into university budgets. The list goes on. These sorts of interference do not constitute domination, in my opinion, because we academics support them of our own accord. No academic I know, for instance, would seriously argue that academic freedom implies the freedom to abuse human subjects of research.

As someone sympathetic to the neo-republican notion of freedom as non-domination, I think self-determination is vital to academic freedom. If we compare institutional OA mandates on which faculty have voted with funder mandates that some researchers resist but on which they don’t have a vote, the former respect academic freedom and the latter don’t. That’s because, in the former, the faculty voted to give themselves a mandate. For those who voted against the policy, it was just tough luck. But when we have no ability to vote, as in the case of funder mandates, we are subject to the will of the funders.

Second, on top of the fact that I don’t have a vote, I am worried that I don’t even really have a say. Although I was granted an audience with Robert-Jan Smits and Marc Schiltz and have had conversations online with them (see comments), both of which I appreciate, nothing I said seems to have made an impact. We went from Plan S, to a conversation in which they told me they heard my concerns about the CC BY mandate, to draft guidance that’s unresponsive to my concerns about the CC BY mandate. Unfortunately, I now don’t have high hopes that offering feedback to the draft guidance will have any effect, either.

Compare the cOAlition S request for feedback with what happened when the Common Rule (the US regulation governing research on human subjects) was revised. In the US, comments received detailed responses and quite often changed the proposed policy. When suggested changes were rejected, a detailed rationale was given. Can we expect those sorts of detailed responses to feedback from cOAlition S? Can we expect the implementation guidance to change much? I doubt it.

Maybe they considered my arguments and rejected them, without going into detail as to why (although that wasn’t the impression I was left with after our meeting). Maybe I’m the only one making this argument about the CC BY license mandate. But the mandate negatively affects a group — albeit a small one compared to researchers in the sciences — of researchers across the humanities (and perhaps others in some other fields). I’m not arguing only on my own behalf.

Third, I am not the only one who seems to have an issue with the CC BY mandate. In its response to Plan S, the British Academy writes:

All surveys of HSS academics indicate a substantial majority who will insist on the inclusion of a ‘No Derivatives’ (ND) element in the licence for any OA publication. The Academy thinks their concerns are fully justified.

ALLEA, a federation of almost 60 academies from 40 countries around Europe, writes in its response:

As stated in previous statements, further consultation with the research communities is needed before a licensing model is agreed upon and any prescription should leave some choice as to the type of open license to adopt.

Insofar as the CC BY mandate systematically discriminates against a minority group of researchers, it dominates us.

Finally, and this is the most disturbing evidence of domination, there appears to be a group of my fellow researchers that views the preceding considerations as irrelevant. Some of them, at least, have fallen prey to the potential corruption of positive liberty that so concerned Berlin. They have slipped into totalitarianism. If we want to make science great again, we are told, supporting funder mandates is the only way to go.

I am not suggesting that every researcher who supports funder mandates doesn’t care about dominating other researchers. I, too, support Open Access. I support some aspects of Plan S. I even support some funder mandates, including some in Plan S, as long as they don’t dominate researchers. Ideally, funder mandates would be crafted to empower researchers.

The CC BY mandate, however, must go. The CC BY mandate must go because it dominates a group of researchers who have legitimate interests in opposing mandatory CC BY licenses. No mandate that dominates a group of researchers in that way should stand; and no researcher should stand for mandates that dominate a group of fellow researchers.

The CC BY mandate must go because it also contributes to totalitarian tendencies among a portion of the research community that sees only one way to achieve Open Access — funder mandates — and Open Access as only one thing — that which meets the definition provided by the Berlin Declaration. Funding agencies should not support such tendencies. Funders should listen to concerns expressed by the researchers they fund, especially when those concerns are expressed by a minority group of researchers.

Above all, my fellow researchers should think again before they suggest that funders mandating CC BY is the only way I can truly be free.

PASSING THE EYEBALL TEST

When Pettit discusses the eyeball test, which he introduces (along with the tough luck test discussed above) as a “user friendly” test of non-domination, he does so from the perspective of a potential slave.

The eyeball test requires that people should be so resourced and protected in the basic choices of life — for short, the basic liberties — that they can look others in the eye without reason for fear or deference.

If I can look you in the eye, then, it’s because I am not, and needn’t fear, being dominated by you.

I think we academics share a sense of basic equality as academics (i.e., we typically assume that those with terminal degrees in physics or chemistry are no more deserving of respect than those with terminal degrees in philosophy or the arts). In general, then, I think we pass the eyeball test and should expect to be able to do so.

But there seems to be another aspect to the eyeball test that Pettit doesn’t discuss — the view from the perspective of a would-be dominus. If, assuming that fellow academics deserve a basic equality of respect, we attempt to dominate some subgroup of academics, would we be able to look them in the eye? I think not. I could do so only if I were comfortable dominating them and I expected them to submit — in other words, only if I failed to see them as equals.

In other words, if, as a would-be dominus, I cannot look my fellow academics in the eye, I actually pass the eyeball test. It is because I recognize that I have attempted to dominate them and because I recognize that as wrong that I look away. Recognizing my wrong, I can correct it and refuse to dominate my fellows.

If I were fully to embrace my attempt to dominate my fellows, however, I could look them in the eye and tell them the mandate they oppose for legitimate reasons is actually for their own good. Or I could look them in the eye while telling them that, even if the mandate harms them, they should support the mandate for the greater good. To be able to look a fellow academic in the eye, expecting them to look away, is just as much a failure of the eyeball test as when someone looks away out of fear or deference.

When someone cannot look us in the eye because they feel subservient, we fail the eyeball test. When we can look another in the eye and expect them to look away, we fail the eyeball test.

In our relations with our fellow academics, we should all, always, be able to pass the eyeball test. This is why, far from supporting funder mandates that dominate a group of academics, we should listen to their concerns. We do not have a vote. We are not represented. But we can, perhaps, together, still have a say. Even if funders attempt to dominate us, we academics should stand together in resisting.


Draft “Guidance on the Implementation of Plan S” fails to alleviate concerns about CC BY

Earlier this week, cOAlition S released its draft Guidance on the Implementation of Plan S, which retains the requirement that publications resulting from cOAlition S funding be licensed under terms laid out in the Berlin Declaration.

For scholarly articles the public should be granted a worldwide, royalty-free, non-exclusive, irrevocable license to share (i.e. copy and redistribute the material in any medium or format) and adapt (i.e. remix, transform, and build upon the material) the work for any purpose, including commercially, provided proper attribution is given to the author.

Unfortunately, this guidance fails to respond adequately to the fact that many researchers – especially in the arts and humanities, but also including some scientists – would sometimes choose a license granting fewer reuse rights.

I know that my particular Twitter bubble doesn’t represent everyone; and it does include quite a few Open Access (OA) advocates, who tend to support the use of maximally open licenses (such as CC BY). Nonetheless, I’ve been surprised to find that most of the folks I interact with on Twitter either seem not to understand why anyone would object to mandating such an open license or have no problem supporting such a mandate over the objections of colleagues. Not understanding the objections is one thing. Not caring about the objections is quite another. I address each possibility in turn. Here, I focus only on the problem of not understanding the objections. In a later post, I will address the problem of not caring.

First, though, a few points of clarification. The draft Guidance (at least with reference to licenses) explicitly concerns scholarly journal articles only, leaving guidance on books and book chapters until a later date.

The following guidance further specifies the principles of Plan S and provides paths for their implementation regarding scholarly articles. The guidance is directed at cOAlition S members and the wider international research community. cOAlition S will, at a later stage, issue guidance on Open Access monographs and book chapters.

I do think this makes some difference. Personally, I am much less likely to opt for a more restrictive license on scholarly articles than I would on a book. For me, book chapters are actually closer to journal articles in this regard. That doesn’t mean I fully support a license mandate for journal articles; but I do think I’d have less frequent or momentous worries about open-licensing an article or book chapter than I would a book.

For the sake of brevity, and since the draft Guidance will mandate CC BY 4.0 licenses for all scholarly articles, I use ‘CC BY’ to refer to the sort of open license mandated by the draft Guidance.

All scholarly articles that result from research funded by members of cOAlition S must be openly available immediately upon publication without any embargo period. They must be permanently accessible under an open license allowing for re-use for any purpose, subject to proper attribution of authorship. cOAlition S recommends using Creative Commons licenses (CC) for all scholarly publications and will by default require the CC BY Attribution 4.0 license for scholarly articles.

I also refer to ‘the author’ throughout this post, even though the author may not be the copyright holder of the work. Plan S initially mandated that authors retain copyright. The draft Guidance seems to have opened the door to the idea that there may be other rightsholders (such as the author’s institution, although perhaps even the publisher). I don’t address this question in this post.

For the record, I have no objection to the requirement to make journal articles immediately available and free to read (gratis). It is the option virtually all authors would choose, given the power. Insofar as Plan S “mandates” immediate gratis OA, it actually empowers authors. Mandating CC BY (or other libre options, such as CC BY-SA or CC0) does not. Adopting a license mandate is a matter of domination, not empowerment. More on that later.

One more preliminary: acknowledgements. In addition to the inhabitants of my Twittersphere, with whom I’ve had some excellent exchanges, I heard a great presentation as part of the NJIT Department of Humanities Fall Colloquium Series that spurred my thinking on this matter. On November 7, Lisa DeTora of Hofstra presented on “Not Quite Mythology: The Functional Limitations of Biomedical Language.” The views I outline below owe much to her talk. 

FAILURE TO UNDERSTAND OBJECTIONS TO CC BY

Mandating CC BY treats all articles as if they were scientific articles. But published articles mean different things in different fields. In the sciences, for instance, it is more typical to perceive the publication as simply describing the experiment or reporting on the results of the research. Although priority in publication of results is vital, scientists fully expect others to build upon, modify, and eventually abandon their results as science progresses. A CC BY or equivalent license makes some sense in the context of articles considered as merely containers for information to which the scientist is not wedded in a personal way.

But in the humanities, to paraphrase Nietzsche, nothing is impersonal. In the humanities, the publication itself actually constitutes the research. I still quote the Ancients in my own field of philosophy. That’s not because I haven’t read the latest literature. It’s because Plato and Aristotle have stood the test of time. At the risk of sounding as if I have delusions of grandeur, I hope the same for my own works.

In the sciences, articles might be thought to contain information that, like water, could be poured into various containers and retain its essential characteristics. Although one might prefer to be published in Science or Nature, ultimately, the container doesn’t matter. What matters most is the discovery, and to a lesser extent, the discoverer. Attribution is sufficient.

In the humanities, we have similar preferences for pet journals. I agree that this tendency should be corrected, and that the particular journal in which one is published should matter less than what’s published. But our conception of what’s published — of articles — is different. It’s not so easy to separate form and content in the humanities. To suggest that a philosophy article is simply information that could be transferred easily from place to place would be akin to suggesting we try to pour a marble statue — not water — into a container that could barely hold the statue’s volume of material. To manage it, one would have to destroy the statue.

One tends to attach a name to a theory or discovery in the sciences in one of a few circumstances: 1) when the discovery is new or big or a prize is awarded — as with the Higgs boson (though everyone seems uncomfortable focusing on Higgs alone, and credit is shared around); 2) when the theory has been surpassed — as with Aristotelian physics; or 3) when one wants to undermine a theory — as with creationist attacks on ‘Darwinism’.

In the humanities, attaching names to works or theories is important and routine. Despite what you may have heard, there’s no such thing as ‘Utilitarianism’, but rather Bentham’s, Mill’s, or Singer’s versions thereof. It makes a difference whether one refers to a Jamesian or Deweyan pragmatism. The practice of naming in the humanities is not reserved for a flash in the pan or for theories that didn’t pan out. In the humanities, one’s name — including, but not limited to, one’s reputation as a good researcher — is important.

https://twitter.com/cofactoranna/status/1067774707917828100

Derivative works are a more sensitive matter in the humanities than in the sciences. One’s works present oneself — not just one’s findings — to the scholarly community. The focus of a translated work in the humanities is on the author, not on the translator. It is the author’s introduction to a new country’s readers. It is not by chance that most works in the humanities are single-authored.

Translating a work of philosophy, to pick my own field, is not something that can be done willy-nilly by an algorithm without destroying vital aspects of the work, thereby misrepresenting the author. Nor can a translation simply be thrown together by anyone who understands both languages. A good translation of a difficult text is more difficult than that.

A good translation depends on forming a relationship with the translator, such that he or she is actually familiar with the author’s work in the original language. The translator needs to understand the subtleties of the argument and the choices the author made between terms in the original language in order to craft a good translation in the new language. Ideally, the author would have a relationship with more than one such person, so that the translation could be checked by someone the author trusts. All of this would involve a great deal of time, effort, and discussion.

A bad translation of a difficult text misrepresents the author’s views in ways that can severely damage their name. A good translation can help make it. Rumor has it that even native speakers of German read Norman Kemp Smith’s English translation of Kant’s Critique of Pure Reason. No one, however, is under the impression that the author of the work is anyone other than Kant.

If you follow the Google Books link to the Critique of Pure Reason, the preview is from Kemp Smith’s introduction, in which he describes the process of translating Kant. He was careful and meticulous and thought everything through. He studied and benefited from previous translations. He was writing commentaries on Kant — which is to say, he was a Kant scholar, not simply a Scotsman who spoke fair German. It took years. Kemp Smith’s is the sort of translation we need and generally aspire to in the humanities.

This is not to say that all translations in the humanities are good. Martin Paul Eve discusses the controversy over a particular translation of Michel Foucault, noting that perhaps some translation — even a poor or controversial one — is better than no translation at all. An author who holds such a view and has no objection to subjecting their work to rough translations can always choose a CC BY or equivalent license. Mandating that all authors explicitly grant permission for their works to be treated roughly, though, is unfair and fundamentally misunderstands the nature of research in the humanities.

The application of a more restrictive license — CC BY-ND, say, which would prevent others from making derivative works without the author’s permission — does not bar the author from granting permission for other uses in the future. Indeed, since the potential translator would actually need to contact the author to obtain permission to translate the work, such an arrangement could facilitate the formation of the sort of trusting relationship necessary for a good translation.

FAILURE TO CARE ABOUT OBJECTIONS TO CC BY

I hope that the vast majority of those currently willing to impose a CC BY mandate as part of Plan S suffer only from a failure to understand why someone might sometimes object to a CC BY license. I fear that some, however, fully understand why someone might sometimes opt for a different license and yet feel fully justified in mandating CC BY for everyone, always. Those who understand the objections to a CC BY license mandate, yet fail to take those objections into account, are guilty of embracing domination.

Since this is a strong charge, I want to take my time in making it. I also want to appeal relatively quickly to those who simply misunderstand objections to a CC BY license mandate. For these reasons, I will publish this post now and issue a promise to complete the second post later. [EDIT: part 2 now available HERE.]

I should add, briefly, that I am now rethinking my position on the CC BY license mandate. Heretofore, I had floated the possibility that cOAlition S could safely mandate a CC BY license without dominating authors as long as they were to offer a no-questions-asked waiver. Providing such an opt-out option would mirror the Harvard Open Access Policy, which I discuss in some detail here. Upon further review, however, I am leaning toward the position that even a mandate with a waiver would dominate authors. If anyone is interested in how my thinking has developed and cannot wait for the second post on this topic, here’s a hint.


Camp Engineering Education AfterNext

This looks like fun!

Where are the senior women in STEM? | Dawn Bazely

So, here’s the thing: I’m a female Biology professor, and when I was an undergraduate (1977-81 UofT), there were more or less 50:50 male to female students in my classes. This bottom-up input of women into Biology has been happening for decades. So, thirty years on, where are the other female Full Professors? In fact, where are the senior women in the government, industry and even in Biology-related NGOs?

via Where are the senior women in STEM? | Dawn Bazely.

Lethal Autonomous Robots (“Killer Robots”) | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA

Lethal Autonomous Robots (“Killer Robots”)

Monday, 18 November 2013 05:00 pm to 07:00 pm EST

Location: 

Global Learning Center (in Tech Square), room 129

WATCH the simultaneously streamed WEBCAST at: 

http://proed.pe.gatech.edu/gtpe/pelive/tech_debate_111813/

Debate and Q&A for both

Lethal Autonomous Robots (LARs) are machines that can decide to kill. Such a technology has the potential to revolutionize modern warfare and more. The need for understanding LARs is essential to decide whether their development and possible deployment should be regulated or banned. Are LARs ethical?

via Lethal Autonomous Robots ("Killer Robots") | Center for Ethics & Technology | Georgia Institute of Technology | Atlanta, GA.

What does it mean to prepare for life in ‘Humanity 2.0’?

Francis Remedios has organized a session at the 4S Annual Meeting in which he, David Budtz Pedersen, and I will serve as critics of Steve Fuller’s book Preparing for Life in Humanity 2.0. We’ll be live tweeting as much as possible during the session, using the hashtag #humanity2 for those who want to follow. There is also a more general #4s2013 that should be interesting to follow for the next few days.

Here are the abstracts for our talks:

Humanity 2.0, Synthetic Biology, and Risk Assessment

Francis Remedios, Social Epistemology Editorial Board member

As a follow-up to Fuller’s Humanity 2.0, which is concerned with the impact of biosciences and nanosciences on humanity, Preparing for Life in Humanity 2.0 provides a more detailed analysis. The possible futures discussed are the ecological, the biomedical, and the cybernetic. In The Proactionary Imperative, Fuller and Lipinska aver that the proactionary principle, which embraces risk-taking, is essential to the human condition and should be favored over the precautionary principle, which counsels risk aversion. In terms of policy and ethics, which version of risk assessment should be used for synthetic biology, a branch of biotechnology? With synthetic biology, life is created from inanimate material. Synthetic biology has been dubbed life 2.0. Should one principle be favored over the other?

The Political Epistemology of Humanity 2.0

David Budtz Pedersen, Center for Semiotics, Aarhus University

In this paper I confront Fuller’s conception of Humanity 2.0 with the techno-democratic theories of Fukuyama (2003) and Rawls (1999). What happens to democratic values such as inclusion, rule of law, equality and fairness in an age of technology-intensive, output-based policymaking? Traditional models of input democracy are based on the moral intuition that the unintended consequences of natural selection are undeserved and call for social redress and compensation. However, in humanity 2.0 these unintended consequences are turned into intended ones as an effect of bioengineering and biomedical intervention. This, I argue, leads to an erosion of the natural luck paradigm on which standard theories of distributive justice rest. Hence, people can no longer be expected to recognize each other as natural equals. Now compare this claim to Fuller’s idea that the welfare state needs to reassure the collectivization of burdens and benefits of radical scientific experimentation. Even if this might energize the welfare system and deliver a new momentum to the welfare state in an age of demographic change, it is not clear on which basis this political disposition for collectivizing such scientific benefits rests. In short, it seems implausible that the new techno-elites, who have translated the unintended consequences of natural selection into intended ones, will be convinced to distribute the benefits of scientific experiments to the wider society. If the biosubstrate of the political elite is radically different in terms of intelligence, life expectancy, bodily performance, etc. from that of the disabled, it is no longer clear what the basis of redistribution and fairness should be. Hence, I argue that important elements of traditional democracy are still robust and necessary to vouch for the legitimacy of humanity 2.0.

Fuller’s Categorical Imperative: The Will to Proaction

J. Britt Holbrook, Georgia Institute of Technology

Two 19th century philosophers – William James and Friedrich Nietzsche – and one on the border of the 18th and 19th centuries – Immanuel Kant – underlie Fuller’s support for the proactionary imperative as a guide to life in ‘Humanity 2.0’. I make reference to the thought of these thinkers (James’s will to believe, Nietzsche’s will to power, and Kant’s categorical imperative) in my critique of Fuller’s will to proaction. First, I argue that, despite a superficial resemblance, James’s view about the risk of uncertainty does not map well onto the proactionary principle. Second, however, I argue that James’s notion that our epistemological preferences reveal something about our ‘passional nature’ connects with Nietzsche’s idea of the will to power in a way that allows us to diagnose Fuller’s ‘moral entrepreneur’ as revelatory of Fuller’s own ‘categorical imperative’. But my larger critique rests on the connection between Fuller’s thinking and that of Wilhelm von Humboldt. I argue that Fuller accepts not only Humboldt’s ideas about the integration of research and education, but also – and this is the main weakness of Fuller’s position – Humboldt’s lesser recognized thesis about the relation between knowledge and society. Humboldt defends the pursuit of knowledge for its own sake on the grounds that this is necessary to benefit society. I criticize this view and argue that Fuller’s account of the public intellectual as an agent of distributive justice is inadequate to escape the critique of the pursuit of knowledge for its own sake.

PLOS Biology: Expert Failure: Re-evaluating Research Assessment

Do what you can today; help disrupt and redesign the scientific norms around how we assess, search, and filter science.

via PLOS Biology: Expert Failure: Re-evaluating Research Assessment.

You know, I’m generally in favor of this idea — at least of the idea that we ought to redesign our assessment of research (science in the broad sense). But, as one might expect when speaking of design, the devil is in the details. It would be disastrous, for instance, to throw the baby of peer review out with the bathwater of bias.

I touch on the issue of bias in peer review in this article (coauthored with Steven Hrotic). I suggest that attacks on peer review are attacks on one of the biggest safeguards of academic autonomy here (coauthored with Robert Frodeman). On the relation between peer review and the values of autonomy and accountability, see: J. Britt Holbrook (2010). “Peer Review,” in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press: 321-32; and J. Britt Holbrook (2012). “Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997-2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions, Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, eds. Beijing: People’s Publishing House: 328-62.

Coming soon …


– Featuring nearly 200 entirely new entries

– All entries revised and updated

– Plus expanded coverage of engineering topics and global perspectives

– Edited by J. Britt Holbrook and Carl Mitcham, with contributions from consulting ethics centers on six continents

Two Watersheds for Open Access?

This past week I taught “Two Watersheds,” a chapter from Ivan Illich’s Tools for Conviviality. I got some interesting reactions from my students, most of whom are budding engineers. But that’s not what this post is about.

I do want to talk a bit about Illich’s notion of the two watersheds, however. Illich illustrates the idea with reference to medicine. Illich claims that 1913 marks the first watershed in medicine. This is so because in 1913, one finally had a greater than 50% chance that someone educated in medical school (i.e., a doctor) would be able to prescribe an effective treatment for one’s ailment. At that point, modern medicine had caught up with shamans and witch doctors. It rapidly began to outperform them, however. And people became healthier as a result.

By the mid-1950s, however, something changed. Medicine had begun to treat people as patients, and more resources were devoted to extending unhealthy life than to keeping people healthy or restoring health. Medicine became an institutionalized bureaucracy rather than a calling. Illich picks (admittedly arbitrarily) 1955 to mark this second watershed.

Illich’s account of the two watersheds in medicine is applicable to other technological developments as well.

A couple of weeks ago, Richard Van Noorden published a piece in Nature, the headline of which reads “Half of 2011 papers now free to read.” Van Noorden does a good job of laying out the complexities of this claim (‘free’ is not necessarily equivalent to ‘open access’, the robot used to gather the data may not be accurate, and so on), which was made in a report to the European Commission. But the most interesting question raised in the piece is whether the 50% figure represents a “tipping point” for open access.

The report, which was not peer reviewed, calls the 50% figure for 2011 a “tipping point”, a rhetorical flourish that [Peter] Suber is not sure is justified. “The real tipping point is not a number, but whether scientists make open access a habit,” he says.

I’m guessing that Illich might agree both with the report and with Suber’s criticism, but that he might also disagree with both. But let’s not kid ourselves, here. I’m talking more about myself than I am about Illich — just using his idea of the two watersheds to make a point.

The report simply defines the tipping point as more than 50% of papers available for free. This is close enough to the way Illich defines the first watershed in medicine. So, let’s suppose, for the sake of argument, that what the report claims is true. Then we can say that 2011 marks the first watershed of open access publishing.

What should we expect? There’s a lot of hand wringing from traditional scholarly publishers about what open access will do to their business model (blow it up, basically). But many of the claims that the strongest advocates of open access are making in order to suggest that we ought to make open access a habit will likely come to pass. Research will become more efficient. Non-researchers will be able to read the research without restriction (no subscription required, no paywall encountered). If they can’t understand a piece of research, they’ll be able to sign up for a MOOC offered by Harvard or MIT or Stanford and figure it out. Openness in general will increase, along with scientific and technological (and maybe even artistic and philosophical) literacy.

Yes, for profit scholarly publishers and most colleges and universities will end up in the same boat as the shamans and witch doctors once medicine took over in 1913. But aren’t we better off now than when one had only folk remedies and faith to rely on when one got sick?

Perhaps during this time, after the first watershed and before the second, open access can become a habit for researchers, much like getting regular exercise and eating right became habits after medicine’s first watershed. Illich’s claim is that the good times following the first watershed really are good for most of us … for a while.

Of course, there are exceptions. Shamans and witch doctors had their business models disrupted. Open access is likely to do the same for scholarly publishers. MOOCs may do the same for many universities. But universities and publishers will not go away overnight. In fact, we still have witch doctors these days.

The real question is not whether a number or a behavior marks the tipping point — crossing the first watershed. Nor is the question what scholarly publishers and universities will do if 2011 indeed marks the first watershed of openness. The real question is whether we can design policies for openness that prevent us from reaching the second watershed, when openness goes beyond a healthy habit and becomes a bane. Because once openness becomes an institutionalized bureaucracy, we won’t be talking only about peer reviewed journal articles being openly, easily, and freely accessible to anyone for use and reuse.