2013 was an exciting first year for the Campaign to Stop Killer Robots. As we return from the holidays and get started on 2014, it is helpful to take a quick look back at 2013 to see how far we’ve come.
The Campaign to Stop Killer Robots was launched in April 2013 in London. Mines Action Canada is a co-founder of the campaign and a member of its Steering Committee along with other disarmament, human rights and humanitarian organizations.
In May, the first Human Rights Council debate on lethal autonomous robotics followed the presentation of a report by the UN Special Rapporteur on extrajudicial killings, Christof Heyns. During the debate, 20 governments made their views known for the first time.
A University of Massachusetts survey of 1,000 Americans found a majority oppose fully autonomous weapons and support actions to campaign against them. In August, the International Committee of the Red Cross issued a “new technologies” edition of its quarterly journal. The journal included articles by campaigners on fully autonomous weapons.
During the UN General Assembly First Committee on Disarmament and International Security in New York in October, 16 governments made statements on killer robots. Also in October, campaign member the International Committee for Robot Arms Control launched a letter from over 250 roboticists, scientists and other experts calling for a ban on autonomous weapons.
In November at the Convention on Conventional Weapons (CCW) in Geneva, 35 nations expressed their views on lethal autonomous weapons systems. States parties to the convention agreed to a mandate to begin work in 2014 on the emerging technology of “lethal autonomous weapons systems.”
Mines Action Canada (MAC) welcomed this historic decision to begin to address this issue. MAC encouraged all states to pursue an international ban on these weapons to ensure there will always be meaningful human control over targeting decisions and the use of violent force. We were also pleased that Canada made its first public statements on this topic during the CCW, joining the other 43 nations who have spoken out on fully autonomous weapons since May. “If we have learned anything from the Canadian-led efforts to ban landmines, it is that the world cannot afford to wait until there is a humanitarian crisis to act. We need a pre-emptive ban on fully autonomous weapons before they can cause a humanitarian disaster,” said Paul Hannon, Executive Director of Mines Action Canada, in a press release.
Our colleagues around the world have also seen exciting developments in their countries. The international campaign has put together a global recap.
Canada does not have a national policy on autonomous weapons. There are many reasons why Canada needs to have a policy on killer robots as soon as possible. This year, MAC looks forward to working with the Government of Canada to develop a national policy and to work towards an international treaty banning killer robots.
Today the States Parties to the Convention on Conventional Weapons (CCW) agreed to convene a meeting in May 2014 to discuss fully autonomous weapons, or killer robots. Mines Action Canada (MAC), a co-founder of the Campaign to Stop Killer Robots, welcomes this historic decision to begin to address this issue. MAC encourages all states to pursue an international ban on these weapons to ensure there will always be meaningful human control over targeting decisions and the use of violent force.
We are pleased that Canada made its first public statements on this topic during the CCW, joining the other 43 nations who have spoken out on fully autonomous weapons since May. MAC looks forward to working with the Government of Canada to develop national policies on fully autonomous weapons. Along with our colleagues from the Campaign to Stop Killer Robots, we hope to see Canada actively participate in the CCW discussions. Mines Action Canada encourages Canada to take on a leadership role in international efforts to ban fully autonomous weapons and ensure that humans will always have meaningful control over life and death decisions in conflict.
“If we have learned anything from the Canadian-led efforts to ban landmines, it is that the world cannot afford to wait until there is a humanitarian crisis to act. We need a pre-emptive ban on fully autonomous weapons before they can cause a humanitarian disaster,” said Paul Hannon, Executive Director of Mines Action Canada.
Canadians are among the 270 engineers, computing and artificial intelligence experts, roboticists, and professionals from related disciplines who have signed an experts’ call to ban killer robots. The experts say “given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.”
The International Committee for Robot Arms Control (ICRAC) has thus far received 272 signatures from 37 countries on the statement, which continues to collect signatures. In an announcement released today, Professor Noel Sharkey, Chair of ICRAC, said “Governments need to listen to the experts’ warnings and work with us to tackle this challenge together before it is too late. It is urgent that international talks get started now to prevent the further development of autonomous robot weapons before it is too late.”
Canada does not currently have a policy on fully autonomous weapons, and we hope that the government will engage these experts and others as it creates one. We expect to see additional signatures from Canadian experts as this issue gains momentum. At present, the University of Toronto has the largest number of signatories, but experts from other organizations and institutions still have time to sign the call. As the quote below from Geoffrey Hinton indicates, now is the time to ensure that artificial intelligence and robotic technologies are used for the betterment of humanity.
“Artificial Intelligence can improve people’s lives in so many ways, but researchers need to push for positive applications of technology by supporting a ban on autonomous weapons systems.”
Geoffrey Hinton FRS, founding father of modern machine learning and Raymond Reiter Distinguished Professor of Artificial Intelligence at the University of Toronto.
The Campaign to Stop Killer Robots has been trundling along all summer sharing our message, reaching out to governments and gaining new supporters.
There have been some exciting and important developments over the summer. The International Committee of the Red Cross (ICRC) launched the newest edition of the International Review of the Red Cross and the theme is New Technologies and Warfare. A number of campaigners contributed to the journal so it is definitely worth a read. The ICRC also published a Frequently Asked Questions document on autonomous weapons that helps explain the issue and the ICRC’s position on fully autonomous weapons.
France along with the United Nations Office for Disarmament Affairs in Geneva convened a seminar on fully autonomous weapons for governments and civil society in early September. The Campaign to Stop Killer Robots had campaigners taking part and you can read the full report on the global campaign’s website.
The campaigns in Germany and Norway are starting off strong as well. In the lead up to the German election, all the major parties shared their policy positions in regards to fully autonomous weapons with our colleagues at Facing Finance. Norwegian campaigners launched their campaign with a breakfast seminar and now they are waiting to hear what the new Norwegian government’s policy on fully autonomous weapons will be.
Like our colleagues in Norway, we’re still waiting to hear what Canada’s policy on fully autonomous weapons will be. We have written to the Ministers of National Defence and of Foreign Affairs, but the campaign team has not yet heard back. In the meantime, Canadians can weigh in on the topic through our new online petition. Share and sign the petition today! This petition is the first part of a new initiative that will be coming your way in a few weeks. Keep your eye out for the news, and until then keep sharing the petition so that the government knows that Canadians have concerns about fully autonomous weapons and believe that Canada should have a strong policy against them.
EDIT: We had a very human moment here and forgot to include congratulations to James Foy of Vancouver for winning the 2013 Canadian Bar Association’s National Military Law Section Law School Sword and Scale Essay Prize for his essay called Autonomous Weapons Systems: Taking the Human out of International Humanitarian Law. It is great to see law students looking at this new topic and also wonderful that the Canadian Bar Association recognized the importance of this issue. Congratulations James!
Professor Noel Sharkey gave a talk at TEDx Sheffield about fully autonomous weapons and the Campaign to Stop Killer Robots. Prof. Sharkey is one of the founders of the International Committee for Robot Arms Control. He delivers a passionate call to action to stop killer robots. Take a few minutes out of your day to see him talk about his journey from a boy who loved toy soldiers growing up in the shadow of World War II to a leading campaigner in the effort to stop killer robots and protect civilians. Plus he even shares a little song about the CIA!
Last month at the United Nations Human Rights Council, we were slightly concerned when the UK was the only state opposed to a moratorium or a ban on fully autonomous weapons. After a parliamentary debate on June 17, 2013, we have a little more clarity. In response to a speech by Nia Griffith, MP, the Minister for Counter-Proliferation, Alistair Burt, MP, agreed that fully autonomous weapons would not “be able to meet the requirements of international humanitarian law” and stressed that the UK does not have fully autonomous weapons and does not plan to acquire any.
Our colleagues at Article 36 have done a detailed analysis of the debate. In light of the stronger language in this debate, there is some room to be optimistic:
"It would seem straightforward to move from such a strong national position to a formalised national moratorium and a leading role within an international process to prohibit such weapons. The government did not provide any reason as to why a moratorium would be inappropriate, other than to speculate on the level of support amongst other countries for such a course of action.
Whilst significant issues still require more detailed elaboration, Article 36 believes this parliamentary debate has been very valuable in prompting reflection and Ministerial scrutiny of UK policy on fully autonomous weapons and narrowing down the areas on which further discussions should focus. It appears clear now that there will be scope for such discussions to take place with the UK and other states in the near future."
The UK parliamentary debate and Article 36’s analysis of it, coming so soon after the Human Rights Council debate and the widespread media coverage of the issue, make it quite clear that it is time to have a similarly substantive and non-partisan debate in the Canadian House of Commons as the government works out its policy on this important issue.
Science fiction author Daniel Suarez went to TED Global in Edinburgh to talk about lethal autonomous weapons. He brings up some interesting arguments for a global ban on killer robots, including the impact on democracy of giving decisions over life and death to machines and the possibility of anonymous warfare where it is impossible to know who is behind an attack. Although he is not a member of the Campaign to Stop Killer Robots, he is a supporter.
This talk is definitely worth a watch.
This week, the United Nations Human Rights Council became the first UN body to discuss the issue of killer robots. To mark the occasion, the Campaign to Stop Killer Robots headed to Geneva to introduce our campaign to diplomats, UN agencies and civil society. Check out the full report from the international campaign.
In the weeks since the Campaign to Stop Killer Robots launched, there has been a lot of media coverage. The media coverage is very exciting, and what I have found particularly interesting is the number of articles that refer to Isaac Asimov’s Three Laws of Robotics.
Now, unless, like me, you grew up with a sci-fi geek for a father who introduced you to various fictional worlds like those in Star Wars, Star Trek and 2001: A Space Odyssey at a young age, you might not know who Isaac Asimov is, what his Three Laws of Robotics are, or why these laws are relevant to the Campaign to Stop Killer Robots.
Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction, especially his short stories. In his writings, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters. In his stories, the Three Laws were programmed into robots as a safety function. The laws were first stated in the short story “Runaround,” but you can see them in many of his other writings, and since then they have shown up in other authors’ work as well.
The Three Laws of Robotics are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
After reading the Three Laws, it might be pretty clear why Mr. Asimov’s ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons. A fully autonomous weapon will most definitely violate the first and second laws of robotics.
To me, the Three Laws seem to be pretty common sense guides for the actions of autonomous robots. It is probably a good idea to protect yourself from being killed by your own machine – ok, not probably – it is a good idea to make sure your machine does not kill you! It is also important to remember that Asimov recognized that even regular robots with artificial intelligence (not fully autonomous weapons) could pose a threat to humanity at large, so he added a fourth law, the Zeroth Law, to come before the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be. We need to focus on reality not fiction!” I hear you, but since fully autonomous weapons do not yet exist we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems with fully autonomous weapons. Who better to help us consider the possibilities than science fiction writers who have been thinking about these types of issues for decades?
At the moment, Asimov’s Three Laws are the closest thing we have to laws explicitly governing the use of fully autonomous weapons. Asimov’s stories often tell tales of how the application of these laws results in robots acting in weird and dangerous ways the programmers did not predict. By articulating some pretty common sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov’s writings may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient. Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons. You don’t have to read Asimov, know who HAL is or have a disliking for the Borg to worry that we won’t be able to control how artificial intelligence will interpret our commands. Anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the number of ways technology can go wrong. We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.
All the discussions we’ve been having since the launch of the Campaign to Stop Killer Robots make me think about Alice in Wonderland and therefore I’ve been thinking a lot about rabbit holes. I feel like current technology has us poised at the edge of a rabbit hole and if we take that extra step and create fully autonomous weapons we are going to fall – down that rabbit hole into the unknown, down into a future where a machine could make the decision to kill you, down into a situation that science fiction books have been warning us about for decades.
The best way to prevent such a horrific fall is going to be to create laws and policies that will block off the entrance to the rabbit hole so to speak. At the moment, not many countries have policies to temporarily block the entrance and no one has laws to ban killer robots and close off the rabbit hole permanently. It is really only the US and the UK who have even put up warning signs and a little bit of chicken wire around the entrance to this rabbit hole of killer robots through recently released policies and statements.
Over the past few weeks our colleagues at Human Rights Watch (HRW) and Article 36 have released reports on the US and UK policies towards fully autonomous weapons (killer robots). HRW analyzed the 2012 US policy on autonomous weapons found in Department of Defense Directive Number 3000.09. You can find the full review online. Article 36 has a lot to say about the UK policy in their paper available online as well.
So naturally after reading these papers, I went in search of Canada’s policy. That search left me feeling a little like Alice lost in Wonderland just trying to keep my head or at least my sanity in the face of a policy that like the Cheshire Cat might not be all there.
After my futile search, it became even more important that we talk to the government to find out if Canada has a policy on fully autonomous weapons. Until those conversations happen, let’s see what we can learn from the US and UK policies and the analysis done by HRW and Article 36.
The US Policy
I like that the US Directive notes the risks to civilians, including “unintended engagements” and system failure. One key point that Human Rights Watch’s analysis highlights is that the Directive states that for up to 10 years the US Department of Defense may only develop and use fully autonomous weapons that apply non-lethal force. The moratorium on lethal fully autonomous weapons is a good start, but there are also some serious concerns about the inclusion of waivers that could override the moratorium. HRW believes that “[t]hese loopholes open the door to the development and use of fully autonomous weapons that could apply lethal force and thus have the potential to endanger civilians in armed conflict.”
In summary Human Rights Watch believes that:
"The Department of Defense Directive on autonomy in weapon systems has several positive elements that could have humanitarian benefits. It establishes that fully autonomous weapons are an important and pressing issue deserving of serious concern by the United States as well as other nations. It makes clear that fully autonomous weapons could pose grave dangers and are in need of restrictions or prohibitions. It is only valid for a limited time period of five to ten years, however, and contains a number of provisions that could weaken its intended effect considerably. The Directive’s restrictions regarding development and use can be waived under certain circumstances. In addition, the Directive highlights the challenges of designing adequate testing and technology, is subject to certain ambiguity, opens the door to proliferation, and applies only to the Department of Defense." 
In terms of what this all means for us in Canada, there may be some aspects of the American policy worth adopting. Canada should adopt the restrictions on the use of lethal force by fully autonomous weapons, without the limited time period and waivers, to protect civilians from harm. I believe that Canadians would want to ensure that humans always make the final decision about who lives and who dies in combat.
The UK Policy
Now, our friends at Article 36 have pointed out that the UK situation is a little more convoluted – and they are not quite ready to call it a comprehensive policy. But since “the UK assortment of policy-type statements” sounds ridiculous, for the purposes of this post I’m shortening it to the UK almost-policy, with the hope that one day it will morph into a full policy. Unlike the US policy, which is found in a neat little directive, the UK almost-policy is cobbled together from some statements and a note from the Ministry of Defence.
To sum up, Article 36 outlines three main shortcomings of the UK almost-policy:
- The policy does not set out what is meant by human control over weapon systems.
- The policy does not prevent the future development of fully autonomous weapons.
- The policy says that existing international law is sufficient to “regulate the use” of autonomous weapons.
One of the most interesting points that Article 36 makes is the need for a definition of what human control over weapons systems means. If you are like me, you probably think that means humans get to make the decision to fire on a target, making the final decision of who lives or who dies. But we need to know exactly what governments mean when they say that humans will always be in control. The Campaign to Stop Killer Robots wants to ensure that there is always meaningful human control over lethal weapons systems.
Defining what we mean by meaningful human control is going to be a very large discussion that we want to have with governments, with civil society, with the military, with roboticists and with everyone else. This discussion will raise some very interesting moral and ethical questions, especially since a two-star American general recently said that he thought it was “the ultimate human indignity to have a machine decide to kill you.” The problem is that once the technology exists, it is going to be incredibly difficult to know where it will go and how on earth we are going to get back up that rabbit hole. For us as Canadians, it is key to start having that conversation as soon as possible so we don’t end up stumbling down the rabbit hole of fully autonomous weapons by accident.