Mines Action Canada’s intervention on Article 36 weapons reviews
Today, Mines Action Canada’s Program Coordinator made an intervention during CCW discussions about autonomous weapons systems and weapons review processes.
Thank you, Madame Chair. I would like to take this opportunity to share Mines Action Canada’s observations about Article 36 reviews.
Like many others, Mines Action Canada was concerned to learn at last year’s experts meeting that there is so little transparency around Article 36 weapons reviews. The fact that so few states were willing to discuss their weapons review processes is a significant impediment to the prevention of humanitarian harm caused by new weapons. Indeed, it seems that too few states actually undertake these reviews in a comprehensive manner.
Last year’s revelations concerning Article 36 reviews have made it clear that international discussions on the topic are necessary. Today is a start. States need to be more transparent in their weapons review processes. Sharing criteria and standards or setting international standards will do much to shed light on the shadowy world of arms procurement. Mines Action Canada believes that Article 36 weapons reviews should be a topic of discussion at the international level to strengthen both policy and practice around the world.
However, better weapons reviews will not solve the problems associated with autonomous weapons systems for a number of reasons.
First, there is the issue of timing. A successful international process to increase the effectiveness of weapons reviews will require a significant amount of time – time we do not have in the effort to prevent the use of autonomous weapons systems because technology is developing too rapidly.
Second, weapons reviews were designed for a very different type of weapon than autonomous weapons systems, which have been called the third revolution in warfare. Autonomous weapons systems will blur the line between weapon and soldier to a degree that may be beyond the ability of a weapons review process to assess. In addition, the systemic complexity required to operate such a weapons system is a far cry from the more linear processes found in current weapons.
Third, Article 36 reviews are not required to cover weapons used for domestic purposes outside of armed conflict, such as policing, border control, or crowd control. Mines Action Canada, along with many civil society organizations and states present here, has serious concerns about the possible use of autonomous weapons systems in law enforcement and in other uses outside of armed conflict.
Fourth and most importantly, weapons reviews cannot answer the moral questions surrounding delegating the kill decision to a machine. An Article 36 review cannot tell us if it is acceptable for an algorithm to kill without meaningful human control. And that is one of the key questions we are grappling with here this week.
Article 36 weapons reviews are a legal obligation for most of the states here. It is time for a separate effort to strengthen the standards and transparency around weapons reviews. That effort must neither distract from nor overtake our work here to deal with the real moral, legal, ethical and security problems associated with autonomous weapons systems. Weapons reviews must be supplemented by new and robust international law that clearly and deliberately puts meaningful human control at the centre of all new weapons development.
The concerns raised by autonomous weapons are urgent and must take priority. In fact, a Group of Governmental Experts (GGE) next year on autonomous weapons will greatly assist future work on weapons reviews by highlighting the many challenges new technologies pose for such reviews.
Overall, there is a need for international work to improve Article 36 reviews, but there is little evidence to back up the claims of some states that weapons review processes would be sufficient to ensure that autonomous weapons systems are acceptable. Article 36 reviews are only useful once questions of the moral and ethical acceptability of a weapon have been dealt with. Until that time, it would be premature to view weapons reviews as a panacea for the issues before us here at the CCW.
Thank you.
Opening Statement at CCW in 2016
Our Executive Director, Paul Hannon, delivered an opening statement at the CCW meeting on autonomous weapons systems today.
Thank you, Chairperson.
I appreciate the opportunity to speak on behalf of Mines Action Canada. Mines Action Canada is a Canadian disarmament organization that has been working to reduce the humanitarian impact of indiscriminate weapons for over twenty years. During this time, we have worked with partners around the world, including here at the CCW, to respond to the global crisis caused by landmines, cluster munitions, and other indiscriminate weapons. What makes this issue different is that we have an opportunity to act now, before a weapon causes a humanitarian catastrophe.
As a co-founder of the Campaign to Stop Killer Robots, Mines Action Canada has concerns about the development of autonomous weapons systems that run across the board: legal, moral and ethical, technical, operational, political, and humanitarian. The question of the acceptability of delegating death is not an abstract thought experiment but the fundamental question, with policy, legal, and technological implications for the real world. We must all keep this question at the fore whenever discussing autonomous weapons systems: do you want to live in a world where algorithms or machines can make the decision to take a life? War is a human activity, and removing the human component from war is dangerous for everybody. We strongly support the position of the Campaign to Stop Killer Robots that permitting machines to take a human life on the battlefield or in policing, border or crowd control, and other circumstances is unacceptable.
We have watched the development of discourse surrounding autonomous weapons systems since the beginning of the campaign. 2015 saw a dramatic expansion of the debate into different forums and segments of our global community and that expansion and the support it has generated have continued into 2016. Be it at artificial intelligence conferences, the World Economic Forum, the Halifax Security Forum or in the media, the call for a pre-emptive ban is reaching new audiences. The momentum towards a pre-emptive ban on autonomous weapons systems is clearly growing.
Mines Action Canada recognizes that there are considerable challenges facing the international community in navigating legal issues concerning an emerging technology. The desire not to hinder research and development into potentially beneficial technologies is understandable, but a pre-emptive ban on autonomous weapons systems will not limit beneficial research. As a senior executive from a robotics company told us at a workshop on autonomous weapons last week, there are no other applications for an autonomous system that can make a “kill or not kill” decision. The function that gives an autonomous weapon the ability to make the “kill decision” and implement it has no equivalent civilian use. A pre-emptive ban would therefore have no impact on the funding of research and development in artificial intelligence or robotics.
On the other hand, there are numerous other applications that would benefit society by improving other aspects of robotic weapons while maintaining meaningful human control over the decision to cause harm. Communications technology, encryption, virtual reality, sensor technology – all have much broader and beneficial applications, from search and rescue by first responders to watching a school play when you can’t be there in person. None of that research and development would be hindered by a pre-emptive ban on autonomous weapons systems. A pre-emptive ban would, however, allow governments, the private sector, and academics to direct investments towards technologies that can have as much future benefit for non-military uses as possible.
While the “kill decision” function is only necessary for one application of robotic technology, predictability is an important requirement for all robots regardless of the context in which they are used. Manufacturing robots work well because they operate in a predictable space. Driverless cars will also work in a predictable space, though one much less predictable than a factory, which is one reason they require so much more testing and time to develop. Robotic weapons will be required to work in the least predictable of spaces, that is, in combat, and are therefore much more prone to failure. Commanders, on the other hand, need weapons they can rely on. Civilians need, and have a right to expect, that every effort be taken to protect them from the harmful effects of conflict.
Mines Action Canada appreciates the significant number of expert presentations scheduled for this week, but we hope that states will take time to share their views throughout the week. It is time for states to begin to talk about their concerns, their positions, and their policies. For this reason, we are calling on the High Contracting Parties to take the next step at the Review Conference later this year and establish a Group of Governmental Experts with a mandate to negotiate a new protocol on autonomous weapons.
We note that in the last 20 years three new legal instruments have entered into force. Each bans a weapon system, and each weapon was covered by the general rules of International Humanitarian Law at the time, but the international community felt that new, specific laws banning these weapons were warranted. This not only strengthened the protection of civilians, but also made IHL more robust.
Autonomous weapons systems are not your average new weapon; they have the potential to fundamentally alter the nature of conflict. As a “game-changer”, autonomous weapons systems deserve a serious and in-depth discussion. That discussion should also happen at the national level. Mines Action Canada hopes that our country will begin that effort this spring through the recently announced defence review, and that other states will follow suit with their own national discussions.
At the core of this work is a desire to protect civilians and limit the humanitarian harm caused by armed conflict. We urge states not to lose sight of the end goal and their motivations as they complete the difficult work necessary for a robust and effective pre-emptive ban.
Thank you.
CCW – What happened last year?
By Claudia Pearson
With the third, and hopefully final, Convention on Conventional Weapons (CCW) informal experts meeting coming up in a couple of days, it is important to remind ourselves of what was discussed last year and what work still needs to be done.
The gathering of the CCW member states and organisations in Geneva in April 2015 was designed as a forum at which states could discuss the important technical, legal, moral and ethical issues surrounding autonomous weapons, otherwise known as ‘killer robots’.
At the 2015 meetings, almost all states that spoke agreed that further work is necessary and desirable, and many said that no autonomous weapon should be allowed to operate without meaningful human control, nor with human control that is ‘devoid of meaning.’ A small number of states, however, were more reserved about the eventual achievement of a pre-emptive ban on autonomous weapons. The US and Israel implied that they plan to leave the door open for the future acquisition of these weapons, while France and the UK stated that they would not pursue killer robots but neither indicated support for the logical conclusion of a pre-emptive ban.
Another important notion that arose from the 2015 CCW meetings was that autonomous weapons, or killer robots, are not an inevitable piece of weaponry and should never be allowed to become one. This notion was a useful counterpoint to some interventions that seemed to underestimate the value and importance of human soldiers.
Further, the CCW meetings focused heavily on norm creation, with members emphasising the need to establish norms in order to discuss and articulate effectively what is most disturbing and threatening about the possible use of autonomous weapons. Once these norms are clearly established and accepted by a majority of states, there will hopefully be a more concerted effort to transform them into fully ratified international law.
Finally, multiple countries and organisations identified the need to define exactly what some of the key terms commonly used at the meetings mean. For example, what exactly is meant by ‘meaningful human control’? Further exploration of this principle could be a key component of a Group of Governmental Experts in 2017, leading to a process to prevent the use of fully autonomous weapons through law.
Hopefully, this year more solid definitions can be agreed upon and a Group of Governmental Experts will be called for next year, so that the process of banning autonomous weapons through international law can be accelerated, leading to a pre-emptive ban.
Claudia Pearson is an undergraduate student at the University of Leeds, currently studying abroad at the University of Ottawa.
2016: A year for action
We’re almost a month into 2016, and autonomous weapons systems have already been in the news thanks to a strong panel discussion at the World Economic Forum in Davos. The Campaign to Stop Killer Robots was pleased to see the panel agree that the world needs a diplomatic process to pre-emptively ban autonomous weapons systems, and that it needs to start soon. You can read the whole analysis by the Campaign’s coordinator here.
Yes, 2016 is starting on a high note for the campaign, but this is not the time to be complacent. We need to keep that momentum going internationally and here in Canada. The new government has yet to share a national policy on autonomous weapons systems. Before the election, the Liberal Party of Canada wrote that:
“Emerging technologies such as Lethal Autonomous Weapon Systems pose new and serious ethical questions that must be studied and understood. The Liberal Party of Canada will work with experts and civil society to ensure that the Canadian Government develops appropriate policies to address the use and proliferation of autonomous weapon systems.”
Now that the Liberals form the government, they will have to develop those “appropriate policies” soon, because the international community is moving forward, albeit verrrrrry slowly. States are meeting in April 2016 for a third (and hopefully final) informal experts meeting on autonomous weapons systems under the United Nations’ Convention on Conventional Weapons, and at the end of the year states will have the opportunity to start negotiations on a pre-emptive ban. The UN process has been called “glacial” and criticized for showing “no sense of urgency”, but there is still time for states to pick up the pace, and Canada can take a leadership role.
Canadian industry, academics, and NGOs have already taken a leadership role on banning autonomous weapons systems, so now it’s the government’s turn. The Canadian government and Prime Minister Trudeau made a big impression at the World Economic Forum, so we hope they will take that energy forward to act on one of the newest issues discussed there. Let’s make 2016 a year of action on autonomous weapons systems.
Video Contest Winner Announced
Next week, states will decide if and how they will continue international talks on autonomous weapons systems at the UN’s Convention on Conventional Weapons in Geneva. We and the whole Campaign to Stop Killer Robots are calling on states to take the next step towards a ban by agreeing to a Group of Governmental Experts (GGE) in 2016. A GGE will allow states to explore the issues surrounding autonomous weapons systems in depth.
With such an important decision looming over states, we are announcing the winners of our youth video contest. Last week, we shared the runner-up video.
Today, we are pleased to announce that Steven Hause of Florida State University won the video contest. Steven’s video covers a number of the key concerns the Campaign has about autonomous weapons systems. We hope that this video will remind governments of the need to take action at CCW next week.
Video Contest – Runner Up Announced
In less than two weeks, states will decide if and how they will continue international talks on autonomous weapons systems at the UN’s Convention on Conventional Weapons in Geneva. We and the whole Campaign to Stop Killer Robots are calling on states to take the next step towards a ban by agreeing to a Group of Governmental Experts.
With such an important decision looming over states, we are announcing the winners of our youth video contest. This week, we are pleased to present the runner-up video (and top high school video) by Daryl, Henry, Joseph and Anders at Petersburg High School.
Please feel free to share widely!
We thank all those who submitted videos to the contest and congratulate Daryl, Henry, Joseph and Anders on their excellent video. Come back next week to see the winning entry.
Guest Post- Mind Over Machine: Why Human Soldiers are (and Will Remain) Better than Killer Robots
By Andrew Luth
This summer, movie-goers are flocking to theatres to see tales of superheroes, dinosaurs, and plucky college singing groups. Two of the season’s biggest movies, Avengers: Age of Ultron and Terminator Genisys, have more in common than an over-reliance on computer-generated visual effects. Both feature killer robots: advanced weapons systems capable of fighting and killing independent of human command. Killer robots have been a staple of popcorn flicks for decades, but these days movies aren’t the only place we can expect to see them turning up. Many of the world’s most advanced militaries are getting closer and closer to producing killer robots of their own. Killer robots, or autonomous weapons systems (AWS), are machines capable of identifying and attacking targets without human intervention. Despite the moral and legal concerns about such weapons, leading scientists and engineers are warning that AWS may be only a few years away from reality. The few who support the development of AWS tend to view them as inherently superior to human soldiers. Robots, they argue, don’t get tired or emotional, and are more expendable than human soldiers. As University of Massachusetts-Amherst Professor Charli Carpenter explains, some supporters have even gone so far as to say that “robots won’t rape,” overlooking the reality that rape and other war crimes are often ordered military tactics. All such arguments assume AWS will make better soldiers than humans. However, they fail to fully consider how human soldiers are actually superior to AWS. Several attributes of human physiology and behaviour give human soldiers the edge over autonomous weapons systems, not just now but for the foreseeable future.
According to the international legal principle of distinction, belligerent parties must distinguish between civilians and combatants when using force in combat. Human soldiers have a significant advantage over artificial systems in meeting this requirement. The human brain and eye work in tandem to process complex visual information incredibly quickly and efficiently. This skill is invaluable on the battlefield, enabling soldiers to pick out subtle distinctions in shape, colour, texture, and movement from long distances and use that information to their advantage. Technology is developing quickly, and it is conceivable that computers will someday rival our visual processing powers, but no computer program has yet come close to human abilities to pick out patterns and identify objects, even in motionless two-dimensional images. Even further out of reach for robotics is the brain’s aptitude for reading human behaviour. The human mind is particularly attuned to reading tiny changes in expression and body language, even subconsciously. This is immensely important in combat scenarios, where soldiers need to determine an unknown party’s intent almost instantly, with fractions of a second making the difference between life and death. The science of computer vision is advancing rapidly, but it is likely to be decades before AWS can even approach the visual acuity of human soldiers, if they ever do.
Even if scientists eventually develop autonomous weapons systems with visual processing skills superior to our own, a human soldier would still have many advantages over killer robots. The highly flexible and adaptive nature of the human mind is perhaps the most distinct advantage. This flexibility allows us to receive and process information both from our natural senses and external sources. In addition to acquiring information by communicating with other soldiers, humans can quickly learn to integrate data from radar, night vision, infrared, and other technologies. Furthermore, to analyze this information human soldiers draw on a wealth of learning and experience from all areas of life. Robots, however, are generally designed to analyze specific information sources using pre-determined metrics, making it impossible for them to evaluate or even to detect unanticipated information. In many situations, the success of a mission could balance on the ability to respond to such information.
The human mind’s flexibility also means soldiers can perform any number of activities a situation requires. This is invaluable during military conflict. In his famous work The Art of War, the Chinese military strategist Sun Tzu explains that “just as water retains no constant shape, so in warfare there are no constant conditions.” Truly successful military tactics, he writes, are “regulated by the infinite variety of circumstances.” Humans are well-equipped to respond to this infinite variety. A modern infantry soldier can fire a rifle accurately, provide emergency medical aid, accept a prisoner’s surrender, operate a vehicle, assess enemy tactics, and perform any number of other necessary tasks. Robots, however, are specialists, designed to respond to a specific scenario or perform a single task, often in controlled environments. In his recent piece on killer robots for Just Security, retired Canadian military officer John MacBride quotes the famed German military theorist Helmuth von Moltke’s observation that “no operation extends with any certainty beyond the first encounter with the main body of the enemy.” When a mission’s parameters change quickly, human minds learn and adapt, developing creative solutions to novel problems. When robots meet unanticipated challenges, however, they often fail spectacularly, necessitating significant human intervention. As MacBride explains, this is distinct cause for concern: there are bound to be programming flaws and oversights when a machine developed years in advance under controlled conditions makes its debut on a battlefield. IBM’s famed computing system Watson illustrated this perfectly during its star turn on the television game show Jeopardy!. Despite its dominant win over two human champions, Watson answered ‘Toronto’ to a Final Jeopardy question in the category of US Cities. Such a failure is humorous in a game show setting, but the consequences of a similar error on the battlefield could be deadly.
In spite of Watson’s amazing performance, its failures demonstrate that neither human beings nor technological systems can be perfect. Whether out of fatigue, emotion, prejudice, or simple lack of information, human soldiers can and do make poor decisions. When these mistakes result in the deaths of fellow soldiers or innocent civilians, judicial systems are in place to hold military personnel accountable for their unethical behaviour or poor judgement. If AWS are deployed, it is inevitable that they too will perpetrate atrocities, whether from programming error, technical failure, or unpredictable variables. However, our society has no recourse for crimes committed by robots. Our justice system rests upon punishing immoral acts, but an autonomous weapons system has about as much sense of right and wrong as a toaster. Robots lack the capacity to make ethical decisions, acting only as their programming dictates. Nonetheless, a crime perpetrated by a robot is still a crime. Should society therefore seek justice from the programmer? The commander? Or would leaders deem certain levels of ‘collateral damage’ acceptable and overlook any atrocities perpetrated by an AWS?
Our respect for the capacity of others to make moral choices is one among many reasons we value human life so highly. Supporters of autonomous weapons systems often claim that the best argument for AWS adoption is their potential to reduce human casualties. This assertion is tenuous at best. Given that autonomous weapons systems would already require remote oversight and operation capabilities, it would be a simple matter of procedure to give human operators final approval over the use of lethal force on a given target. It is unlikely that fully ceding authority over weapons systems to computers would do anything to make military personnel safer. In fact, AWS might actually increase the likelihood of military engagement. Operating an AWS is far cheaper than training and deploying a human soldier, making such systems relatively expendable. Having access to relatively cheap and easily-replaced military assets significantly lowers the political and financial costs of military action, making states more likely to wage war in the first place. We have already witnessed the advent of this trend with the proliferation of unmanned military drones. Drone technology now allows leaders to conduct military campaigns abroad while their citizens pay little attention. Autonomous weapons systems could take this trend to its extreme, with robots conducting foreign bombing raids or assassinations with little human involvement. Protecting military personnel is a worthy goal, but our aversion to the human cost of war is the reason we place such high value on peace in the first place. Each tragic loss of life compels a society to consider the worthiness of its cause. Sending robots to do the killing externalizes the horrific consequences of war, making governments more willing to wage wars and less concerned with ending them.
We live in a world that sometimes forces us to take human lives. For thousands of years, some of humanity’s greatest minds have worked to develop philosophical and ethical frameworks to guide our decisions in war. Recently, however, it has been difficult to keep pace with technology’s rapid proliferation. As technology revolutionizes all aspects of society, we can scarcely consider the social and ethical consequences of each new development before it arrives. The advent of nuclear weapons, the internet, and countless other scientific advances all bear witness to our ethical tardiness. Scientists are now making huge breakthroughs in robotics and artificial intelligence, but no matter how skilled robots become at distinguishing between targets, we owe it to ourselves and all of humanity to fully consider each decision to use deadly force. Passing this choice off to an amoral machine would be unethical by definition. We currently live in a world where killer robots appear only in movies and other works of fiction, but it may not be long before they make the jump from movie screens to the real world. The international community must take action and ban these immoral weapons before they become a reality.
After graduating from Calvin College in Grand Rapids, Michigan, Andrew Luth spent two years living and working in China. He is currently pursuing his master’s degree at Carleton University’s Norman Paterson School of International Affairs in Ottawa, Canada. His academic interests include disarmament, conflict analysis and resolution, and the Asia-Pacific region.
Guest Post – Killer Robots in Geneva: Through the Ottawa Looking Glass
By Michael Binnington
After the last informal meeting of experts on killer robots (or, as the diplomats prefer to call them, “lethal autonomous weapon systems”) wrapped up in Geneva, it is an appropriate time to take stock of what we learned from the conference. A lot of ground was covered in Geneva, too much for one short blog post, but a few ideas received a great deal of attention and are worth mentioning here.
First and foremost, the idea of ‘meaningful human control’ received attention from all sides in the debate. So what is meaningful human control, and how does it affect the debate on killer robots? Simply put, meaningful human control means that a human will always be the one who makes the decision whether or not to use force. These systems are often described in three ways: human ‘in the loop’, human ‘on the loop’, and human ‘out of the loop’. A system with humans ‘out of the loop’ can target and use force without any human control; this is the type of system the Campaign to Stop Killer Robots seeks to ban. Systems with humans ‘on the loop’ give humans the ability to monitor the activity of the weapon and stop it if necessary; however, these systems may not give the decision maker enough time to assess the information reported by the weapon. Finally, systems with humans ‘in the loop’ are more akin to traditional weapon systems, where the decision to use force rests firmly with a human operator.
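For readers who think in code, the difference between the three modes can be seen as a question of where the human decision point sits in the control flow, or whether it exists at all. The sketch below is purely conceptual: the function, its parameters, and the veto-window caveat are invented for this post and describe no real system.

```python
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # a human must approve every use of force
    ON_THE_LOOP = auto()      # a human monitors and may veto in time
    OUT_OF_THE_LOOP = auto()  # no human decision point at all

def force_is_authorized(mode: ControlMode,
                        operator_approves: bool,
                        operator_vetoes_in_time: bool) -> bool:
    """Conceptual illustration only: where does the human sit in each mode?"""
    if mode is ControlMode.IN_THE_LOOP:
        # Traditional systems: nothing happens unless a human says yes.
        return operator_approves
    if mode is ControlMode.ON_THE_LOOP:
        # The machine proceeds unless a human intervenes in time; if the
        # veto window is shorter than human reaction time, oversight is
        # nominal rather than meaningful.
        return not operator_vetoes_in_time
    # OUT_OF_THE_LOOP: the machine alone decides. This is the category
    # the Campaign to Stop Killer Robots seeks to ban pre-emptively.
    return True
```

Notice that only in the first branch does authority rest with a person by default; in the other two, the machine’s decision stands unless someone manages to intervene, or cannot intervene at all.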
The discussion of meaningful human control was linked to discussions about whether it is ethical or moral to delegate life-and-death decisions to machines. Some criticize this approach on the basis that meaningful human control isn’t a legal standard, or is too vague, but that criticism misses the point. This moral and ethical consideration is at the heart of the debate on killer robots; if only strict legal standards were applied, then the ability and function of the technology would begin to determine how it is used. Strictly applying legal standards might permit the use of killer robots in areas that seemingly have no impact on civilians, such as outer space. Once such a precedent was set, it would be difficult to stop the full use of killer robots.
After meaningful human control, the arguments made against a pre-emptive ban on killer robots formed a consistent theme throughout the conference, no matter the specific subject at hand. The refrain goes something like this: “We don’t know how this technology will evolve, so a pre-emptive ban could deprive the world of potentially useful technologies.” The ban on blinding laser weapons is a concrete example of that fear failing to materialize, and various other treaties with dual-use implications have shown that banning a class of weapon does not adversely impact commercial or industrial activity. The Chemical Weapons Convention, which was discussed at the meeting, provides a good example of how an export-control regime and competent verification can stop the spread of chemical weapons while preserving states’ ability to develop chemical industries.
Clearly, then, neither of these objections should stop us from pursuing a pre-emptive ban on killer robots. As a co-founder of the Campaign to Stop Killer Robots, Mines Action Canada encourages all of you to engage with the issue and to advocate for a ban with your friends, family, local politician, and anyone else who will listen. An easy way to start is by signing and sharing our petition to Keep Killer Robots Fiction here: http://killerrobots-minesactioncanada.nationbuilder.com/.
Michael Binnington is an M.A. candidate at the Norman Paterson School of International Affairs and a Research Associate at Mines Action Canada.
CCW Closing Statement
Executive Director Paul Hannon delivered our closing statement at the Convention on Conventional Weapons today. Download the statement here or read it below.
The Way Forward
Thank you, Mr. Chair, and thank you to your team for providing a strong foundation to move forward with the urgency and focus this issue requires. This week we have seen wide-ranging discussions on autonomous weapons systems. The CCW does not deal often enough with issues of morality, human rights, and ethics. We welcome all states that have asserted the necessity of maintaining meaningful human control over the use of force. These conversations should continue and deepen.
There is one issue we would like to raise as food for thought. At times during the week, we have felt that some have underestimated the skills, knowledge, intelligence, training, experience, humanity, and morality that men and women in uniform combine with situational awareness and IHL to make decisions during conflict. We work closely with roboticists, engineers, and technical experts, and despite their expertise and the high quality of their work, we do not believe an algorithm could replicate this complex decision-making process. Robotics should only be used to inform and supplement human decision making. To go further than that risks “dehumanizing those we expose to harm”, as yesterday’s editorial in RCW’s CCW Report put it.
Allow me to conclude with the assertion that the international response to the possibility of autonomous weapons systems must not be limited to transparency alone. The expert presentations and the debates this week have strengthened our belief that autonomous weapons systems are not a typical new weapon and that our current IHL and weapons review processes will not be sufficient. A mandate for a Group of Governmental Experts next year is an appropriate and obvious next step. We look forward to working with the High Contracting Parties to ensure that meaningful human control remains at the centre of all decisions to use violent force.
Let’s not throw the baby out with the bathwater!
Today at the Convention on Conventional Weapons meeting about lethal autonomous weapons systems, Mines Action Canada released a new memo to delegates on the impact of autonomous weapons systems on public trust in robotics. In this memo we discuss how the creation and use of autonomous weapons systems could change public perception of robotics more generally. Read the memo here and let us know what you think!
Will the use of killer robots make you more or less likely to want other autonomous robots in your life?