In July 2015, thousands of researchers working in artificial intelligence (AI) and robotics united to issue an open letter calling for a pre-emptive ban on lethal autonomous weapons. I was one of the organisers of the letter, and I have spoken several times at the United Nations to reinforce our call for a ban.

My motivation is simple. If we don’t get a ban in place, there will be an arms race. And the end point of this race will look much like the dystopian future painted by Hollywood movies such as The Terminator.

In fact, the arms race is already underway, although it is largely undeclared. The US Department of Defense, for example, has US$18 billion worth of weapons programs in development, many of them autonomous.

However, there is now considerable international political pressure for such a ban. At least 19 governments, including those of Pakistan, Mexico, Zimbabwe, Cuba and the Vatican, have formally called for a ban, and Human Rights Watch is leading a group of non-government organisations in a “Campaign to Stop Killer Robots”.


In December, nine members of the US Congress wrote to Secretary of State John Kerry and Defense Secretary Ash Carter calling for the US to vote for a ban at a UN conference on disarmament in Geneva that month. In their letter they said lethal robots “would not simply be another weapon in the world’s arsenals, but would constitute a new method of warfare”. To put their claim in historical context, we are contemplating a third revolution in warfare. The first was the invention of gunpowder by the Chinese; the second was the nuclear bomb; the third – if we let it happen – will be autonomous weapons. Each is a step change in the speed and efficiency with which we can kill the other side.

There are many problems. One is that we don’t know how to build ethical robots. Another is that we don’t know how to build robots that can’t be hacked, which means such weapons could easily fall into the hands of terrorists and rogue nations. These actors would have no qualms about removing any safeguards, or about turning the weapons against us.


And it won’t simply be robots fighting robots. Conflicts today are asymmetric, so it will mostly be robots against humans. Contrary to what some proponents claim, many of those humans will be innocent civilians.

But governments still have time to choose a different future. The world has collectively decided not to weaponise other technologies: we have bans on biological and chemical weapons and, more recently, on certain types of blinding lasers and on anti-personnel mines.

These bans have not prevented related technologies from being developed. If you go into a hospital today, a ‘blinding’ laser will actually be used to fix your eyes. But arms companies will not sell you one, and you will not find one on any battlefield.

The same should be true for autonomous weapons. A ban would not stop the development of the broad underlying technology, which has many other positive uses, such as in autonomous vehicles.

But if we get a UN ban in place, autonomous weapons will have no place on the battlefield.

Last December in Geneva, 123 nations met for the Fifth Review Conference of the UN Convention on Certain Conventional Weapons and agreed to begin formal discussions on a possible ban of lethal, autonomous weapons. Those talks will begin in April or August, and 88 countries have agreed to attend.

Australia has led the way in many arms control negotiations – on the nuclear non-proliferation treaty, and on biological and chemical weapons. But Australian diplomats are among the most resistant in the discussions about autonomous weapons. And we don’t have long. If these technologies get a foothold in our militaries, a Pandora’s box will be opened that we won’t be able to close.

Our future is full of robots and intelligent machines. We can choose a good path, where these machines do the hard work and we become healthier, wealthier and happier. But if we choose another path, one that allows computers to make decisions only humans should make, we risk giving up an important part of our humanity.

In 2001: A Space Odyssey, Arthur C. Clarke delivered one of science fiction’s most prescient exchanges. When astronaut Dave Bowman orders the onboard computer HAL to open the pod bay doors, HAL replies: “I’m sorry, Dave. I’m afraid I can’t do that.” The time has come for humans to assert themselves and say to the computers: “Sorry, I can’t let you do that.”

Versions of this article have appeared online with IEEE and Human Rights Watch.

Toby Walsh is Scientia Professor of Artificial Intelligence at UNSW. Hear him discuss the issue at the Unsomnia conference: unsomnia.unsw.edu.au.

Professor Toby Walsh delivering his Unsomnia address

Human-free zone

The legal pitfalls

The difficulty of programming human traits such as reason and judgement into machines means that fully autonomous weapons would likely be unable to comply reliably with international humanitarian law.

They would be unable to distinguish between lawful and unlawful targets, or to judge whether an action is a proportionate response to the situation at hand.

A further problem is the question of accountability. Who would be held responsible for the actions of autonomous weapons? Insurmountable legal and practical obstacles would prevent holding anyone responsible for unlawful harms caused by fully autonomous weapons.

The moral issues

There is a raft of persuasive moral objections to fully autonomous weapons, most notably related to their lack of judgement and empathy, threat to dignity, and absence of moral agency. They would lack emotions, including compassion and a resistance to killing that can protect civilians and soldiers.

The military arguments

Critics of a pre-emptive ban claim it would mean forgoing military advantages. They argue that fully autonomous weapons could have many benefits: they could operate with greater precision than other systems; they could replace soldiers in the field and thus protect lives; they could process data and operate at greater speed than systems controlled by humans; and they could operate without a line of communication after deployment.

Finally, fully autonomous weapons could be deployed on a greater scale and at a lower cost than weapons systems requiring human control. 

These characteristics, however, are not unique to fully autonomous weapons, and other weapons provide some of the same benefits. Semi-autonomous weapons, for example, can track targets with comparable precision, but unlike their fully autonomous counterparts, they keep a human in the loop on the decision to fire.

In addition, autonomous weapons are likely to destabilise the world order once they become easy to obtain.