The race for killer robots

An MQ-9 Reaper unmanned aerial vehicle prepares to land after a mission in support of Operation Enduring Freedom in Afghanistan. The Reaper can carry both precision-guided bombs and air-to-ground missiles. (U.S. Air Force photo/Staff Sgt. Brian Ferguson)

The international arms trade is at the threshold of a new bonanza as the world’s major powers race to explore how to kill their enemies without losing their own soldiers. This utopian wish of rulers ever since warfare began is now nearing fulfilment.

Author: Brij Khindaria

Russian President Vladimir Putin, who commands the world’s most lethal military after the United States, asserted in September that future wars could consist of battles among autonomous drones. Referring to artificial intelligence (AI), which is the driving force of unmanned robots, he warned that “the one who becomes the leader in this sphere will be the ruler of the world.”

Many red flags were raised in 2017 about artificial intelligence because of the rising capabilities of killer robots, formally called Lethal Autonomous Weapons Systems, or LAWS. But the core issue is warfare, not technology.

Modern wars no longer seem to have a start or an end, unlike the great wars of the 20th century and earlier. Warfare, whether against terrorists, insurgents or states, always uses the best available technologies; LAWS could be the inevitable next step in this continuum of high-tech weapons.

A race is underway because LAWS are theoretically capable of making split-second decisions to kill and destroy much faster than human soldiers. Countries with such weapons could win non-nuclear wars with minimal or no damage to their own warriors, people and territory.

So they may not hesitate to launch pre-emptive wars against countries without LAWS. Weaker non-nuclear states would have little or no deterrent capability and might have to surrender to a threatened LAWS attack without a bullet being fired.

The biggest winners from this greater military insecurity will be those who make and sell LAWS, which are likely to be exorbitantly expensive. Companies in the US, Russia, China and a few European countries would get windfalls.

Clarity is urgently needed on core problems, including whether LAWS can be trusted to distinguish between friendly forces, innocent civilians and armed enemies, given that they are capable of self-learning on the go and, unlike humans, are free from self-doubt.

There are serious military and legal questions about accountability for mistakes or massacres in the fog of war, and ethical issues concerning intelligent machines overruling human decisions.

Human Rights Watch thinks even the cleverest killer robot “would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians”.

Nobody seems precisely clear on how Lethal Autonomous Weapons Systems should be characterized and defined. The International Committee for Robot Arms Control suggests a simple definition: systems that “once launched can select targets and apply violent force without meaningful human control”. But agreement is still out of sight.

The Campaign to Stop Killer Robots has called for international negotiations on a legally binding agreement by the end of 2019 to ban the development, production, and use of fully autonomous weapons. But that is far from certain because LAWS are driven by AI, which can deliver many vital civilian benefits and create much-desired jobs.

PricewaterhouseCoopers said in a 2017 report that technology related to AI is expected to increase global economic output by 14 per cent or US$15.7 trillion by 2030. The Chinese economy could see a 26 per cent boost.

China, the world’s third military power, is rushing to catch up with the US in artificial intelligence. In his keynote speech to the ruling Communist Party Congress in October 2017, President Xi Jinping declared that China would become an “innovation center for AI” by 2030. The State Council has laid out goals for a US$150 billion domestic AI industry in coming years.

“We need to speed up building China into a strong country with advanced manufacturing, pushing for deep integration between the real economy and advanced technologies including internet, big data, and artificial intelligence,” Xi said.

But getting there may not be easy. Although China’s spending on technology R&D rose 10.6 per cent to US$238.1 billion or 2.1 per cent of GDP in 2016, the Wuzhen Institute estimated financing received by the AI sector at US$2.6 billion in 2012-2016 compared with US$17.9 billion in the US. China also has only 709 AI companies compared with 2,905 in the US.

Remarkably, AI has been driven by private enterprise rather than defense department funding, unlike earlier innovations such as the Internet, Wi-Fi and remote guidance systems. It was born in commercial applications led by companies like Google, Amazon and Apple: driverless cars, autonomous flying vehicles, and drones for delivering packages and mapping hard-to-reach territories and ocean beds.

All artificial intelligence is dual-use and can be adapted to killer robots. Early examples include the Taranis, an unmanned combat aircraft being developed by BAE, and the autonomous SGR-A1 robot sentry gun made by Samsung and deployed in South Korea, which can detect and kill targets 2 km away. Autonomous tanks, defensive systems for warships and more lethal drones are also being deployed and improved.

None is truly autonomous yet, since a human controller makes the key decisions. But 126 renowned AI experts sounded loud alarms in a 2017 letter signed, among others, by Elon Musk, founder of Tesla, SpaceX and OpenAI (USA), Mustafa Suleyman, founder of Google’s DeepMind (UK), and Jürgen Schmidhuber, leading deep-learning expert and founder of Nnaisense (Switzerland).

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter said.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

They “implored” members of the 1980 Convention on Certain Conventional Weapons (CCW) to “find a way to protect us all from these dangers”. The CCW has 125 contracting governments, including the US, Russia, China, France and Britain, the five veto-holders in the United Nations Security Council. It restricts the use of “certain conventional weapons” that are “deemed to be excessively injurious or to have indiscriminate effects”.

In 2016, the CCW set up a new Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems, which held its first meeting in Geneva in November 2017, attended by 450 delegates from companies, think tanks, nongovernmental groups and 86 governments. They held early-stage discussions ranging from the characteristics of LAWS to an outright ban, lesser limitations or no action yet. About 22 governments supported calls for an outright ban.

The precedent for banning weapons before they can be acquired or used was set by a 1995 protocol of the CCW that prohibits blinding lasers. Opponents of killer robots want a similar pre-emptive ban, but they face an uphill struggle.