The High Risks of AI and Robotic Weapons


Recent tragic experiences in Gaza have confirmed the fears of scientists who have long warned about the use of AI and robotics in weapon systems and warfare. AI-based targeting was supposed to make strikes more precise and thereby spare innocent civilians: the Lavender system was used to target personnel, and the Gospel system to target buildings where militants were believed to be located. In practice, this hastily deployed AI technology has resulted in widespread indiscriminate killing of innocent civilians.

This is not the only recent example of high-risk use of AI weapons. Alarming but well-founded warnings from recognized experts have been followed by growing investments by major powers in developing a wide range of robot weapons.

Civilian applications of robots have faced growing criticism over the large-scale unemployment they are likely to cause in several lines of work, in a world already suffering from the adverse impacts of jobless growth; the military use of robots, however, is likely to be far more dangerous. Yet one argument offered against checking military robot weapons (also called lethal autonomous weapons, or LAWs) is that civilian and military robotics, particularly in research and innovation, are closely intertwined. The implication is that as civilian robotics advances, the accompanying military possibilities cannot be ignored by any leading military power.

The argument runs in two directions. On the one hand, civilian advances in robotics, by cutting costs and offering other narrow advantages regardless of social costs, will inevitably spread robotics through civilian applications; on the other, the military possibilities arising from this technological development will just as inevitably be exploited by military establishments around the world. Once the military importance of robots becomes clear, military establishments also invest heavily in purely military robotics, a stage the leading military powers have already reached. Hence, despite mounting expert warnings, the trend towards ever more destructive robot weapons currently appears unstoppable. In the USA, several new start-ups have emerged to serve the Pentagon's growing willingness to invest in AI weapons, and other powers are unlikely to lag behind.

As early as 2012-13, as part of the efforts of the International Committee for Robot Arms Control, some 270 computing experts, AI specialists and engineers called for a ban on the development and deployment of weapon systems that make the decision to apply violent force autonomously, without any human control. They stated clearly that the decision to apply violent force should not be delegated to machines. These experts also questioned how devices controlled by complex algorithms would interact, warning that such interactions could create unstable and unpredictable behavior that might initiate or escalate conflicts, or cause unjustifiable and serious harm to civilian populations.

In August 2017 as many as 116 specialists from 26 countries, including some of the world’s leading robotics and artificial intelligence pioneers, called on the United Nations to ban the development and use of killer robots. They wrote,

“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

“We do not have long to act,” the letter warned. “Once this Pandora’s box is opened, it will be hard to close.”

Ryan Gariepy, the founder of Clearpath Robotics, has said,

“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapon systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

The Economist (January 27, 2017) noted in its special report titled ‘The Future of War’,

“At least the world knows what it is like to live in the shadow of nuclear weapons. There are much bigger question marks over how the rapid advances in artificial intelligence (AI) and deep learning will affect the way wars are fought, and perhaps even the way people think of war. The big concern is that these technologies may create autonomous weapon systems that can make choices about killing humans independently of those who created or deployed them.”

This special report distinguished between three types of AI or robot weapons:

(i) in the loop (with a human constantly monitoring the operation and remaining in charge of critical decisions),

(ii) on the loop (with a human supervising the machine and able to intervene at any stage of the mission), or

(iii) out of the loop (with the machine carrying out the mission without any human intervention once launched).

Fully autonomous robot weapons, the third category, are clearly the most dangerous.
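To make this taxonomy concrete, the short Python sketch below models the three levels of human control as a single decision gate. It is purely illustrative: the mode names follow the report’s terminology, but the function and its human_approves / human_vetoes placeholders are hypothetical and do not describe any real weapon system.

```python
from enum import Enum

class ControlMode(Enum):
    IN_THE_LOOP = "in the loop"          # a human makes every critical decision
    ON_THE_LOOP = "on the loop"          # a human supervises and may intervene
    OUT_OF_THE_LOOP = "out of the loop"  # no human input once launched

def engage_target(mode: ControlMode, human_approves, human_vetoes) -> bool:
    """Illustrative gate deciding whether force may be applied.

    human_approves / human_vetoes are callables standing in for a
    human operator's input; both are hypothetical placeholders.
    """
    if mode is ControlMode.IN_THE_LOOP:
        # Nothing happens unless a human explicitly authorizes it.
        return human_approves()
    if mode is ControlMode.ON_THE_LOOP:
        # The machine proceeds by default; a human can still call it off.
        return not human_vetoes()
    # OUT_OF_THE_LOOP: the machine decides alone, which is precisely
    # the case the experts quoted above want banned.
    return True
```

The contrast the sketch highlights is that in the first two modes a human judgment sits somewhere in the decision path, while in the third mode no human input is consulted at all.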

A letter warning against a coming arms race in these weapons was signed in 2015 by over 1,000 AI experts. An international campaign, the Campaign to Stop Killer Robots, works steadily towards this and related objectives. Elon Musk has described competition for AI superiority at the national level as the “most likely cause of World War 3.”

Stephen Hawking, Elon Musk and many other experts said in a joint statement that, handled badly, AI used as a weapon could be an existential threat to the human race.

Paul Scharre, an expert on autonomous weapons, has written that “collectively, swarms of robotic systems have the potential for even more dramatic, disruptive change to military operations.” One possibility he mentions is that tiny 3D-printed drones could be formed into smart clouds that permeate a building or are air-dropped over a wide area to look for hidden enemy forces.

In my novel A Day in 2071, I visualize such a situation, in which powerful elites use a force of tiny robot soldiers to suppress a revolt of the common people.

Several countries are surging ahead with rapid advances in robot weapons. In 2014 the Pentagon announced its ‘Third Offset Strategy’, with special emphasis on robotics, autonomous systems and ‘big data’, intended to help the USA maintain its military superiority. In July 2017 China presented its “Next-Generation Artificial Intelligence Development Plan”, which gives AI a crucial role as the transformative technology in civil as well as military areas, with emphasis on ‘military-civil fusion’.

The Campaign to Stop Killer Robots wants a legally binding international treaty banning LAWs. But it is questionable whether such a treaty can be effective without the signatures of the big military powers, which are pressing ahead with heavy investments in robot weapons. The efforts now being made to check robot weapons should certainly continue and be strengthened, but beyond this it is also important to look seriously at why our world, as presently organized, increasingly proves incapable of checking some of its most dangerous threats.

*


Bharat Dogra is Honorary Convener, Campaign to Save Earth Now. His recent books include Planet in Peril, Protecting Earth for Children, A Day in 2071 and Earth without Borders. He is a regular contributor to Global Research.
