AI Applications in the Military

By Stephen Holtz | March 27, 2020

In recent years, countries across the world have begun developing artificial intelligence and machine learning applications for their militaries. Seven key countries lead in military applications of AI: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea, each researching and developing weapons systems with ever greater autonomy.

This development makes it urgent for leaders to examine the legal and ethical implications of the technology. Potential applications range from optimization routines for logistics planning to autonomous weapons systems that can identify and attack targets with little or no human intervention.
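To make the benign end of that spectrum concrete, here is a minimal sketch of a logistics-planning optimization expressed as a linear program in SciPy. The depots, bases, costs, and tonnages are invented purely for illustration and are not drawn from any real military system.

```python
# A toy logistics-planning optimization: ship supplies from two depots
# to three bases at minimum cost. All figures are invented for illustration.
from scipy.optimize import linprog

# Cost per tonne to ship from each depot to each base.
costs = [4, 6, 9,   # depot A -> bases 1, 2, 3
         5, 3, 7]   # depot B -> bases 1, 2, 3

# Each depot can ship at most its stock (inequality constraints).
A_ub = [[1, 1, 1, 0, 0, 0],   # depot A capacity
        [0, 0, 0, 1, 1, 1]]   # depot B capacity
b_ub = [80, 70]               # tonnes available at A, B

# Each base must receive exactly its demand (equality constraints).
A_eq = [[1, 0, 0, 1, 0, 0],   # base 1
        [0, 1, 0, 0, 1, 0],   # base 2
        [0, 0, 1, 0, 0, 1]]   # base 3
b_eq = [50, 40, 30]           # tonnes required at bases 1, 2, 3

result = linprog(costs, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(result.x.reshape(2, 3))  # optimal tonnage on each depot->base route
print(result.fun)              # total shipping cost
```

The same pattern scales to real fleets and supply chains; the ethically fraught questions arise at the other end of the spectrum, where the objective function involves targets rather than tonnes.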

The debate has already begun on several fronts. For example, “[t]he Canadian Armed Forces is committed to maintaining appropriate human involvement in the use of military capabilities that can exert lethal forces.” Unfortunately for the discussion at hand, the Canadian Armed Forces does not define what ‘appropriate human involvement’ means.

China has proposed a ban on the use of AI in offensive weapons, but appears to want to keep the capability for defensive weapons.

Austria has openly called for a ban on weapons that lack meaningful human control over critical functions, including target selection and engagement.

South Korea has deployed the Super aEgis II sentry gun, which can identify, track, and destroy moving targets at a range of 4 kilometers. The system can theoretically operate without human intervention but has been fielded under human control since 2010.

Russia has perhaps been the most aggressive in its thinking about military applications of AI, having proposed concepts such as AI-guided missiles that can switch targets mid-flight, autonomous operating systems that give UAVs the ability to ‘swarm’, autonomous and semi-autonomous combat systems that can make their own judgements without human intervention, unmanned tanks and torpedo boats, robot soldiers, and ‘drone submarines.’

While the United States has multiple AI combat programs in development, including an autonomous warship, the US Department of Defense has put in place a directive requiring that a human operator be kept in the loop whenever an autonomous weapons system takes human life. The directive implies that the rules of engagement that govern conventional warfare also apply to autonomous systems.

The United Kingdom applies similar thinking in opposing a ban on lethal autonomous weapons, arguing that current international humanitarian law already provides sufficient regulation in this area. The UK armed forces also exert human oversight and control over all weapons they employ.

The challenge in developing ethical and legal systems to manage autonomous weapons is that game theory is at play: the debate is not simply about what is right and wrong, but about who can exert power and influence over others. Vladimir Putin is quoted as having said in 2017 that “Artificial Intelligence is the future, not only for Russia but for all humankind… Whoever becomes the leader in this sphere will become the ruler of the world.” With the stakes so succinctly put by Russia’s president, players need to evaluate the game theory before deciding on their next move.

Clearly, in a world where all parties cooperate and can be trusted to abide by the agreed rules in deed and intent, the optimal solution is for each country to devise methods of reducing waste, strengthening its borders, and learning from the others’ solutions. In this world, the basic ethical principles of the Belmont Report are useful for directing the research and development of military applications of AI. Respect for Persons would lead militaries to reduce waste through optimization and to build defensive weapons systems. Beneficence and justice would lead them to focus on the disaster-response functions they are all too often called upon to fulfill. Unfortunately, we do not always live in this world.

Should nations assess that collaborating with other nations exposes them to exploitation and domination by bad actors, they will develop a combination of defensive, offensive, and counter-AI measures that breach the principles shared in the Belmont Report.
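The cooperate-or-defect logic of the preceding two paragraphs is the structure of the classic prisoner’s dilemma. The following toy sketch, with payoff numbers invented purely for illustration, shows why rational players can end up defecting even though mutual cooperation leaves everyone better off.

```python
# A toy prisoner's-dilemma payoff table for the arms-race argument above.
# Payoffs are illustrative "security scores", not data from any source;
# higher is better for that nation.
payoffs = {
    # (our move, their move): (our payoff, their payoff)
    ("cooperate", "cooperate"): (3, 3),  # both restrain autonomous weapons
    ("cooperate", "defect"):    (0, 5),  # we restrain, they exploit us
    ("defect",    "cooperate"): (5, 0),  # we exploit their restraint
    ("defect",    "defect"):    (1, 1),  # unchecked arms race
}

for our_move in ("cooperate", "defect"):
    # Our payoff under each possible move by the other nation.
    outcomes = [payoffs[(our_move, theirs)][0]
                for theirs in ("cooperate", "defect")]
    print(our_move, outcomes)

# Whatever the other nation does, "defect" pays more (5 > 3 and 1 > 0),
# so both sides defect and land on (1, 1) -- worse for each than the
# cooperative (3, 3). This is why trust and verification, not ethical
# principles alone, determine the outcome.
```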

[Photo: Sea Hunter, an entirely new class of unmanned ocean-going vessel, gets underway on the Willamette River following a christening ceremony in Portland, Ore., April 7, 2016. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the Office of Naval Research (ONR), is testing the vessel and several innovative payloads under the Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program, with the goal of transitioning the technology to Navy operational use once fully proven. U.S. Navy photo by John F. Williams.]

Perhaps the most disturbing possibilities for autonomous weapons systems involve genocide committed by a faceless, nameless machine that has been disowned by every nation and private individual. Consider the ‘little green men’ who have fought in the Donbas region of Ukraine since 2014. Consider also the genocides of the last fifty years in the Balkans, Rwanda, Sudan, and elsewhere. Now combine the two stories: groups are targeted and killed, and no human can be tied to the killings. Scenarios like these should push the world toward broader regulatory systems in which the humans capable of developing such systems are identified, registered, and subject to codes of ethics. Further, these scenarios call for a global response force to combat autonomous weapons systems should they be put to their worst uses. Finally, that response force must develop the capability to conduct forensic investigations of rogue or disowned autonomous weapons systems to determine the responsible party and hold it to account.

Works Cited

https://mwi.usma.edu/augmented-intelligence-warrior-artificial-intelligence-machine-learning-roadmap-military/

https://business.financialpost.com/pmn/business-pmn/two-of-canadas-ai-gurus-warn-of-war-by-algorithm-as-they-win-tech-nobel-prize

https://ploughshares.ca/2019/05/more-clarity-on-canadas-views-on-military-applications-of-artificial-intelligence-needed/

https://www.researchgate.net/publication/335422076_Militarization_of_AI_from_a_Russian_Perspective

https://futureoflife.org/ai-policy-russia/?cn-reloaded=1

Russian AI-Enabled Combat: Coming to a City Near You?

https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF

https://smallwarsjournal.com/jrnl/art/emerging-capability-military-applications-artificial-intelligence-and-machine-learning

https://www.cfc.forces.gc.ca/259/290/405/192/elmasry.pdf

https://www.cfc.forces.gc.ca/259/290/308/192/macdonald.pdf

The Army Needs Full-Stack Data Scientists and Analytics Translators 

https://www.eda.europa.eu/webzine/issue14/cover-story/big-data-analytics-for-defence

https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race
