Can a sound ethics policy framework mend the fractured relationships between the US Military and Silicon Valley?

By Anonymous | October 28, 2019

The U.S. Department of Defense has recently begun searching for an ethicist with a deep understanding of artificial intelligence in response to the fallout from the much-maligned Project Maven, the showcase AI project between the Pentagon and Google. Project Maven was designed to apply AI to assist intelligence officers in their analysis of drone footage, with Google's involvement ostensibly limited to non-combat uses. However, the mere association of Google with the DoD sparked a protest among roughly 3,100 Google employees who feared the technology could be used in lethal operations. They signed a petition urging Google CEO Sundar Pichai to end the project, spurring a philosophical debate over whether the tech community should contribute to military operations at all.

Google employees petition against Project Maven

The partnership between the military and the private sector dates back to the Revolutionary War, when Robert Morris used his personal funds and credit to provide supplies, food, and transportation for the Continental Army. Contractors can pivot quickly to military needs by delivering surge support, offer expertise in specialized fields, and free up military personnel, all at a lower cost than maintaining a permanent in-house capability, and this has fostered a long history of reliance on the private sector. But while industrial giants were the backbone of national defense in years past, the advent of autonomous capabilities has produced serious reservations in the most innovative sector of the American economy, with Elon Musk arguing at the South by Southwest tech conference in March 2018 that AI is more dangerous than nuclear warheads.

The push for further collaboration between the military and the private sector stems from a growing fear among U.S. military officials and respected technologists that the U.S. is at risk of losing an AI arms race to China and Russia. China has invested billions into military applications of AI, while Russia, on the heels of President Vladimir Putin's declaration that "whoever becomes the leader in this sphere will become the ruler of the world," bolstered its annual AI budget with a $2 billion investment from the Russian Direct Investment Fund (RDIF) as of May 2019. Even amid this perceived threat, AI scientists have grown increasingly concerned about the DoD's intentions and uses for artificial intelligence. The DoD has an existing policy on the role of autonomy in weapons, requiring that a human have veto power over any action an autonomous system might take in combat, but it lacks a comprehensive policy on how AI will be used across the vast range of military missions.

Enter the Joint Artificial Intelligence Center (JAIC), led by Lt. General Jack Shanahan. The JAIC is onboarding an AI ethics advisor to help shape the organization's approach to incorporating future AI capabilities. This advisor will be relied upon to provide input to the Defense Innovation Board, the National Security Council, and the Office of the Secretary of Defense on the growing concerns surrounding artificial intelligence, and to make recommendations to the Secretary of Defense aimed at bolstering trust between the DoD and the general public.

But is an ethics policy what we, the general public, should be seeking? Since 1948, members of the United Nations have been called upon to uphold the Universal Declaration of Human Rights, which broadly mirrors the ethical principles of the Belmont Report: protecting individual privacy, prohibiting discrimination, and providing other safeguards for civil liberties. This was followed in 1949 by the Geneva Conventions, which, together with their Additional Protocols, crafted a legal framework for military activities and operations, prohibiting methods of warfare that cause "superfluous injury or unnecessary suffering" (Article 35) and requiring that care be taken to spare the civilian population (Article 57). Perhaps, instead of creating an AI ethics policy, the DoD should be more transparent about its uses of AI and develop a code of conduct governing the utilization of AI, along with processes by which that code of conduct will be monitored and adhered to. Whether through a fleshed-out ethics policy or a code of conduct, the reality is that there is a need for clarity on the types of risks posed by military AI applications, and the U.S. military is positioned to lead the way in establishing confidence-building measures to diminish those dangers.

The DoD's implementation of sound ethical policies and codes of conduct, representative of the culture and values of the United States, can help bolster domestic popular support and the legitimacy of military action. Furthermore, adherence to ethical action may help the DoD develop partnerships with the private sector that leverage technological innovation, attract talented AI engineers, and strengthen alliances with global allies.
