Responsible use of AI in the military? US publishes declaration outlining principles

[Image: A soldier being attacked by flying 1s and 0s in a green data center. Credit: Getty Images]

On Thursday, the US State Department issued a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” calling on nations that develop military AI to deploy it ethically and responsibly. The document sets out 12 best practices for the development of military AI capabilities and emphasizes human accountability.

The declaration coincides with the US taking part in an international summit on responsible use of military AI in The Hague, Netherlands. Reuters called the conference “the first of its kind.” At the summit, US Under Secretary of State for Arms Control Bonnie Jenkins said, “We invite all states to join us in implementing international norms, as it pertains to military development and use of AI” and autonomous weapons.

In a preamble, the US declaration outlines that an increasing number of countries are developing military AI capabilities that may include the use of autonomous systems. This trend has raised concerns about the potential risks of using such technologies, especially when it comes to complying with international humanitarian law.

