Pentagon Commits to Responsible AI Warfare. China’s Policy? It’s Suspect

By Kris Osborn, Warrior Maven — Security Television Network

    September 15, 2021 (Security Television Network) — The Pentagon is deeply concerned that China will not adhere to ethical guidelines

(Washington, D.C.) While laying out specifics for a new set of core principles for “Responsible AI,” Secretary of Defense Lloyd Austin expressed grave concern that China is pursuing a vastly different, and extremely concerning, approach to AI.

Speaking at the Global Emerging Technology Summit of The National Security Commission on Artificial Intelligence, Austin warned that the Chinese are hoping to dominate global AI by 2030. Austin was also clear that Chinese leaders view the development and application of AI in a much more aggressive, and arguably unethical, way.

“Beijing already talks about using AI for a range of missions, from surveillance to cyberattacks to autonomous weapons. In the AI realm as in many others, we understand that China is our pacing challenge,” Austin said at the event, according to a Pentagon transcript.

Concerns With China

The most immediate concern is Austin’s reference to AI-enabled autonomous weapons, given that China most likely does not adhere to the ethical guidelines that are fundamental to U.S. defense policy.

For example, despite the rapid technological progress increasingly making it possible for platforms to find, track and destroy enemy targets without human intervention, the Pentagon is holding firm with its existing doctrine that any decision about the use of lethal force needs to be made by a human.

However, AI-empowered systems are now technically capable of autonomously finding targets, sending otherwise disparate pools of sensor data to a central database, and making instant determinations about target specifics. Extending this cycle, armed platforms have an evolving ability to take this maturing technology a step further and fire upon or destroy a target without human intervention.

U.S. weapons developers are likely concerned that Chinese military and political leaders will not constrain AI capabilities within any ethical parameters, a scenario that massively increases the risk to U.S. forces and other U.S. assets.

Nonetheless, Austin was clear that the Pentagon will continue its adherence to what he called defining principles of “Responsible AI.”

“Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable,” Austin said. “We’re going to use AI for clearly defined purposes. We’re not going to put up with unintended bias from AI. We’re going to watch out for unintended consequences.”

Austin also added that AI-oriented weapons developers will keep a close eye on how technology is evolving, maturing and being applied.

“We’re going to immediately adjust, improve, or even disable AI systems that aren’t behaving the way that we intend,” Austin said.

There are also ongoing discussions about non-lethal applications of AI, such as purely defensive uses like interceptor missiles or drone defenses.
