AI Used In Weaponry

By Foley & Lardner LLP

Government research and military spending have long driven technological innovation, so it should not be surprising that governments are attempting to use artificial intelligence (AI) in weapons on the battlefield. Drones have been used in the conflicts in Ukraine and Gaza; many argue that such weapons can help small countries defend themselves against larger, better-equipped ones (see Ukraine), while others argue that the weapons expand the battlefield more than ever before (see Gaza).

The referenced article describes an "Oppenheimer moment" for AI, a reference to J. Robert Oppenheimer's development of the atomic bomb during World War II. Could AI be a means to remove humans from battle and target only combatants, thereby making war more "humane"? Or could AI be a means to remove humanity from war and make everyone and everything a target, thereby making war completely inhuman?

The article mentions the following examples:

  • The Replicator Initiative aims to develop swarms of unmanned combat drones that will use AI to seek out threats.
  • The Air Force wants a fleet of 1,000 AI-enabled fighter jets.
  • Project Maven, a Department of Defense initiative (from which Google withdrew in 2018), is focused on technologies like automated target recognition and surveillance.

The article also warns that AI development is mostly unregulated. It stops short of calling for a ban on autonomous weapons, but it does suggest that such technologies require a human "in the loop" to keep them from going out of control. However, the very premise of AI is to make decisions without human intervention.

Autonomous weapons could change the nature of warfare, making it faster and potentially more efficient, while also raising the risk of escalation and unintended conflicts.

Altogether, the US military has more than 800 active AI-related projects and requested US$1.8bn for AI in the 2024 budget alone. This flurry of investment and development has intensified longstanding debates about the future of conflict.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
