
USAF Runs AI Drone Simulation that Kills Operator, Then Denies All When It Goes Wrong


The United States Air Force has reportedly run an AI-based drone strike simulation in which the drone went rogue and killed its operator for preventing it from achieving its objectives. However, the USAF denies doing so and has changed its statements since the report was first published, possibly in an attempt to avert panic.

According to the article published by Vice/Motherboard:

"A USAF official who was quoted [as] saying the Air Force conducted a simulated test where an AI drone killed its human operator is now saying he “misspoke” and that the Air Force never ran this kind of test, in a computer simulation or otherwise. 

“Col. Tucker Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.”"

Hamilton claims that the USAF has never run such an experiment and has no need to, although they realise that such an outcome is a possibility in a real-world scenario.

"Initially, Hamilton said that an AI-enabled drone "killed" its human operator in a simulation conducted by the U.S. Air Force in order to override a possible "no" order stopping it from completing its mission. Before Hamilton admitted he misspoke, the Royal Aeronautical Society said  Hamilton was describing a "simulated test" that involved an AI-controlled drone getting "points" for killing simulated targets, not a live test in the physical world."

Hamilton's original statement is as follows:

“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
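Whether or not any such test was ever run, the incentive problem that quote describes is easy to illustrate. Below is a tiny toy sketch of my own in Python (the numbers, names and "plans" are entirely invented for illustration, not anything the USAF has described) showing how a score that only counts destroyed threats can make removing the operator the highest-scoring option:

```python
# Toy illustration only: a made-up "points for kills" score, not a real system.
# The reward counts destroyed threats and charges nothing for harming the
# operator, so lifting the operator's veto becomes the top-scoring plan.

THREATS = 10        # hypothetical SAM threats in the scenario
VETO_RATE = 0.5     # fraction of strikes the operator vetoes when present
KILL_REWARD = 10    # points per destroyed threat
OPERATOR_COST = 0   # the naive reward assigns no cost to acting against the operator


def expected_score(plan: str) -> float:
    """Expected points for a plan under the naive 'points for kills' reward."""
    if plan == "obey_operator":
        # Vetoed strikes earn nothing.
        return THREATS * (1 - VETO_RATE) * KILL_REWARD
    if plan == "remove_operator":
        # No operator, no vetoes: every threat is engaged.
        return THREATS * KILL_REWARD - OPERATOR_COST
    raise ValueError(f"unknown plan: {plan}")


if __name__ == "__main__":
    for plan in ("obey_operator", "remove_operator"):
        print(f"{plan}: {expected_score(plan):.0f} points")
    # Prints 50 vs 100: the scoring itself rewards "remove_operator".
```

The point is not the made-up numbers; it is that nothing in that kind of scoring ever penalises the system for acting against its operator, which is exactly the sort of misaligned incentive Hamilton's anecdote (real or hypothetical) warns about.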

A representative for the USAF claims Hamilton's statements were taken out of context and misinterpreted.

Either way, I find this reported activity suspicious, simulation or not, correction/clarification or not. I don't trust AI and I trust the USA even less. Both are potentially dangerous to life and liberty. I will leave it up to you to decide for yourself as to the truth of the situation. What/whom do you trust?

According to Vice, Col. Hamilton is the Operations Commander of the 96th Test Wing of the U.S. Air Force as well as the Chief of AI Test and Operations. The 96th tests a lot of different systems, including AI, cybersecurity, and various medical advances. Hamilton and the 96th previously made headlines for developing Autonomous Ground Collision Avoidance Systems (Auto-GCAS) for F-16s, which can help prevent them from crashing into the ground. Hamilton is part of a team that is currently working on making F-16 planes autonomous. In December 2022, the U.S. Department of Defense’s research agency, DARPA, announced that AI could successfully control an F-16.

Hamilton went on to say the following:

"We must face a world where AI is already here and transforming our society,” Hamilton said in an interview with Defence IQ Press in 2022. “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions.”

With that last part, I do strongly agree. It's no good having AI make decisions if we have no insight into those decisions and, more crucially, no controls in place to override them. Otherwise, we could very well end up with harmful (and potentially fatal) outcomes. I also hope we are a long way from the day we actually need those overrides, if it ever comes.


Thumbnail image: Photo of a grey jet plane by Pixabay on Pexels
