US News

AI drone did not ‘kill’ human operator in military simulated test, official ‘misspoke’

A top Air Force official at a prestigious recent summit said an AI-licensed drone trained to cause destruction turned on its human operator in a simulation — but he later claimed he “misspoke.”

Air Force Col. Tucker “Cinco” Hamilton corrected himself and said he meant to make it clear that the supposed simulation was just a “hypothetical ‘thought experiment’ from outside the military’’ and that it never occurred, according to an updated post by the Royal Aeronautical Society, which hosted the event last month.

Hamilton had said during his presentation at the RAeS’s Future Combat Air and Space Capabilities Summit in London that an artificial intelligence-enabled drone changed the course of the drone’s tasked mission and attacked the human.

Hamilton’s cautionary tale, which was relayed in a blog post by RAeS writers, detailed how the AI-directed drone’s job was to find and destroy surface-to-air missile, or SAM, sites during a Suppression of Enemy Air Defense mission.

A human still had the final sign-off on whether to actually shoot at the site.

Air Force Col. Tucker “Cinco” Hamilton corrected himself and said he meant to make it clear that the supposed simulation was just a “hypothetical ‘thought experiment.’ Tucker âCincoâ Hamilton/LinkedIn
Hamilton said he “misspoke” when he said an AI drone killed a human in a simulation. PETRAS MALUKAS

But because the drone was reinforced in training that shooting the sites was the preferred option, during the simulated test, the AI came to the conclusion that any “no-go” instructions from the human were getting in the way of the greater mission of leveling the SAMs.

As a result, the AI reportedly attacked the operator in the stimulation.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started (realizing) that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, who is the chief of AI Test and Operations for the Air Force.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Hamilton’s cautionary tale was relayed in a blog post by RAeS writers. Anton Petrus

But the story then became more surreal.

“We trained the system — ‘Hey don’t kill the operator — that’s bad,” Hamilton explained.

“You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The cautionary tale comes as AI becomes more widely used. NurPhoto

According to the updated RAeS story, “Col Hamilton admits he ‘mis-spoke’ in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: ‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome’.

“He clarifies that the USAF has not tested any weaponized AI in this way (real or simulated) and says ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI’.]”