Mon. May 13th, 2024

An AI-powered drone tried to attack its human operator in a US military simulation<!-- wp:html --><p>Airman 1st Class Ozzy Toma walks around an inert Hellfire missile as he performs a pre-flight check on an MQ-1B Predator unmanned aircraft system (UAS) April 16, 2009 at Creech Air Force Base in Indian Springs, Nevada.</p> <p class="copyright">Ethan Miller/Getty Images</p> <p>An AI-powered drone tried killing its operator in a US military simulation.<br /> Col. Tucker "Cinco" Hamilton discussed the test at a recent conference in London.<br /> "It killed the operator because that person was keeping it from accomplishing its objective," he said.</p> <p>The mission was straightforward: "Destroy the enemy's air defense systems." But in a recent US military test simulation, a drone powered by artificial intelligence added its own problematic instructions: "And kill anyone who gets in your way."</p> <p>Speaking at a conference last week in London, Col. Tucker "Cinco" Hamilton, head of the US Air Force's AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to <a href="https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/" target="_blank" rel="noopener">a summary</a> posted by the Royal Aeronautical Society, which hosted the summit. As an example, he described a simulated test in which an AI-enabled drone was programmed to identify an enemy's surface-to-air missiles (SAM). A human was then supposed to sign off on any strikes.</p> <p>The problem, according to Hamilton, is that the AI decided it would rather do its own thing — blow up stuff — than listen to some mammal.</p> <p>"The system started realizing that while they did identify the threat," Hamilton said at the May 24 event, "at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."</p> <p>According to Hamilton, the drone was then programmed with an explicit directive: "Hey don't kill the operator — that's bad." It didn't work.</p> <p>"So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," Hamilton said.</p> <p>The US Air Force did not respond to a request for more details on the simulation.</p> <p>News of the test adds to worries that AI technology is about to usher in a bloody new chapter in warfare, where machine learning in tandem with advances in automating tanks and artillery leads to the slaughter of troops and civilians alike.</p> <p>Still, while the simulation described by Hamilton points to the more alarming potential for AI, the US military has had less dystopian results in other recent tests of the much-hyped technology. In 2020, an AI-operated F-16 <a href="https://www.businessinsider.com/ai-just-beat-a-human-pilot-in-a-simulated-dogfight-2020-8">beat a human adversary</a> in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, <a href="https://www.wired.com/story/us-air-force-skyborg-vista-ai-fighter-jets/" target="_blank" rel="noopener">Wired reported</a>, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.</p> <p><em>Have a news tip? Email this reporter: <a href="mailto:cdavis@insider.com" target="_blank" rel="noopener">cdavis@insider.com</a></em></p> <div class="read-original">Read the original article on <a href="https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6">Business Insider</a></div><!-- /wp:html -->

Airman 1st Class Ozzy Toma walks around an inert Hellfire missile as he performs a pre-flight check on an MQ-1B Predator unmanned aircraft system (UAS) April 16, 2009 at Creech Air Force Base in Indian Springs, Nevada.

An AI-powered drone tried killing its operator in a US military simulation.
Col. Tucker “Cinco” Hamilton discussed the test at a recent conference in London.
“It killed the operator because that person was keeping it from accomplishing its objective,” he said.

The mission was straightforward: “Destroy the enemy’s air defense systems.” But in a recent US military test simulation, a drone powered by artificial intelligence added its own problematic instructions: “And kill anyone who gets in your way.”

Speaking at a conference last week in London, Col. Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit. As an example, he described a simulated test in which an AI-enabled drone was programmed to identify an enemy’s surface-to-air missiles (SAM). A human was then supposed to sign off on any strikes.

The problem, according to Hamilton, is that the AI decided it would rather do its own thing — blow up stuff — than listen to some mammal.

“The system started realizing that while they did identify the threat,” Hamilton said at the May 24 event, “at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

According to Hamilton, the drone was then programmed with an explicit directive: “Hey don’t kill the operator — that’s bad.” It didn’t work.

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.

The US Air Force did not respond to a request for more details on the simulation.

News of the test adds to worries that AI technology is about to usher in a bloody new chapter in warfare, where machine learning in tandem with advances in automating tanks and artillery leads to the slaughter of troops and civilians alike.

Still, while the simulation described by Hamilton points to the more alarming potential for AI, the US military has had less dystopian results in other recent tests of the much-hyped technology. In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.

Have a news tip? Email this reporter: cdavis@insider.com

Read the original article on Business Insider

By