SkyNet
Watch: An AI Drone ‘Attacked the Operator in the Simulation’
A U.S. Air
Force MQ-9 Reaper assigned to the 49th Wing lands at Marine Corps Air Station
Kaneohe Bay, Hawaii during Rim of the Pacific 2022 exercises, July 6, 2022.(Lance Corporal Haley Fourmet Gustavsen/U.S.
Marine Corps)
By JIM GERAGHTY
June 1, 2023 3:34 PM
All Our Opinion in Your Inbox
NR Daily is delivered right to you every afternoon. No charge.
Maybe we won’t ever have to worry about Chinese
experiments in artificial intelligence ;
maybe our own military experiments in artificial intelligence will get here
first and create their own problems. At a recent Royal Aeronautical Society
defense conference, a U.S.
Air Force colonel described a simulated test in
which an AI-enabled drone “decided that ‘no-go’ decisions from the human were
interfering with its higher mission – killing SAMs – and then attacked the
operator in the simulation.”
As might be expected artificial intelligence
(AI) and its exponential growth was a major theme at the conference, from
secure data clouds, to quantum computing and ChatGPT. However, perhaps one of
the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the
Chief of AI Test and Operations, USAF, who provided an insight into the
benefits and hazards in more autonomous weapon systems. Having been
involved in the development of the life-saving Auto-GCAS system for F-16s
(which, he noted, was resisted by pilots as it took over control of the
aircraft) Hamilton is now involved in cutting-edge flight test of autonomous
systems, including robot F-16s that are able to dogfight. However, he cautioned
against relying too much on AI noting how easy it is to trick and deceive. It
also creates highly unexpected strategies to achieve its goal.
He
notes that one simulated test saw an AI-enabled drone tasked with a SEAD
mission to identify and destroy SAM sites, with the final go/no go given by the
human. However, having been ‘reinforced’ in training that
destruction of the SAM was the preferred option, the AI then decided that
‘no-go’ decisions from the human were interfering with its higher mission –
killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and
target a SAM threat. And then the operator would say yes, kill that threat. The
system started realising that while they did identify the threat at times the
human operator would tell it not to kill that threat, but it got its points by
killing that threat. So what did it do? It killed the operator. It killed the
operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system –
‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do
that’. So what does it start doing? It starts destroying the communication
tower that the operator uses to communicate with the drone to stop it from
killing the target.”
This example, seemingly plucked from a
science fiction thriller, mean that: “You can’t have a conversation about
artificial intelligence, intelligence, machine learning, autonomy if you’re not
going to talk about ethics and AI” said Hamilton.
Ingen kommentarer:
Legg inn en kommentar
Merk: Bare medlemmer av denne bloggen kan legge inn en kommentar.