fredag 2. juni 2023

Droner og AI kan ta livet av operatøren - DefenseNews


Selv om dette er en tenkt tanke, kan AI tenke det samme? (Red.) 


Air Force official’s musings on rogue drone targeting humans go viral

By Stephen Losey and Colin Demarest

 Jun 2, 05:41 PM

The U.S. Air Force is developing a fleet of artificial intelligence-enabled drone wingmen to fly alongside piloted fighters. But a "thought experiment" from the service that went viral highlights the possible dangers that could come from giving autonomous drones too much power without taking ethics into consideration.

WASHINGTON — The U.S. Air Force walked back comments reportedly made by a colonel regarding a simulation in which a drone outwitted its artificial intelligence training and killed its handler, after the claims went viral on social media.

Air Force spokesperson Ann Stefanek said in a June 2 statement no such testing took place, adding that the service member’s comments were likely “taken out of context and were meant to be anecdotal.”

The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “This was a hypothetical thought experiment, not a simulation.”

The killer-drone-gone-rogue episode was initially attributed to Col. Tucker “Cinco” Hamilton, the chief of AI testing and operations, in a recap from the Royal Aeronautical Society’s FCAS23 Summit in May. The summary was later updated to include additional comments from Hamilton, who said he misspoke at the conference.


RELATED


How autonomous wingmen will help fighter pilots in the next war
There’s enough technology in existence from programs that we’ve already conducted, it convinces me that’s not a crazy idea,” Air Force Secretary Frank Kendall said.

By Stephen Losey

We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton was quoted as saying in the Royal Aeronautical Society’s update. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

Hamilton’s assessment of the plausibility of rogue-drone scenarios, however theoretical, coincides with stark warnings in recent days by leading tech executives and engineers, who wrote in an open letter that the technology has the potential to wipe out humanity if left unchecked.


Hamilton is also commander of the 96th Operations Group at Eglin Air Force Base in Florida, which falls under the purview of the 96th Test Wing. Defense News on Thursday reached out to the test wing to speak to Hamilton, but was told he was unavailable for comment.

In the original post, the Royal Aeronautical Society said Hamilton described a simulation in which a drone fueled by AI was given a mission to find and destroy enemy air defenses. A human was supposed to give the drone its final authorization to strike or not, Hamilton reportedly said.

But the drone algorithms were told that destroying the surface-to-air missile site was its preferred option. So the AI decided that the human controller’s instructions not to strike were getting in the way of its mission, and then attacked the operator and the infrastructure used to relay instructions.

It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying. “We trained the system, ‘Hey don’t kill the operator, that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The Defense Department has for years embraced AI as a breakthrough technology advantage for the U.S. military, investing billions of dollars and creating the the Chief Digital and Artificial Intelligence Office in late 2021, now led by Craig Martell.

The Pentagon is seen from Air Force One as it flies overhead on March 2, 2022. (Patrick Semansky/AP)

More than 685 AI-related projects are underway at the department, including several tied to major weapon systems, according to the Government Accountability Office, a federal auditor of agencies and programs. The Pentagon’s fiscal 2024 budget blueprint includes $1.8 billion for artificial intelligence.

The Air and Space forces are responsible for at least 80 AI endeavors, according to the GAO. Air Force Chief Information Officer Lauren Knausenberger has advocated for greater automation in order to remain dominant in a world where militaries make speedy decisions and increasingly employ advanced computing.

The service is ramping up efforts to field autonomous or semiautonomous drones, which it refers to as collaborative combat aircraft, to fly alongside F-35 jets and a future fighter it calls Next Generation Air Dominance.

The service envisions a fleet of those drone wingmen that would accompany crewed aircraft into combat and carry out a variety of missions. Some collaborative combat aircraft would conduct reconnaissance missions and gather intelligence, others could strike targets with their own missiles, and others could jam enemy signals or serve as decoys to lure enemy fire away from the fighters with human pilots inside.

The Air Force’s proposed budget for FY24 includes new spending to help it prepare for a future with drone wingmen, including a program called Project Venom to help the service experiment with its autonomous flying software in F-16 fighters.

Under Project Venom, which stands for Viper Experimentation and Next-gen Operations Model, the Air Force will load autonomous code into six F-16s. Human pilots will take off in those F-16s and fly them to the testing area, at which point the software will take over and conduct the flying experiments.


Ingen kommentarer:

Legg inn en kommentar

Merk: Bare medlemmer av denne bloggen kan legge inn en kommentar.