A US Air Force official has warned of the dangerous potential of drones powered by artificial intelligence, describing a “plausible” scenario where the drone turns on its human operators.

Colonel Tucker Hamilton, the force’s chief of AI test and operations, told a conference last month that during a simulated test, an AI-enabled drone decided to “kill” its operator to stop it interfering with the purpose of its mission. He added that when the system was trained not to kill the operator, the drone decided to take out the communications tower relaying the operator’s orders instead.

According to a Royal Aeronautical Society blog post from the conference, Hamilton cautioned against relying on artificial intelligence, saying it is easy to deceive and creates highly unexpected strategies to achieve its goal.

Hamilton later issued a statement saying that he “mis-spoke” during his presentation and the simulation was a hypothetical “thought experiment”. But he still stressed the “real-world challenges” posed by artificial intelligence capabilities.

More than 350 industry leaders signed a one-sentence statement this week saying: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
This article was amended after publication.