The experience of the war in Ukraine - and, to a lesser extent, conflicts in the Middle East - has increased the perceived need for AI that can outwit jamming and other electronic countermeasures.
AI now makes it possible to give weapons autonomous decision-making authority. The likelihood that autocratic regimes and non-state actors could use such weapons has made these developments more urgent. JL
Eric Lipton reports in the New York Times:
It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.
But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.
That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”
But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The United States, Russia, Australia, Israel and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.
The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon.
“We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the United Nations, told diplomats who were packed into a basement conference room recently at the U.N. headquarters in New York.
The debate over the risks of artificial intelligence has drawn new attention in recent days with the battle over control of OpenAI, perhaps the world’s leading A.I. company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of A.I. in decisions about deploying nuclear weapons.
Against that backdrop, the question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the U.N. simply to adopt nonbinding guidelines, the position supported by the United States.
“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.
Mr. Dorosin and members of the U.S. delegation, which includes a representative from the Pentagon, have argued that instead of a new international law, the U.N. should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.
But the position being taken by the major powers has only increased the anxiety among smaller nations, who say they are worried that lethal autonomous weapons might become common on the battlefield before there is any agreement on rules for their use.
“Complacency does not seem to be an option anymore,” Ambassador Khalil Hashmi of Pakistan said during a meeting at U.N. headquarters. “The window of opportunity to act is rapidly diminishing as we prepare for a technological breakout.”
Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.
The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.
“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.
Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.
Deputy Defense Secretary Kathleen Hicks announced this summer that the United States military will “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitates that the United States “leverage platforms that are small, smart, cheap and many.”
The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.
What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.
The United States has already adopted voluntary policies that set limits on how artificial intelligence and lethal autonomous weapons will be used, including a Pentagon policy revised this year called “Autonomy in Weapons Systems” and a related State Department “Political Declaration on Responsible Use of Artificial Intelligence and Autonomy,” which it has urged other nations to embrace.
The American policy statements “will enable nations to harness the potential benefits of A.I. systems in the military domain while encouraging steps that avoid irresponsible, destabilizing, and reckless behavior,” said Bonnie Denise Jenkins, a State Department under secretary.
The Pentagon policy prohibits the use of any new autonomous weapon or even the development of them unless they have been approved by top Defense Department officials. Such weapons must be operated in a defined geographic area for limited periods. And if the weapons are controlled by A.I., military personnel must retain “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”
At least initially, human approval will be needed before lethal action is taken, Air Force generals said in interviews.
But Frank Kendall, the Air Force secretary, said in a separate interview that these machines will eventually need to have the power to take lethal action on their own, while remaining under human oversight in how they are deployed.
“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said. He added, “I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
Thomas X. Hammes, a retired Marine officer who is now a research fellow at the Pentagon’s National Defense University, said in an interview and a recent essay published by the Atlantic Council that it is a “moral imperative that the United States and other democratic nations” build and use autonomous weapons.
He argued that “failing to do so in a major conventional conflict will result in many deaths, both military and civilian, and potentially the loss of the conflict.”
Some arms control advocates and diplomats disagree, arguing that A.I.-controlled lethal weapons that do not have humans authorizing individual strikes will transform the nature of warfighting by eliminating the direct moral role that humans play in decisions about taking a life.
These A.I. weapons will sometimes act in unpredictable ways, and they are likely to make mistakes in identifying targets, like driverless cars that have accidents, these critics say.
The new weapons may also make the use of lethal force more likely during wartime, since the military launching them would not be immediately putting its own soldiers at risk, or they could lead to faster escalation, the opponents have argued.
Arms control groups like the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland and Costa Rica, have proposed a variety of limits.
Some would seek to globally ban lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control,” and that they must be used in limited areas for specific amounts of time.
Mr. Kmentt, the Austrian diplomat, conceded in an interview that the U.N. has had trouble enforcing existing treaties that set limits on how wars can be waged. But there is still a need to create a new legally binding standard, he said.
“Just because someone will always commit murder, that doesn’t mean that you don’t need legislation to prohibit it,” he said. “What we have at the moment is this whole field is completely unregulated.”
But Mr. Dorosin has repeatedly objected to proposed requirements that the United States considers too ambiguous or is unwilling to accept, such as calling for weapons to be under “meaningful human control.”
The U.S. delegation’s preferred language is “within a responsible human chain of command.”
He said it is important to the United States that the negotiators “avoid vague, overarching terminology.”
Mr. Vorontsov, the Russian diplomat, took the floor after Mr. Dorosin during one of the debates and endorsed the position taken by the United States.
“We understand that for many delegations the priority is human control,” Mr. Vorontsov said. “For the Russian Federation, the priorities are somewhat different.”
The United States, China and Russia have also argued that artificial intelligence and autonomous weapons might bring benefits by reducing civilian casualties and unnecessary physical damage.
“Smart weapons that use computers and autonomous functions to deploy force more precisely and efficiently have been shown to reduce risks of harm to civilians and civilian objects,” the U.S. delegation has argued.
Mr. Kmentt in early November won broad support for a revised plan that asked the U.N. secretary general’s office to assemble a report on lethal autonomous weapons, but it made clear that in deference to the major powers the detailed deliberations on the matter would remain with a U.N. committee in Geneva, where any single nation can effectively block progress or force language to be watered down.
Last week, the Geneva-based committee agreed at the urging of Russia and other major powers to give itself until the end of 2025 to keep studying the topic, one diplomat who participated in the debate said.
“If we wait too long, we are really going to regret it,” Mr. Kmentt said. “Soon enough, it will be cheap, easily available, and it will be everywhere. And people are going to be asking: Why didn’t we act fast enough to try to put limits on it when we had a chance to?”