A Blog by Jonathan Low

 

Jan 29, 2024

The Reason AI Is Already Playing An Important Role In the Ukraine War

The war in Ukraine has confirmed the growing dominance of drones - in the air, at (and under) the sea - and, increasingly, on land. 

All are controlled to various degrees by AI wielded by remote human operators. But soon the operators may become even more remote and, possibly, replaced themselves by computers. JL 

Phillips O'Brien reports in his substack:

AI is already appearing on the battlefield of Ukraine. I see only a very limited role for crewed aircraft in the future of war. Having humans in an aircraft in a war adds enormous pressure to decision-making. Humans are not only controlling the aircraft, they are targets. They need to keep their own lives protected, and that limits the time and place in which they can operate. Looking at the Russo-Ukraine War, the prospect of losing pilots on both sides is so high that fixed-wing crewed aircraft are being pushed further and further away from the battlefield. If you don’t have a human in a plane it can be smaller, less expensive, stay in the air much longer and operate more flexibly.

OK—once again I lost control. I started writing what was supposed to be the completion of the AI piece, with some thoughts on how AI might work through the rest of the Ukraine War. However, two things intervened. My discussion of AI and upcoming US developments became long enough as it was, so the end-of-the-Ukraine-War element will have to wait. I also had an itch I wanted to scratch by writing a short analysis of the GOP primary result in New Hampshire (I actually don’t think it was great for Trump). That’s at the end—feel free to disregard.

At least you can’t complain about a lack of content with this substack!

AI had been a point of discussion for decades before the Russian full-scale invasion. The debate was to a large degree one of control: should the ability to decide what, when and how to attack a target always be the preserve of humans, or should it be handed over to AI?

People from my generation probably remember the Matthew Broderick/Ally Sheedy film War Games, which came out in 1983 and was decidedly anti-AI. The plot saw the US, worried that human crews would not fire nuclear weapons if war came with the USSR, setting up an AI system called WOPR (War Operation Plan Response) which could decide on its own when to launch. Needless to say, the world almost blows up before our human heroes intervene to save the day.

War Games: 1983. At this point the AI in control of the US strategic nuclear arsenal would have had the computing power of a kitchen appliance today.

In many ways the debate before Feb 24, 2022 about AI and the control of weapons was the exact same one that played out in War Games. Is it ethical to allow AI to decide when to attack? Will it lead to greater errors, war crimes, etc.? If you want to read a pretty comprehensive overview of the arguments against giving AI control over weapons, you could read this Bulletin of the Atomic Scientists article (free online) entitled: Giving an AI control of nuclear weapons: What could possibly go wrong? It’s worth noting that it was published in early February 2022, just as the Russian army was gearing up to cross the border. Here is a brief excerpt—basically, AI can’t be trusted.

How autonomous nuclear weapons could go wrong. The huge problem with autonomous nuclear weapons, and really all autonomous weapons, is error. Machine learning-based artificial intelligences—the current AI vogue—rely on large amounts of data to perform a task. Google’s AlphaGo program beat the world’s greatest human go players, experts at the ancient Chinese game that’s even more complex than chess, by playing millions of games against itself to learn the game. For a constrained game like Go, that worked well. But in the real world, data may be biased or incomplete in all sorts of ways. For example, one hiring algorithm concluded being named Jared and playing high school lacrosse was the most reliable indicator of job performance, probably because it picked up on human biases in the data.

These kinds of arguments were widespread, and even regularly made in the Pentagon and MODs. It’s why the US DOD was always keen to stress that it would keep a human in all decision-making loops.

It seems like such a quaint discussion now. War tends to blow away past worries with its inevitable appetite to destroy the other side. We’ve seen it already with the ease of use of land mines, cluster munitions, etc—all systems which were debated, even called illegal, before the full-scale invasion. Now they are ubiquitous on the battlefield.

Really pleased that people enjoyed the last piece on how AI is already appearing on the battlefield of Ukraine. This next instalment of the series is going to be highly speculative (apologies) and discuss where AI might be going in defense terms. I’ve been struck in particular by some US discussions, especially those surrounding the Replicator Initiative.

DARPA
Artist’s vision of a US drone swarm operating with crewed aircraft: https://breakingdefense.com/2023/09/for-replicator-to-work-the-pentagon-needs-to-directly-help-with-production/

Btw, here is a link to the first segment.

I’m going to start this with an admission that many people in western air forces, whom I work with and respect, will hate: I see only a very limited role for crewed aircraft in the future of war. It’s something that I’ve discussed a number of times with Air Force people (usually pilots). The military benefits of uncrewed aircraft seem to me so substantial that it’s inevitable they will take over. Those benefits are:

 

  1. Greater Flying Advantages. If you don’t have a human in a plane it can be smaller, less expensive, stay in the air much longer and operate more flexibly. Basically, providing all the very expensive and frankly large systems needed to keep a human alive onboard an aircraft is limiting. Take those out and the plane can fly for longer, make maneuvers that would tax the human body, have a better shape, be smaller, etc.

  2. Greater Decision-Making Advantages. Having humans in an aircraft in a war environment actually adds enormous pressure to the decision-making process. The human (or humans) are not only controlling the aircraft; they are targets themselves. As such, they need to keep their own lives protected, and that limits the time and place in which they can operate. Looking at the Russo-Ukraine War, the prospect of losing pilots on both sides is so high that fixed-wing crewed aircraft are being pushed progressively further and further away from the battlefield. Without the pressure of keeping the crew alive, the decision-making pressure to act too quickly would be lessened.

  3. Greater Cost Advantages. I have some air force friends who swear that this is not yet true, but my guess is that it will be, and sooner rather than later. This war has revealed that mass matters (during the War on Terror, the US advantage in technology was so great that perhaps this was hidden). If you can build aircraft without the need to house humans, that will end up being a multiple cost savings. You save money on the construction of the weapons and on different support systems (such as the need to send large numbers of pilots through advanced training). In a war between the US and China, for instance, equipment destruction would be massive and pilot losses huge. The side that could produce a mass of advanced aircraft without the huge need for pilots would have a major advantage.

So, uncrewed aircraft have, to my mind, such massive advantages in terms of operational ability and cost that they will take over (eventually). The timeline will really come down to one thing: systems of control. If a nation starts to rely overwhelmingly on uncrewed aircraft, it will need a robust and difficult-to-break system to control those aircraft (or for those aircraft to control themselves). You might see now where this is heading.
