The AI War Dilemma in Gaza

Societies fight wars with the same methods with which they generate wealth, wrote Alvin and Heidi Toffler in 1980. The sociologist-futurologists, as they liked to define themselves, outlined the contours of the so-called “Third Wave”, the age of the machines. And in Gaza, starting on 7 October, a piece of this new era is taking shape, a “year zero” of armed conflicts.

Last week the Israeli journalist Yuval Abraham published in the magazine +972 a long report on the massive use of artificial intelligence in Gaza by Tsahal. The software in question is called Lavender, and its use, described in detail through the testimonies of six different Israeli officials, mainly concerned the first phase of the counteroffensive in the north of the Strip. Abraham describes how the software draws up a shortlist of targets on the ground to strike. The machine learned to do this from humans, who trained the algorithm to distinguish enemy targets from everything else. At least in theory, because errors, according to the Israeli officials themselves, are tolerated. A human, a soldier, is expected to take final responsibility for deciding whether to strike the target identified by the machine. And here is the new element revealed by the investigation: the limited time available to order the attack, just twenty seconds, “just enough time to understand if the target is male”. “The human is left with a minimal amount of time. As a consequence, the risk of causing collateral casualties is high,” Alessandra Russo, who studies the use of AI in armed conflicts at the Università Cattolica in Milan, tells Il Foglio. “We are talking about a tolerated margin of error of 10 percent, which is a very high value.” The practice used by Tsahal, the Israeli officials explain, consists of waiting for the terrorist identified by the machine to return home. “It is easier to hit a target in his private home,” the officials say. In this way the system called “Where’s Daddy?” ended up neutralizing not only the target but also his family, multiplying the victims. According to the investigation, Tsahal reached a tolerance of one hundred civilians killed for each Hamas terrorist hit.

A ruthless technology which, however, is not entirely new. Before Lavender, Israel had developed another piece of software, The Gospel, also used in Gaza in recent months but limited to identifying buildings to be demolished. “With The Gospel, officials took longer to verify the target,” says Russo. For Andrea Gilli of the NATO Defense College, “these technologies are designed precisely to minimize collateral damage, identifying the target by sifting through the data processed by the machines. The novelty of Lavender is that qualitative elements are evaluated in order to target individuals. It is, we could say, a predictive technology, and it therefore opens up new dilemmas. For example: who is a terrorist? Can a child studying in a Hamas school be considered a military target? That is the whole point.”

The first significant developments in artificial intelligence applied to war date back to 2017, and the first uses came the following year. “The use of this kind of software is nothing new; today in the military sector everything is managed by these technologies,” says Mauro Gilli of the Center for Security Studies at ETH Zurich. The practice of relying on a machine to identify a target to strike is even older. “The first rudimentary targeting techniques in armed conflicts date back to the late 1950s and early 1960s.” For some years the Americans and, above all, the Chinese have been investing heavily in the development of artificial intelligence. Beijing is considered far ahead of Washington, which instead applies various caveats that have slowed the development of these systems. In July last year, a group of AI industry experts and CEOs of tech companies testified at a hearing before the US Congress. The message, repeated in chorus, was a single one: the United States must speed up its data collection and its investment in new technologies if it wants to compete with China. “The country that manages to integrate new technologies more quickly and effectively into the way wars are fought wins,” warned Alexandr Wang, the young CEO of the start-up Scale AI. “We are at the dawn of a new era.”

So far, the Americans have developed Project Maven, recently described in Bloomberg through an interview with the American physicist Will Roper, one of its creators. “For years I didn’t even tell my family what job I did. What was it about? I was designing the war of the future.” The first uses of AI involved Syria and Iraq, which became a huge training ground for the new targeting technologies. Maven has also been used by the Americans recently, in the bombings carried out in recent weeks on the border between the two countries to hit the positions of pro-Iranian militias. But it is one thing to use these technologies in the middle of desert areas, and quite another to do so in one of the most densely populated places in the world. “Within the field, we have described Lavender as Maven on steroids,” says Russo.

The two contrasting paradigms are the most recent American operations in Afghanistan and those of Israel in Gaza. Since 2002, several raids conducted by the US military have ended up tragically missing their targets, as in the case of repeated bombings of private homes: what were mistakenly believed to be meetings of Taliban leaders turned out to be wedding parties. Since then, the Americans have learned important lessons about targeting on the ground, refining the way they identify targets. The most striking demonstration came in July 2022 in Kabul, with the killing of al Qaeda’s leader, Ayman al Zawahiri. For that operation, the role of intelligence personnel on the ground, real agents, was crucial, followed by the use of the R9X Hellfire missile, known as the “blade bomb”, the “ninja bomb” or the “flying Ginsu”, designed to minimize any collateral damage. Equipped with six blades and no explosive warhead, the R9X is a “meat grinder” that strikes only its target. “That was a case of extreme attention to reducing collateral damage,” explains Mauro Gilli. “Now Israel seems to be making different assessments, along the lines of what the Russians did in Grozny in 1994.”

But according to the researcher, the short time available is not the truly new element in the Israelis’ use of Lavender. “The surreal debate that has taken hold, that of the so-called ‘man in the loop’, the human at the mercy of the machine, has been around for a long time. Any operator supervising missile systems has a few seconds to decide. Take, for example, the activation of defense systems against ballistic missiles.” It is enough to look further east, at what has been happening for months in the Red Sea, to realize that the proliferation of new, highly sophisticated weapon systems, even among para-state groups such as Ansar Allah in Yemen, is radically changing the way wars are fought. In addition to the massive use of small drones, the Houthis were the first in history to use anti-ship ballistic missiles. “With a speed of Mach 5, capable of traveling at almost 5 thousand kilometers per hour, from the moment one of their missiles is launched to its potential impact against a target at sea, be it a commercial vessel or a frigate, the commander of a military vessel has between 9 and 15 seconds to decide whether to shoot it down,” American Vice Admiral Brad Cooper, deputy commander of Centcom, the US Central Command, told the American program “60 Minutes”.

“The big problem with these new AI-based technological systems is the risk of cognitive overload for the operator,” explains Russo. “This is also why the Americans keep tighter control over the data collected and ‘chewed up’ by the software. In Maven, the verification time before a target is hit is longer than the one taken by the Israelis.” But the use of AI in choosing targets does not raise only tactical or ethical questions. For Andrea Gilli, there is also a legal dimension. “It is a question of the rules of engagement, which have evidently been lowered,” explains the expert from the NATO Defense College. Once again, it is not the weapon itself that should frighten us but the way in which it is used. “Under international humanitarian law, killing civilians is permitted if the principle of proportionality is respected. Clearly these are values that are not standardized, but if it is true that Lavender caused many civilian casualties, the intent to strike them would then have to be demonstrated.” According to many experts, speaking generically of the Palestinians in Gaza as Israeli “military targets” would be problematic precisely because of the difficulty of demonstrating intent in the targeting of civilians.

What is certain is that we are facing a new military revolution. Francis G. Hoffman of the National Defense University in Washington warned a few years ago in War on the Rocks against “the naïve and hypocritical conclusions of the minority of techno-moralists” who blindly oppose any attempt to develop new technologies that “could make more precise the way we defend our country’s interests.” At the same time, Hoffman warned against the “easy optimisms of the techno-enthusiasts,” making clear the need to preserve “human oversight.” “These new technologies are changing the way war is waged, there is no doubt about it,” Admiral Giampaolo Di Paola, former Defense Minister, tells Il Foglio. “But there are many unknowns. We could be facing an era similar to that of the advent of the atomic bomb.” Should we be afraid? “All new weapons are fearsome; it is just a matter of knowing how to adapt.”

 