Explained: Israeli military's use of AI tool 'Lavender' to generate kill lists

Edited By: Moohita Kaur Garg
Gaza | Updated: Apr 04, 2024, 02:20 PM (IST)


Story highlights

Israel, as per a recently unveiled investigation, has been using an AI tool to generate kill lists during the ongoing Israel-Hamas war in Gaza. Here's all we know about the AI-based tool, named 'Lavender'

The Israeli army has been using artificial intelligence (AI) to identify targets for its bombing spree in Gaza, revealed a report on Wednesday (Apr 3).

According to an investigation by +972 Magazine and Local Call, the Israeli AI-based tool, named "Lavender," had a 10 per cent error rate.

Who developed Lavender?

As per the investigation, 'Lavender' was developed by Unit 8200, the Israel Defense Forces' elite intelligence division.

Unit 8200, the Israeli Intelligence Corps unit of the Israel Defense Forces responsible for clandestine operations, is comparable to the National Security Agency in the United States or GCHQ in the United Kingdom.


How does Lavender work?

The Lavender system is said to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential bombing targets, including low-ranking individuals. 

The software analyses data collected through mass surveillance on most of Gaza's 2.3 million residents, assessing and ranking the likelihood of each person's involvement in the military wing of Hamas or PIJ.

Each individual is given a rating from 1 to 100, indicating the likelihood that they are a militant.

Lavender learns to identify characteristics of known operatives from Hamas and PIJ and then identifies similar characteristics among the general population. Individuals with multiple incriminating features are assigned higher ratings, making them potential targets for assassination.

How accurate is it?

As per the report, even though the AI tool had an error rate of 10 per cent, its outputs were treated "as if it were a human decision".

Citing six Israeli intelligence officials involved in the alleged programme, the investigation revealed that during the initial stages of the conflict in Gaza, the military relied heavily on Lavender. The tool flagged as many as 37,000 Palestinians, most of them junior operatives, as suspected militants and potential targets for airstrikes and assassination.


These officials said that the outputs of the AI machine were treated "as if it were a human decision," and alleged that the human review of the suggested targets was cursory at best.

Reportedly, the approval "to automatically adopt Lavender's kill lists" was given about two weeks into the war. Before that, Lavender was treated only as an "auxiliary tool".

This approval was granted after an intelligence officer manually checked the accuracy of a random sample of several hundred targets selected by the AI system and found that Lavender's results had 90 per cent accuracy in identifying targets.

"From that moment, sources said that if Lavender decided an individual was a militant in Hamas, they were essentially asked to treat that as an order, with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based," reports the publication.

A senior official identified as 'B' confirmed that officers were not required to review the AI system's assessments "in order to save time and enable the mass production of human targets without hindrances".

"Everything was statistical, everything was neat — it was very dry," he noted.

"In war, there is no time to incriminate every target. So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it," said another officer.

Civilian casualties and 'collateral damage'

As per the report, two sources also alleged that the Israeli army, in an unprecedented move, decided during the first weeks of the war that for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians. Before that, the military did not authorise any "collateral damage" during assassinations of low-ranking militants. 

For targets who were senior Hamas officials with the rank of battalion or brigade commander, the army reportedly authorised, on several occasions, the killing of more than 100 civilians per assassination.

The report alleges that to assassinate the junior militants, the Israeli army preferred to use only "dumb" bombs.


Dumb bombs are unguided munitions that can destroy entire buildings on top of their occupants and cause significant casualties. These were used instead of 'smart' precision bombs because "you don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs]," an intelligence officer identified as 'C' told the publication.

Another source told +972 Magazine that they had personally authorised the bombing of "hundreds" of private homes of alleged junior operatives marked by Lavender. These attacks killed many civilians and wiped out entire families, all labelled as mere "collateral damage".

One source said they nicknamed low-ranking militants on the AI kill lists 'garbage targets,' but added that these were "more ethical than the targets that we bombed just for 'deterrence' — highrises that are evacuated and toppled just to cause destruction."

(With inputs from agencies)
