Iranian Authorities Claim AI Machine Gun and Facial Recognition Were Used to Kill Nuclear Chief

On November 27, 2020, Brigadier-General Mohsen Fakhrizadeh, the head of the Iranian military’s nuclear program, was shot and killed in a convoy outside Tehran. Iranian authorities have blamed the attack on Israel, which has so far neither confirmed nor denied responsibility, and on an exiled opposition group. Iran’s Supreme Leader Ayatollah Ali Khamenei has since vowed to avenge the killing of the scientist, and the prospect of a counterattack against Israel or the West threatens to hamper efforts to revive a nuclear agreement with Iran.

The attack not only raises questions of international law; it also raises unprecedented concerns about the military use of artificial intelligence (AI).

Iran has released conflicting versions of how the scientist was gunned down. According to the latest official account of the assassination, reported by The Times, the killing of Iran’s top nuclear scientist was carried out remotely with “artificial intelligence” and a machine gun equipped with a “satellite-controlled smart system”. Official Iranian sources stated that no human assailant was present at the scene and that the machine gun, which they say was mounted on a Nissan pick-up truck and “controlled by artificial intelligence via a satellite feed”, had identified Mohsen Fakhrizadeh by facial recognition.

The account of a fully automated killing has nevertheless been contradicted by early reports and eyewitnesses, and Iran’s claims have been greeted with skepticism. Tom Withington, an analyst specializing in electronic warfare quoted by the BBC, said that Iran’s statements should be treated with caution, as its description of the event appears to be “little more than a collection of ‘cool buzzwords’ designed to suggest that only a supremely mighty force could possibly have succeeded in this mission”. The New York Times went so far as to assert in a recent article that Iranian officials, “humiliated by the killing of a top nuclear scientist”, sought “to rewrite the attack as an episode of science fiction”. Summing up the Iranian version with a touch of irony, the journalists wrote: “Israel executed him entirely by remote control, spraying bullets from an automated machine gun propped up in a parked Nissan without a single assassin on the scene”.

Allegations aside, it is worth asking whether, given the current technical capabilities of AI, Mohsen Fakhrizadeh was, or even could have been, assassinated by an AI-controlled machine gun.

Offering his own take on the issue, Arthur Holland Michel, an associate researcher at the United Nations Institute for Disarmament Research (UNIDIR), put forward a short analysis on Twitter. In his tweet, he weighed up the credibility of an assassination fully conducted by AI systems and made his skepticism known.

In his view, it is “unlikely” that the machine gun was controlled by AI. He adds that achieving “extreme accuracy thanks to AI” is “very unlikely”, and that the use of “facial recognition for targeting” is “highly unlikely”. Furthermore, he emphasises that the mere existence of these technologies does not mean “they can all function together as a single seamless system”, especially “in an uncontrolled real-world environment”.

Arthur Holland Michel goes on to say that the use of a “remotely operated gun” appears “credible” and that a weapon controlled via satellite link is “quite credible”. This latter point has nevertheless been disputed by Missy Cummings, a former naval officer and military pilot who is now a professor at Duke University’s Pratt School of Engineering. She argued that “there is no satellite control of a remote machine gun using AI”.

While the Iranian assertions have been greeted with caution, “the claims made about the attack being carried out using such a sophisticated high-tech weapon are alarming”, notes Zoe Kleinman, a BBC technology reporter. Whether or not the use of an AI machine gun is ever confirmed, the episode highlights the significant role that AI could play in the military field, and the numerous issues surrounding military use of this new technology.

To date, concerns have mainly focused on Lethal Autonomous Weapons Systems (LAWS), weapons with the potential to identify, engage and neutralize a target without any human intervention. Discussions on the challenges posed by their potential development were launched in 2014 under the UN’s Convention on Certain Conventional Weapons (CCW). In 2015, a thousand scientists, including Stephen Hawking, signed “an open letter calling for a ban on the development of artificial intelligence for military use”. Although fully autonomous systems do not yet exist, the prospect of such weapons raises a number of ethical, legal and international security concerns. According to Professor Noel Sharkey, a member of the Campaign to Stop Killer Robots quoted by the BBC, military forces gaining access to autonomous weapons “using face-recognition to pinpoint and kill people” would have “unimaginable consequences” and “would entirely disrupt global security”.

However, military uses of AI will not be limited to LAWS or other autonomous weapons systems. Numerous other applications, whether for military intelligence, threat detection, cyber operations or targeting, as well as underlying AI technologies such as facial recognition technology (FRT), could also raise significant legal issues that the international community has not yet fully addressed.

Whatever the truth may be concerning Mohsen Fakhrizadeh’s death, it is likely that this event will rekindle the debate around military AI.

AT
