In a video of an interview with CNBC journalist David Faber, Elon Musk, the billionaire CEO of Tesla and owner of X, formerly known as Twitter, appears to endorse the return of the Imam Mahdi, the final leader in Islamic eschatology and the harbinger of the apocalypse.
The video is a deepfake that splices parts of the real interview with fabricated audio, part of a propaganda campaign that aims to amplify the Islamist apocalyptic narrative.
It’s an example of how an online Islamist network is using artificial intelligence to spread propaganda narratives that the Islamic apocalypse is fast approaching, a recently published study said. It underscores broader concerns about the ways in which generative AI can be abused to disseminate disinformation.
A network of fake accounts has posted and reposted hundreds of TikTok, YouTube and X videos featuring a Pakistani man who “is directly communicating with Prophet Muhammad and God through his prophetic dreams,” said a report published in late August by the Global Network on Extremism and Technology.
(Polygraph.info does not publish the man’s name due to security concerns expressed by the report’s authors.)
The man is promoted as the prophesied final Islamic leader, the Imam Mahdi. The videos threaten that “[a]nyone who does not accept him should expect the deepest depth of hellfire.”
After former Pakistani Prime Minister Imran Khan was arrested in May, “we very quickly found the use of generative AI in promoting a narrative related to this group believing that the apocalypse was coming and that their savior was in the state of Pakistan,” Daniel Siegel, the report’s co-author, told VOA.
The campaign relies on two main tools: image-based generative AI and audio deepfakes. Videos circulated by the network show rapid sequences of AI-generated images lauding the Muslim savior as he engages with prophets and stands in heroic poses.
The network is also using audio deepfakes, synthetic re-creations of human voices, to help legitimize the apocalypse narrative. Some videos have featured deepfakes of prominent Islamic scholars and even former U.S. President Barack Obama.
The network is “trying to use the technology to get attention,” Siegel, a master’s candidate at Columbia University, said.
“The role of AI technologies in extremist propaganda represents the new frontier in the fight against influence campaigns and the rapidly evolving tactics of malicious actors,” the report said.
The unidentified network is also trying to elevate Pakistan’s status in the Muslim world, according to the report.
“It’s putting Pakistan in a position of power, from a religious standpoint, within the global Muslim Ummah,” said Bilva Chandra, the report’s co-author and a fellow at the RAND Corporation.
“Ummah” is an Arabic word that refers to the global Muslim community.
The network is “trying to unite Muslims across the globe,” Chandra added. “So there’s a lot of potential for this network to take another direction and to galvanize the support of more Muslims around even something that could be more frightening, more dangerous than the current narrative.”
It isn’t entirely clear which specific AI tools are being used in this campaign, the report’s authors told VOA. These videos are in several languages, including English, Urdu, Bangla, Arabic, Indonesian and Malay.
As with most disinformation campaigns, it’s difficult to measure the efficacy of this particular operation. But data shows that the campaign is flourishing online. The network’s three most commonly used hashtags have over 540 million views on TikTok alone, according to the report.
“It’s getting a lot of reach. It’s getting a lot of views,” said Chandra, who previously worked in product safety for OpenAI’s image-based generative AI tool DALL-E. OpenAI also launched the generative AI chatbot ChatGPT.
Since the report’s release, the network has targeted both Chandra and Siegel in various videos, they said.
To Chandra, the campaign she helped identify underscores the need for better detection and moderation strategies when it comes to generative AI.
“There are several different ways in which this can be regulated today,” she said. “It’s really important to keep tabs on all the different ways that these technologies are being exploited by the larger actors, the smaller actors, state actors, non-state actors, and everything in between.”