
AI Technology Powers Wave of Iran War Videos Designed to Generate Revenue

In Technology
March 09, 2026

AI Technology Fuels Misinformation About the Iran Conflict

Experts speaking to BBC Verify say a massive surge of AI-generated misinformation related to the US-Israel conflict with Iran is spreading online. With easier access to generative AI technology, many online creators are producing and monetizing misleading content about the war.

Researchers found numerous AI-generated videos and manipulated satellite images circulating on social media. These clips present false or exaggerated claims about the conflict and have collectively gathered hundreds of millions of views, highlighting how quickly misinformation can spread online.

Experts Warn About the Rapid Growth of Synthetic War Content

Digital media expert Timothy Graham from the Queensland University of Technology says the scale of AI-driven misinformation has become extremely concerning.

He explains that what once required professional video production teams can now be created within minutes using modern AI tools. As a result, the barrier to producing convincing fake war footage has dramatically dropped, allowing almost anyone to generate realistic-looking conflict videos.

Background of the Escalating Conflict

The tensions escalated when the United States and Israel launched strikes on Iran on 28 February. In response, Iran carried out drone and missile attacks targeting Israel, several Gulf countries, and US military assets in the region.

As the conflict developed rapidly, many people turned to social media platforms to follow updates and understand unfolding events, making these platforms a major source of both information and misinformation.

Social Media Platforms Begin Taking Action

The social media platform X recently announced it will temporarily suspend creators from its monetization program if they share AI-generated videos of armed conflict without clearly labeling them as synthetic.

The platform’s monetization system rewards creators whose posts generate high engagement (views, likes, shares, and comments) with financial payments. According to Mahsa Alimardani, a researcher at the Oxford Internet Institute, this decision shows that platforms are starting to recognize the seriousness of the issue.

However, companies like TikTok and Meta (owner of Facebook and Instagram) have not yet confirmed whether they will introduce similar measures.

Viral Fake Videos Mislead Millions Online

BBC Verify tracked several examples of widely shared AI-generated videos. One clip appears to show missiles striking Tel Aviv, accompanied by explosion sounds. The video appeared in more than 300 social media posts and was shared tens of thousands of times.

Some users even asked the AI chatbot Grok, integrated into X, to confirm whether the video was real. In many cases, the chatbot mistakenly claimed the video was authentic.

Another viral fake video showed Dubai’s Burj Khalifa seemingly on fire, with crowds running nearby. The footage spread rapidly online and attracted tens of millions of views, causing confusion among residents and tourists already worried about possible attacks.

AI Satellite Images and the Challenge for Platforms

BBC Verify also identified a new trend in the conflict: AI-generated satellite imagery. While genuine footage showed Iranian drone and missile strikes on the US Navy’s Fifth Fleet headquarters in Bahrain, a manipulated image soon appeared online claiming to show severe damage to the base.

The image, shared by the state-linked newspaper The Tehran Times, appeared to be based on real satellite imagery taken in February 2025. Using Google’s SynthID watermark detector, investigators found that the image had likely been generated or modified with a Google AI tool. Small details such as three vehicles parked in identical positions revealed the manipulation.

Exploring the Growth of AI Platforms Like Google Veo and OpenAI’s Sora

Experts say the rapid growth of AI platforms like Google Veo, OpenAI’s Sora, Seedance, and Grok is making it easier than ever to create convincing fake content. According to generative AI specialist Henry Ajder, these tools are now widely available, inexpensive, and simple to use.

Because the process of creating and posting such content can be automated, misinformation spreads quickly. X’s head of product stated that about 99% of accounts sharing these AI videos were attempting to exploit the platform’s monetization system.

How Creators Can Earn Revenue Through X Premium and Impressions

Creators can earn payments once they reach five million impressions within three months and maintain an X Premium subscription. Graham estimates the platform may pay roughly $8 to $12 per million impressions, meaning viral AI content can generate significant revenue.

Despite attempts to improve moderation and detection systems, experts say solving the issue is difficult. The core problem is that engagement-driven monetization often conflicts with the goal of promoting accurate information, and social media platforms have yet to fully resolve this challenge.
