X Cracks Down on AI War Content, But Deepfakes Persist in Middle East Conflict

Elon Musk's social media platform X is intensifying efforts to combat AI-generated war content, but a torrent of deceptive deepfakes depicting the Middle East conflict continues to flood user feeds, overwhelming attempts to distinguish reality from fabrication.

Platform Policy Shift Amid Criticism

In a significant policy reversal, X announced last week that creators who post AI-generated war videos without proper disclosure will face 90-day suspensions from the platform's revenue-sharing program. Subsequent violations will result in permanent removal from monetization opportunities.

"We have allocated an additional $335,000 to X's creators," stated Nikita Bier, X's head of product, in a March 13, 2026 post. "This was funded by users who were suspended from the program for posting AI-generated war videos without disclosure."

This crackdown represents a notable pivot for a platform that has faced mounting criticism for becoming a haven for disinformation since Musk's $44 billion acquisition in October 2022.

Researchers Report Persistent Flood of Fakes

Despite the new enforcement measures, disinformation researchers remain skeptical about their effectiveness in stemming the tide of AI-generated content.

"The feeds I monitor are still flooded with AI-generated content about the war," Joe Bodnar of the Institute for Strategic Dialogue told AFP. "It doesn't seem like creators have been dissuaded from pushing misleading AI-generated images and videos about the conflict."

Bodnar highlighted a particularly concerning example: a premium "blue check" X account eligible for monetization shared an AI-generated clip depicting an Iranian "nuclear-capable" strike on Israel. The deceptive post garnered more views than Bier's announcement of the crackdown on AI content.

The Scale of AI-Generated War Content

The ongoing Middle East conflict has generated an unprecedented volume of AI-generated visuals, far exceeding anything seen in previous wars and frequently leaving social media users unable to distinguish fabrication from reality.

These sophisticated deepfakes depict various alarming scenarios including American soldiers captured by Iran, Israeli cities reduced to ruins, and U.S. embassies engulfed in flames. The lifelike quality of these AI-generated videos makes them particularly dangerous vectors for wartime disinformation.

Mixed Reactions to Enforcement Efforts

While some officials have praised X's new approach, questions remain about its implementation and effectiveness. Sarah Rogers, a senior State Department official, called the policy a "great complement" to X's Community Notes system, suggesting it would result in "less reach (thus monetization)" for inaccurate content.

However, X has not responded to inquiries about how many accounts have actually been demonetized since Bier's announcement, leaving researchers uncertain about the policy's real-world impact.

The Financial Incentive Problem

The persistence of AI-generated war content highlights a fundamental challenge: the financial incentives that encourage creators to produce sensational, engagement-driving material regardless of its authenticity.

Even with the threat of suspension from revenue-sharing programs, the potential for viral success and substantial view counts continues to motivate some accounts to push boundaries with AI-generated content about the conflict.

As the Middle East war continues to unfold, the battle against AI-generated disinformation appears to be entering a new phase. Platforms like X are attempting to balance content moderation with creator incentives, while researchers warn that technological deception continues to outpace enforcement efforts.