State-Backed Disinformation Networks Embrace AI, But Produce Mostly Low-Impact "Slop"
State-backed propaganda is failing to go viral, but the long-term damage to the information ecosystem is just beginning. Explore the risks.
A fresh analysis from social media research firm Graphika reveals that some of the world's most prolific government-linked influence operations are now deeply reliant on generative artificial intelligence.
Yet rather than crafting sophisticated, undetectable fakes that sway public opinion en masse, these campaigns are overwhelming the internet with vast quantities of amateurish, obviously artificial content.
What the Research Uncovered
Graphika's latest report examines nine active disinformation networks, many of which have been publicly attributed by U.S. officials to state actors in China and Russia. These include longstanding operations like China's "Spamouflage" (also known as Dragonbridge) and Russia's "Doppelganger" campaign.
Across the board, operators have integrated AI tools into nearly every stage of production: generating images and memes, scripting text posts, translating content into multiple languages, and even creating entire video segments featuring synthetic "news anchors" with stiff animations, unnatural voices, and glaring errors.
Volume Over Sophistication: The New Playbook
The real game-changer, according to Graphika senior analyst Dina Sadek, is scale. Where past troll farms required dozens or hundreds of humans to churn out posts, a single person can now prompt AI models to produce thousands of images, articles, or videos in hours.
Quality, however, has not kept pace with quantity: fake news sites inadvertently publish the exact AI prompts as headlines, deepfake clips feature mismatched lip-sync or bizarre artifacts, and robotic presenters deliver stilted monologues that scream inauthenticity.
Notable examples include:
- A Russian-linked video titled "Olympics Has Fallen," a play on the Hollywood film Olympus Has Fallen, featuring an AI-generated Tom Cruise voice disparaging the 2024 Paris Games.
- Chinese operations deploying deepfakes of celebrities like Oprah Winfrey or Barack Obama incongruously praising India's rising global power.
- YouTube channels run by "journalists" whose faces melt or glitch mid-sentence while pushing divisive narratives.
Limited Reach on Mainstream Platforms
Despite the flood, engagement remains dismal. On Western sites like X (formerly Twitter), Facebook, and YouTube, the overwhelming majority of this material garners zero likes, shares, or meaningful comments. It often stays trapped in small, coordinated networks of fake accounts that boost each other, a pattern unchanged from the pre-AI era.
Sadek notes that these campaigns have never been particularly effective at breaking into organic audiences; AI has simply made failure cheaper and faster.
Why the "Slop" Strategy Still Matters
Even low-visibility content serves a purpose. By polluting the web with high-volume noise, state actors may indirectly influence the training data scraped by large language models. AI chatbots already cite Russian state media (including sanctioned outlets) in responses, and unchecked propaganda could further skew future outputs.
Researchers warn this creates a feedback loop: more slop online means more biased or contaminated training corpora, potentially amplifying authoritarian narratives through seemingly neutral tools down the line.
As generative AI democratizes content creation, the barrier to entry for disinformation has never been lower, raising concerns that non-state actors, domestic extremists, or smaller regimes could soon follow suit with their own automated floods.
Fast Facts
What exactly is "AI slop" in the context of propaganda?
AI slop refers to low-effort, low-quality content churned out by generative tools — think poorly rendered images with extra fingers, robotic-sounding voiceovers, or text riddled with translation errors. In state campaigns, it's used not for flawless deception but for mass production on a budget.
Which countries are most actively using AI for these influence operations?
The Graphika report highlights networks tied to China (e.g., Spamouflage) and Russia (e.g., Doppelganger) as the primary adopters so far. Smaller or emerging operations linked to other actors are beginning to appear, but these two remain the most systematic.
Is this AI propaganda actually fooling or influencing people?
So far, no. On major Western platforms, the content receives negligible organic engagement and rarely escapes echo chambers of fake accounts. Its real risk lies in long-term pollution of the information ecosystem and potential contamination of AI training data rather than immediate viral deception.