The Deepfake Dividend: 2026 Is The Year The Evidence Stops Working
How AI is changing online content, and what we can still believe
BOTTOM LINE UP FRONT
A free smartphone app can now clone any human voice from three seconds of audio, and the American institutions built to catch the consequences have spent the last twelve months being quietly dismantled.
State adversaries, principally Russia, China, and Iran, are deploying synthetic media tools at scale across elections in Europe, the United States, and beyond. CISA (the US Cybersecurity and Infrastructure Security Agency, responsible for election security coordination) has lost more than 1,000 staff since February 2025. Platform content moderation is declining. Detection technology is losing the arms race. The window in which synthetic media could be reliably distinguished from authentic content by automated tools has effectively closed.
Overall assessment: SEVERE on a five-point scale (LOW / MODERATE / SUBSTANTIAL / SEVERE / CRITICAL). That rating reflects capability, intent, and structural vulnerability, not proven electoral impact, which remains difficult to establish and is addressed honestly in the counter-argument section below.
The threat is not only in the content being produced. It is in the tools being used to produce and consume it. DeepSeek, the Chinese AI assistant that topped the US App Store in early 2025, is trained under Chinese government content guidelines. TikTok’s recommendation algorithm is controlled by a company with legal obligations to the Chinese state. Millions of Western professionals are using both tools for research, analysis, and decision support, unaware that the framing of their outputs has been shaped before they see it. Checking what you read is no longer enough. You must also check what you are reading it with.
Boards should establish dual-channel verification for financial instructions arriving by video or voice, identify their EU AI Act Article 50 compliance owner before June, and confirm with their broker whether cyber insurance covers deepfake-enabled fraud. The full commercial action checklist is in Section 7.
Individuals should agree a family codeword for emergency calls. A three-second voice sample is enough to clone someone’s voice convincingly. The same technology behind corporate fraud is being used against families: criminals clone a child or grandchild’s voice, call a parent claiming an emergency, and request an urgent transfer. If the caller does not use the codeword, hang up and call back on a number you already have.
SECTION 1: WHAT AI DISINFORMATION CAN DO TODAY
Confidence level: CONFIRMED across this section
1.1 Deepfake Video and Audio
The technical tells that once allowed trained observers to spot a fake have been largely eliminated. Real-time face-swapping in video calls runs on consumer hardware. Text-to-video models generate photorealistic footage from a written prompt at near-zero cost. OpenAI’s Sora, widely cited as a benchmark, is being discontinued in April 2026 due to compute costs and IP liability, but its successors (Runway Gen-4, Google Veo 2, and OpenAI’s next-generation model Spud) are already on the market and more capable. The number of deepfake videos online is estimated at around 8 million by 2025, up from roughly 500,000 in 2023 (single commercial source; treat as indicative).
Voice cloning has crossed the indistinguishable threshold. A few seconds of sample audio suffices to clone natural intonation, emotion, and breathing. The 2024 New Hampshire primary saw a deepfake Biden call telling Democrats not to vote. The barrier to entry is a free app.
Real-world example (CONFIRMED): In Ireland’s 2025 presidential election, a deepfake video falsely depicted the eventual winner (Catherine Connolly) withdrawing her candidacy days before polling. It was detected and debunked, but it revealed the operational concept in action: a well-timed synthetic video can inject chaos at the exact moment when correction is hardest. The candidate had to publicly prove she was still running. That is the objective, not the deception itself.
1.2 AI-Generated Text, Bot Networks, and Synthetic Consensus
Large language models produce text indistinguishable from human writing across multiple languages. A single operator can generate thousands of contextually appropriate posts, articles, and comments per day. AI has simultaneously transformed bot networks from crude spam into sophisticated simulations of authentic human behaviour: modern bots maintain consistent personas over months, engage dynamically with current events, and operate across platforms simultaneously. What previously required a team of dozens can now be run by one person.
The combination produces the most dangerous emerging threat in this space: synthetic consensus. Rather than trying to deceive people with individual fakes, AI swarms manufacture the illusion that millions of people already agree with a narrative, flooding platforms until some content inevitably goes viral, creating the appearance of organic public opinion where none exists. Research published in Science (January 2026) confirmed that these swarms can maintain persistent identities and memory, coordinate autonomously, and adapt to human responses in real time. No proven defensive framework currently exists for this threat vector.
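For technically minded readers, the coordination tell can be made concrete. The sketch below is a minimal illustrative heuristic, not a proven defence (the Science finding that no proven framework exists stands): it groups near-identical posts made within a short window and flags clusters spread across many distinct accounts. The data structure, similarity threshold, time window, and account count are illustrative assumptions, not parameters from any deployed system.

```python
# Minimal illustrative heuristic for spotting synthetic consensus:
# many distinct accounts posting near-identical text in a short window.
# Thresholds (0.85 similarity, 60-minute window, 20 accounts) are assumptions.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Post:
    account: str
    text: str
    minute: int  # minutes since start of the observation window

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_coordinated(posts: list[Post], window: int = 60, min_accounts: int = 20) -> list[list[Post]]:
    """Group near-duplicate posts made within `window` minutes of each other,
    then keep only groups spread across at least `min_accounts` accounts."""
    clusters: list[list[Post]] = []
    for post in sorted(posts, key=lambda p: p.minute):
        for cluster in clusters:
            anchor = cluster[0]
            if post.minute - anchor.minute <= window and similar(post.text, anchor.text):
                cluster.append(post)
                break
        else:
            clusters.append([post])
    return [c for c in clusters if len({p.account for p in c}) >= min_accounts]
```

Modern AI swarms paraphrase rather than copy, so naive text matching of this kind degrades quickly; the durable signal is the coordination pattern itself, not the wording.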
OpenAI’s own threat reports confirm Russia, China, and Iran have used its models to generate social media content, translate articles, create headlines, and reformat news for platform-specific distribution.
1.3 Key Judgement on Capability
The most significant shift since 2020 is the collapse of the cost and skill barrier. A convincing deepfake once required technical expertise and significant computing power. In 2026, it requires a smartphone and a free app. State actors no longer need dedicated content creation teams. They need strategy, distribution networks, and operational security. The content creation problem is solved.
SECTION 2: STATE ACTORS: WHO IS DOING WHAT
Confidence level: CONFIRMED (High) across this section
2.1 Russia
Russia operates the most sophisticated, best-resourced, and most operationally proven AI disinformation infrastructure of any state actor.
Storm-1516 and Doppelganger are the codenames used by Microsoft and EU research bodies for Kremlin-linked disinformation networks, industrialised operations generating and distributing synthetic content across multiple languages and platforms simultaneously. Storm-1516 is orchestrated in part by John Mark Dougan, an American-born propagandist based in Moscow since 2016.
Scale: At least 171 fake news sites and 32 documented false narratives, generating more than 67 million views across 16 languages.
Specific campaigns (CONFIRMED):
Target | Date | Operation | Impact
France | Dec 2025 onwards | 200+ fake news sites mimicking French outlets, targeting the 2026 municipal elections | Active; over 140 sites imitate well-known French outlets
Germany | Feb 2025 election | AI-generated deepfakes and 102 fake news sites targeting German politicians | Detected and partially mitigated
France | Dec 2024 to Mar 2025 | Five false narratives, 38,877 social media posts, 55.8 million views | Widespread reach; debunking lagged dissemination
Moldova | 2025 elections | AI-driven disinformation targeting the electoral process | Detected by researchers, partially disrupted
Fabrication examples (CONFIRMED):
A fabricated report about Brigitte Macron using years-old AFP footage, AI-generated voices, and a fake interview with a surgeon who does not exist
A fabricated article and video (June 2025) accusing German Chancellor Friedrich Merz of illegally killing a polar bear, featuring a fake interview with an “Inuit guide”
Techniques: Industrial-scale AI content generation, multilingual output, burner accounts, typosquatted domains mimicking legitimate media (BBC, Le Monde, Der Spiegel look-alikes), deepfake video and audio, coordinated bot amplification.
Key judgement: Russia has by far the largest and most operationally sophisticated disinformation operation aimed at the United States and Western Europe. Nothing else comes close on scale, funding, or operational track record.
2.2 China
China’s operations have evolved significantly from the crude Spamouflage campaigns of 2023-2024. Spamouflage was Beijing’s large-scale influence operation combining spam content with camouflage techniques to blend into authentic platform activity, effective at volume, but easily identifiable. The current generation is harder to detect.
Scale: Over 330 identified inauthentic accounts across X, Tumblr, Blogspot, Quora, and YouTube. Between December 2025 and February 2026, these accounts posted coordinated material designed to manipulate platform recommendation algorithms.
Techniques:
Deepfake news anchors with fictitious Western names and faces delivering Beijing’s messaging in English
AI-generated fake news websites distributing Beijing-aligned narratives in multiple languages simultaneously
Six distinct operational clusters targeting different audiences with different narratives, all aligned with Beijing’s strategic interests
Key judgement: Where Russia aims for disruption and chaos, China focuses on narrative shaping, gradually shifting opinion on Taiwan, the South China Sea, and the legitimacy of authoritarian governance. China’s operations are becoming more sophisticated but still lack Russia’s operational flair for timing and electoral disruption.
Model provenance: a decision for everyone
Chinese-developed AI models embed Chinese Communist Party (CCP)-aligned narratives in their training data. As these models gain international adoption through open-source distribution, they become a passive influence vector: shaping how users receive information on sensitive topics without any overt campaign and with no labelling requirement. This is not only a corporate procurement question. If you use an AI assistant for research, news summaries, or analysis, ask where it was trained and what it was trained on. A free model you downloaded last month may be quietly shaping how you understand Taiwan, Xinjiang, or the South China Sea. Organisations selecting models for internal or client use face the same question at larger scale. In both cases, the answer is: check the provenance before you rely on the output.
2.3 Iran
The 2024-2026 Iran-Israel conflict is the first hot war in which AI-augmented information operations have been deployed at scale by both sides. Pro-Iranian actors fabricated imagery of destruction across Israeli cities, manipulated street images into scenes of devastation, and created false before-and-after sequences. Press TV published a fake video of Tel Aviv being struck by a missile. Five distinct TikTok propaganda strategies shaped Western perceptions of the conflict in real time.
Key judgement: Iran remains less polished than Russia and more reactive than proactive, but the conflict has accelerated Iranian capability faster than any peacetime development could. Watch this actor more carefully in 2027.
2.4 Scale and Convergence
North Korea: OpenAI disrupted deceptive employment campaigns likely aimed at revenue generation and access to Western technology companies
Sino-Russian convergence: CEPA (the Center for European Policy Analysis) documents growing coordination between China and Russia in information manipulation, including shared narratives and sometimes shared infrastructure
Overall scale: OpenAI alone has disrupted 20+ covert influence operations since early 2025. Documented campaigns have increased significantly since 2023; precise percentage increases vary by source and methodology and should be treated as indicative.
SECTION 3: THE 2026 US MIDTERMS
3.1 Threat Assessment
Confidence level: PROBABLE (heading toward CONFIRMED)
The 2026 US midterms face an unprecedented convergence of disinformation threats. The environment is measurably worse than 2024 across every relevant dimension.
Factor | 2024 | 2026 | Direction
AI content generation capability | Moderate | Advanced (voice indistinguishable, video highly convincing) | Significantly worse
State actor experience with AI tools | Experimental | Operationally proven | Worse
Federal election security (CISA) | Full strength | Gutted (1,000+ staff lost, programmes halted) | Much worse
Platform content moderation | Reduced | Further reduced | Worse
Legal framework for AI deepfakes | Minimal | Still minimal | No improvement
Public awareness | Growing | Moderate, complacency risk | Mixed
3.2 The CISA Problem
Confidence level: CONFIRMED
This is the single most significant structural vulnerability heading into the midterms.
Since February 2025, CISA has:
Cut more than a third of its workforce (approximately 1,000 employees lost)
Halted most election-related programmes, including red teams (security teams that simulate attacks to find vulnerabilities), incident response units, and regional election security advisors
Severed relationships with state and local election officials who report that trust is “broken”
A CNN investigation (January 2026) revealed that secret US cyber operations that successfully shielded the 2024 election from foreign interference have been dismantled.
Key judgement: The US is entering the 2026 midterms with its institutional defences at their weakest since 2016. The FY27 budget proposal (announced April 2026) would eliminate CISA’s election security programme entirely.
3.3 Platform Preparedness
Meta: Announced an AI-powered election security plan using C2PA (a global content verification standard; see Section 6) and AI detection to label altered content
X (formerly Twitter): Significantly reduced trust and safety teams and content moderation
TikTok: Remains a significant vector with limited transparency on moderation of US political content
Legal framework: AI deepfakes outpace election law in every US state. Deepfakes spread faster than prosecutors or fact-checkers can respond.
3.4 Key Judgement
The threat to the 2026 midterms is not that a single deepfake will “steal” an election. The threat is cumulative: a constant drip of synthetic content that erodes trust, amplifies polarisation, and creates an environment where voters cannot distinguish real from fake. The 2026 midterms are the first national election cycle where the structural defences have been dismantled faster than the offensive tools have improved. That is a first. It will not be the last.
Confirmed as of April 2026: The NRSC (National Republican Senatorial Committee) deployed AI-generated video content against Texas Democratic Senate candidate James Talarico, fabricating footage of him appearing to speak his own social media posts. This is the clearest confirmed instance of a national party organisation using synthetic media as a campaign tool. Four further confirmed incidents have been documented. Deepfakes in US electoral politics have crossed from fringe to mainstream institutional practice.
SECTION 4: UK AND EUROPEAN EXPOSURE
4.1 Scale of the Threat
Confidence level: PROBABLE (High)
AI disinformation campaigns targeting the UK and EU are well-documented and growing. EU DisinfoLab and EDMO have jointly documented over 400 confirmed campaigns targeting electoral processes, EU institutional integrity, energy security, migration narratives, and transatlantic relations. The scale of Russian operations targeting France and Germany specifically is documented in Section 2.1. Even where these campaigns do not change electoral outcomes, the perception that they might is itself corrosive to democratic legitimacy.
4.2 UK-Specific Threat Assessment
Confidence level: PROBABLE
Russian targeting of the UK (CONFIRMED):
Kremlin-linked troll factories actively target UK politicians and audiences
Senior UK ministers’ social media accounts are specifically targeted
MI6 and CIA chiefs jointly warned (September 2024) that the international order is under threat in a way not seen since the end of the Cold War
UK institutional response:
The Government Information Cell (GIC) has been established to counter Russian disinformation
The NCSC (National Cyber Security Centre) has exposed Russian intelligence cyber campaigns of attempted political interference
The Alan Turing Institute and CETAS (Centre for Emerging Technology and Security) found no evidence AI disinformation meaningfully impacted UK, French, or European election results in 2024
Current political vulnerability:
Political polarisation: Post-Brexit divisions, cost-of-living pressures, and immigration debates create fertile ground for divisive narratives
Trust deficit: Declining public trust in institutions creates an environment where disinformation finds less resistance
Platform regulation gap: The Online Safety Act does not specifically address AI-generated political disinformation at scale
Russian strategic interest: The UK’s position as a leading Ukraine supporter makes it a priority target
Key judgement: The UK is not facing a specific, imminent AI disinformation crisis. The greater risk is not a dramatic deepfake incident but steady erosion of the information environment through persistent, lower-profile synthetic content that is harder to detect and attribute. The UK’s institutional responses (GIC, NCSC) are stronger than the US equivalents, but remain reactive rather than proactive.
4.3 EU Regulatory Response
Confidence level: CONFIRMED
The EU AI Act, fully effective from August 2, 2026 (twelve weeks from the date of this paper), mandates clear labelling of AI-generated or manipulated media, the most significant regulatory response globally. Critical limitations: enforcement mechanisms are still being developed; the Act addresses commercial AI use more than adversarial state disinformation; foreign state actors operating outside EU jurisdiction will not comply; and the labelling requirement addresses production, not detection of unlabelled content already in circulation.
SECTION 5: DETECTION AND DEFENCE
Confidence level: CONFIRMED across this section
5.1 The Detection Gap
The gap between offensive AI capability and defensive detection is widening, not closing. Two statistics define the problem:
Detection tool effectiveness drops 45-50% from lab conditions to real-world deployment
OpenAI’s own detection tool identifies DALL-E 3 images at 98.8% accuracy but flags only 5-10% of images from other AI tools
Detection tools trained on one generation of models become less effective as new models emerge. This is a perpetual catch-up dynamic, and the offence is currently winning it.
5.2 What Detection and Provenance Tools Can (and Cannot) Do
Forensic detection tools (Reality Defender, Sensity AI, Intel FakeCatcher for video, Pindrop for audio) apply AI analysis to identify synthetic content. All carry the caveat set out in 5.1: real-world performance runs 45-50% below benchmark figures, and effectiveness degrades with each new generation of generative models.
Provenance-based approaches address the problem differently. Rather than detecting fakes after the fact (a losing arms race), C2PA (Coalition for Content Provenance and Authenticity) embeds cryptographic metadata at the point of creation, establishing a verifiable chain of custody. Adobe, Microsoft, Google, the BBC, Sony, and Meta are all participants. Consumer hardware is beginning to adopt this natively: Google Pixel 10 signs all photos by default using hardware-backed keys, while Samsung Galaxy S25 applies C2PA credentials only to AI-edited and AI-generated images, not to standard photographs.
Critical distinction: C2PA does not detect AI content. It verifies origin and history. Adversarial tools will not participate in the system, and that is the fundamental limitation. More devices are creating Content Credentials than checking for them. Gartner (the global technology research firm) predicts that by 2026, 30% of enterprises will no longer consider standalone identity verification reliable.
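To illustrate the distinction, the sketch below shows the provenance principle in miniature, assuming the open-source cryptography library for Python. It is a conceptual sketch with invented field names, not the real C2PA manifest format or any vendor’s API: a capture device signs a hash of the content plus its metadata, and a verifier later confirms origin and integrity against the maker’s public key. Note what verification cannot do: it says nothing about whether the content is synthetic, only whether it is what the signer produced and that it has not been altered since.

```python
# Conceptual sketch of content provenance (not the real C2PA format or API).
# A capture device signs a hash of the image plus creation metadata with a
# private key standing in for a hardware-backed key; a verifier checks the
# signature with the maker's public key. Field names are illustrative only.
import json, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

device_key = ed25519.Ed25519PrivateKey.generate()   # stand-in for a hardware key
maker_public_key = device_key.public_key()

def sign_at_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind content and metadata together at the point of creation."""
    manifest = {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload).hex()}

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Verify origin and integrity; says nothing about whether content is 'AI'."""
    manifest = credential["manifest"]
    if manifest["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        maker_public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False
```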
Key judgement: Detection tools are a necessary but insufficient defence. Provenance is the right long-term approach but population-level protection is years away. The honest assessment is that we are losing the technological arms race, and strategies built on “better detection is coming” are flawed.
5.3 The Tool is the Threat: AI Provenance as a New Attack Surface
Most disinformation analysis focuses on the content being produced. The more insidious threat is the tool producing it. If a state actor can shape the AI application you use rather than the individual piece of content you consume, the influence becomes invisible and continuous.
DeepSeek is the clearest current example. Released in January 2025 by a Chinese company, it achieved the top spot on the US App Store within days and was rapidly adopted across Western businesses and universities. Its model is trained under Chinese content guidelines, which means it filters, frames, and omits in ways fully consistent with Chinese state priorities: it will not discuss certain historical events, frames Taiwan as an internal Chinese matter, and defaults to CCP-aligned perspectives on topics the Chinese state considers sensitive. None of this is disclosed to the user. The user receives confident, fluent, apparently neutral output, and has no mechanism to detect where the framing has been shaped.
TikTok operates the same structural logic at the distribution layer. Its recommendation algorithm, controlled by ByteDance under Chinese legal obligations, determines what content hundreds of millions of users see. The influence is not in individual pieces of fake content. It is in which real content gets amplified, suppressed, or sequenced.
The supply chain risk extends beyond explicitly Chinese products. Open-source AI models can be fine-tuned by anyone, including state actors, before being redistributed. A model downloaded from a public repository may have had subtle bias introduced at the fine-tuning stage, shaping its outputs on specific topics without any visible tell. For enterprise users building internal AI tools on third-party model foundations, the provenance of the base model is a security question, not only a procurement one.
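One narrow but practical control follows from this: pin the exact model artifact you audited and verify it before deployment. The sketch below, with a placeholder path and digest, shows the idea in a few lines; a matching checksum proves only that the file is the one whose provenance you reviewed, not that the training or fine-tuning behind it was clean.

```python
# Minimal sketch: pin and verify the exact model artifact you audited.
# A checksum proves the file matches the one you reviewed; it cannot reveal
# bias introduced before publication. The path and digest are placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder: the digest recorded at audit time

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(f"Model weights at {path} do not match the audited checksum: {actual}")

# verify_model(Path("models/base-model.safetensors"))  # hypothetical path
```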
The provenance checklist for AI tools:
Who owns the company, and what legal jurisdiction does it operate under?
Where was the model trained, and on what data?
Has the model been independently audited for political or commercial bias?
What happens to the data you input, and who can access it?
If these questions cannot be answered, the tool should not be used for anything sensitive.
5.4 What Is Not Working
Fact-checking at speed: Deepfakes spread faster than debunks. By the time a correction is published, the original has achieved its purpose.
Platform self-regulation: Reduced trust and safety teams at X and elsewhere create systematic blind spots.
Cross-platform coordination: Campaigns operate across platforms simultaneously, but detection remains siloed.
Legal deterrence: No state actor has faced meaningful consequences for AI disinformation operations, creating no disincentive to escalate.
Key organisations tracking campaigns and building detection capacity: NewsGuard (fake news site tracking), EDMO (European Digital Media Observatory), Graphika (network analysis firm specialising in mapping influence operations), OpenAI Threat Intelligence, the Alan Turing Institute / CETAS, and Stanford Internet Observatory (future uncertain following funding cuts).
SECTION 6: WHAT BALANCES THIS POSITION
The SEVERE rating is the right analytical conclusion. But the evidence that cuts against it is real, and an honest assessment requires engaging it directly rather than burying it.
1. Empirical electoral impact remains thin. The Alan Turing Institute and CETAS (Centre for Emerging Technology and Security) finding that AI disinformation did not meaningfully affect UK, French, or European election results in 2024 is the most important piece of empirical evidence in this paper. Across roughly 70 elections globally in 2024, researchers struggled to identify a single case where AI-generated content demonstrably changed an outcome. The Ireland presidential deepfake, this paper’s strongest case study, was detected and debunked. On one reading, that is a story about defences working.
2. Volume is not the same as impact. “8 million deepfake videos” sounds alarming, but most synthetic content circulates in low-engagement bot ecosystems and rarely reaches persuadable voters. The question that matters is not how much synthetic content exists, but how much reaches and changes the minds of decisive voters. On that question, the evidence is significantly weaker than the volume statistics imply.
3. Political persuasion is hard, even with perfect tools. Decades of political science research show that partisan priors are sticky, and most people source political information from trusted in-group networks, not from random viral content. A convincing deepfake of a politician saying something out-of-character may be more likely to be dismissed by their supporters than to flip them.
4. Domestic disinformation dwarfs foreign AI operations. Partisan media, political campaigns, and organic conspiracy communities produce vastly more disinformation than foreign state actors. The marginal contribution of AI-generated foreign content to the overall information environment may be smaller than the state-actor focus implies.
Net response: The sceptical case is important but does not refute the threat assessment. The issue is structural and forward-looking, not a claim that elections have already been stolen. The cost of producing a convincing fake is zero. The cost of distributing it is zero. The cost of detecting it in time is very high. That asymmetry is the threat, and it will only compound as tools improve.
SECTION 7: WHAT SHOULD BE DONE
For boards and operators:
Three questions for your Risk Committee this quarter:
Do we have a deepfake incident response playbook? Most companies do not.
Do our cyber insurance policies cover losses caused by AI-generated fraud, including deepfake-enabled wire transfer fraud? Most policies are silent on this. Clarify with your broker before August.
Are our AI model procurement decisions creating passive influence risk? Any organisation deploying AI models should know what those models were trained on and whether the training data embeds aligned or adversarial perspectives on sensitive topics.
1. Treat executive impersonation as a financial crime risk, not just a reputational one.
In early 2024, engineering firm Arup lost $25 million (HK$200 million, approximately £20 million) after a finance employee was deceived by a deepfake video call impersonating the CFO and multiple colleagues. This is documented, not theoretical. Voice cloning and deepfake video are now procurement-grade fraud tools. Finance directors and Internal Audit should own this risk alongside Communications, because the loss category is fraud, not PR. The control is simple: any instruction involving significant financial authorisation, M&A information, or sensitive personnel decisions that arrives by video call or voice message must be verified through a separate, pre-agreed channel before acting. The technology to clone a voice from three seconds of audio is free. The verification step costs nothing.
2. EU AI Act readiness: the August 2026 deadline is now twelve weeks away.
Article 50 of the EU AI Act requires clear labelling of AI-generated content, with fines of up to €15 million or 3% of global annual turnover for transparency violations. Any company deploying AI-generated content in marketing, communications, or customer-facing operations across European markets needs a compliance owner identified by June. The Data Protection Officer or General Counsel should own this unless an AI Governance lead already exists. If the answer to “who owns this?” is unclear, that is the answer.
3. Assess your sector’s disinformation exposure.
ESG-linked smear campaigns, synthetic-media short attacks, and coordinated fake-news targeting of specific companies are a documented and growing category. Mining, energy, pharma, and financial services face the highest exposure. For M&A practitioners: a target company facing an active disinformation campaign against its brand, management, or supply chain should be treated as a material disclosure risk in due diligence. At the document review stage, look for anomalies in brand coverage, unusual clustering of negative media, and patterns inconsistent with a company’s operational history.
For governments and institutions:
Restore CISA’s election security function. The structural vulnerability heading into the 2026 US midterms is the single most actionable near-term risk. This is not a partisan observation; it is a capability assessment. Congressional oversight committees should demand an accounting of gutted capabilities before primary season.
Legislate mandatory provenance for AI-generated political content. The EU AI Act approach, transparency labelling, is the right direction, but will not reach foreign adversarial actors. Domestic political operatives are the more tractable target.
Invest in resilience, not just detection. Detection technology is losing. Media literacy programmes, fact-checking infrastructure, and platform-level friction (slowing viral spread of unverified content) are more durable defences.
Share threat intelligence across allies. Storm-1516 and Doppelganger campaign data sits across VIGINUM (France’s government body for detecting foreign digital interference), EDMO, Graphika, and EU DisinfoLab. A unified allied threat picture does not exist in publishable form. It should.
For individuals:
Establish a family codeword for emergency calls. Voice cloning from three seconds of audio is free. The same technology behind the Arup corporate fraud is routinely used against individuals: a criminal clones a child or grandchild’s voice, calls a parent or grandparent claiming an emergency, and requests an urgent transfer. Agree a shared codeword with close family members for any emergency call involving money or urgent action. If the caller does not know the codeword, hang up and call back on a number you already hold. This costs nothing and defeats the most common AI voice fraud scenario.
Verify before sharing. If content provokes a strong emotional reaction, that is precisely the moment to pause. A three-second check of source and origin eliminates the fastest-moving disinformation vectors.
Diversify information sources. Reliance on a single platform makes individuals more vulnerable to synthetic consensus manipulation, where the appearance of consensus is manufactured rather than organic.
Learn to recognise coordinated campaigns, not just individual fakes. Multiple apparently unrelated sources pushing the same narrative simultaneously is a reliable tell. The tell is not the content; it is the coordination.
Check the provenance of your AI tools. The threat is not only in the content you consume. It is in the tools you use to process it. DeepSeek, the Chinese AI assistant that became one of the most downloaded apps in the West in early 2025, was built by a Chinese company and trained on data curated under Chinese content guidelines. It will not discuss Tiananmen Square, will frame Taiwan in specific ways, and filters outputs on topics the Chinese state considers sensitive, all without telling the user. TikTok’s recommendation algorithm is controlled by a company with legal obligations to the Chinese state, and its content prioritisation is not neutral. These are not hypothetical risks. They are structural features of the product. The same logic applies to any AI assistant, research tool, or platform whose training data, fine-tuning, or ownership is opaque: the tool shapes the output in ways that are very difficult to detect from inside it. Before using an AI tool for anything consequential, ask three questions: who built it, where was it trained, and who can access what you input into it.
The most dangerous outcome of AI-generated disinformation is not that people believe false things. It is that they stop believing anything. That outcome is still preventable. But the window is narrowing faster than the institutions responsible for preventing it are moving.
The Interlock tracks the disinformation threat landscape as it develops. If you found the board recommendations useful, forward this to your CFO or General Counsel. If you want to go further, reply with your sector and we can identify the specific exposure that applies to you.
Subscribe at interlockpub.substack.com. Questions or responses? Write directly to admin@theinterlock.org.
SOURCES
Section 1: What AI Disinformation Can Do Today
WEF: How cognitive manipulation and AI will shape disinformation in 2026
How Malicious AI Swarms Can Threaten Democracy, Science, January 2026
Section 2: State Actors
CEPA: China-Russia Convergence in Foreign Information Manipulation
NBC News: Russia, Iran and China using AI in election interference
Euromaidan Press: Russia floods France with 200 fake news sites
Foreign Policy: Deepfakes shaping views around Iran conflict
Section 3: 2026 US Midterms
Section 4: UK and European Exposure
Section 5: Detection and Defence
DeepFake-Eval-2024: Multi-Modal In-the-Wild Benchmark, arXiv (source for the 45-50% real-world accuracy drop)
Section 7: Commercial Recommendations
Arup deepfake fraud case, CNN: $25 million (HK$200 million), confirmed by Arup May 2024. Verified.

