
The Climate Denial Bot Networks: Investigative Report

By Dispur Today
March 5, 2026

Why it matters:

  • Climate denial bot networks and algorithms are distorting the public debate on global heating, drowning out scientific consensus.
  • Automated accounts generate a significant volume of synthetic noise, intensifying since 2022 and influencing public opinion and policy.

The public debate on global heating is not a conversation between humans. It is a manufactured conflict where climate denial bot networks and algorithms drown out scientific consensus. Data from Brown University reveals that on an average day, 25% of all tweets discussing climate change are generated by bots. This figure rises to 38% when the topic shifts to “fake science” and 28% for discussions concerning ExxonMobil. These automated accounts do not participate; they dominate the volume of discourse, creating a false impression of controversy where none exists in the scientific community.

The volume of this synthetic noise has intensified since 2022. Analysis by the Center for Countering Digital Hate (CCDH) and researcher Abbie Richards shows a distinct shift in the operational capacity of these networks. Between December 2021 and July 2022, X (formerly Twitter) hosted approximately 30,000 climate denial tweets per week. Following policy changes and the dismantling of trust and safety teams in late 2022, that figure tripled to nearly 110,000 tweets per week. This surge coincides with the reinstatement of previously banned accounts and the removal of labels identifying state-affiliated media.

These bot networks function as force multipliers for specific narratives. They do not operate at random. Instead, they swarm around key events such as UN Climate Summits (COP) or the release of IPCC reports. During the announcement of the U.S. withdrawal from the Paris Agreement, bot activity spiked from hundreds of posts a day to over 25,000. The objective is to flood the zone with doubt, making verified information difficult to find. A 2023 report by Climate Action Against Disinformation (CAAD) found that the hashtag #ClimateScam frequently appeared as the top search result on X during COP27 and COP28, even when user engagement with scientific terms was higher.

The following table details the measured increase in synthetic activity during specific high-profile climate events between 2017 and 2024.

Table 1: Bot Activity Surges During Key Climate Events (2017-2024)

| Event | Date | Measured Bot Activity / Narrative Spike | Primary Source |
|---|---|---|---|
| US Paris Agreement Withdrawal | June 2017 | 25,000+ bot tweets/day (up from <500) | Brown University |
| COP26 (Glasgow) | Nov 2021 | Twice the engagement on denialist posts vs. science | ISD / CAAD |
| X Platform Policy Shift | July 2022 | Denial tweets rose from 30k to 110k per week | CCDH / Abbie Richards |
| COP28 (Dubai) | Nov 2023 | #ClimateScam top search result for weeks | CAAD |
| Tenet Media Operations | Sep 2023-24 | 23.5 million views on funded denial content | US DOJ / CAAD |

The impact of this automation extends beyond social media metrics. It shapes public opinion and policy. A 2024 University of Michigan study found that 14.8% of Americans deny the reality of global heating, a statistic heavily influenced by exposure to social media disinformation. The networks also target individuals. Global Witness reported in 2023 that 50% of prominent climate scientists face online abuse, frequently delivered by coordinated swarms of anonymous accounts. This harassment serves a strategic purpose: to silence experts and intimidate those who might otherwise speak out.

Recent investigations link these networks to state actors. The US Department of Justice revealed in 2024 that Tenet Media, funded by Russian operatives, generated millions of views for content attacking renewable energy and promoting climate denial. These campaigns use “wokewashing” tactics, framing renewable energy projects as harmful to local communities or wildlife, such as the unfounded narrative linking offshore wind turbines to whale deaths. The integration of state-sponsored propaganda with automated amplification creates a sophisticated engine for disinformation that traditional fact-checking methods struggle to counter.

Methodology: Distinguishing Climate Denial Bot Networks from Organic Skepticism

The identification of synthetic actors in climate discourse relies on a combination of network analysis, behavioral heuristics, and machine learning classifiers. Researchers do not simply look for accounts that disagree with the Intergovernmental Panel on Climate Change (IPCC); they look for non-human patterns of activity. The primary tool for this verification during the foundational 2020 Brown University study was the Botometer, a machine learning algorithm developed by Indiana University. This system evaluates accounts based on over 1,000 features, including tweet frequency, network centrality, and sentiment variability.

In the Brown University analysis of 6.5 million tweets surrounding the U.S. withdrawal from the Paris Agreement, accounts were assigned a score from 0 to 1. Accounts scoring above a specific threshold, frequently set at 0.43 or 0.50 depending on the required sensitivity, were classified as bots. The data showed that while organic contrarians might post a few times a day, synthetic accounts frequently exceeded human capabilities, posting hundreds of times in 24 hours. These accounts also exhibited "temporal synchronization," where thousands of profiles would tweet identical hashtags or links within milliseconds of each other, a statistical impossibility for uncoordinated human users.
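The two tests described above, score thresholding and temporal synchronization, can be sketched in a few lines. This is an illustrative toy, not Botometer's actual implementation: the 0.43 threshold comes from the study as reported, while the window width and account-count cutoff are invented for the example.

```python
from collections import defaultdict

# Illustrative sketch (not Botometer itself): split accounts by a
# bot-likelihood score against the reported threshold, then flag
# "temporal synchronization" -- many distinct accounts posting the
# same hashtag inside one narrow time window.

BOT_THRESHOLD = 0.43  # threshold reported for the Brown University analysis

def classify(scores, threshold=BOT_THRESHOLD):
    """Split {account: score in [0, 1]} into (bots, humans)."""
    bots = {a for a, s in scores.items() if s >= threshold}
    return bots, set(scores) - bots

def synchronized_bursts(posts, window_ms=100, min_accounts=50):
    """posts: list of (timestamp_ms, account, hashtag).
    Returns hashtags posted by >= min_accounts distinct accounts
    within any window_ms-wide window (the cutoffs are invented)."""
    by_tag = defaultdict(list)
    for ts, account, tag in posts:
        by_tag[tag].append((ts, account))
    flagged = set()
    for tag, events in by_tag.items():
        events.sort()
        left = 0
        for right in range(len(events)):
            # shrink the window until it spans at most window_ms
            while events[right][0] - events[left][0] > window_ms:
                left += 1
            if len({a for _, a in events[left:right + 1]}) >= min_accounts:
                flagged.add(tag)
                break
    return flagged
```

A burst of sixty accounts posting the same hashtag within a tenth of a second would be flagged; sixty humans spread over ten minutes would not.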

Table 2.1: Behavioral Markers of Synthetic vs. Organic Climate Skepticism

| Metric | Synthetic Bot Network | Organic Skeptic / Contrarian |
|---|---|---|
| Activity Volume | High-frequency (50-144+ tweets/day); operates 24/7 with no sleep pattern. | Human-paced (1-20 tweets/day); shows gaps for sleep and work. |
| Content Variance | High repetition of identical phrases, memes, or URLs; little original thought. | Varied vocabulary, personal anecdotes, and specific, unique arguments. |
| Network Behavior | Retweets specific "hub" accounts within seconds; rarely engages in threaded replies. | Engages in arguments, replies to threads, and interacts with diverse accounts. |
| Profile Metadata | Bio frequently absent; stock photos or stolen avatars; creation dates clustered together. | Distinct personal bio, unique photos, account age varies naturally. |
| Objective | Amplify discord and the perceived prevalence of denialist views. | Express ideological disagreement or economic concerns regarding policy. |

Beyond metadata, advanced methodology employs Natural Language Processing (NLP) to categorize the content of the denial. The Center for Countering Digital Hate (CCDH) uses a model known as CARDS (Computer-Assisted Recognition of Denial and Skepticism). Developed by researchers including Travis Coan and John Cook, this tool uses supervised machine learning to detect specific rhetorical claims. Unlike simple keyword searches, CARDS is trained to recognize the shift from "Old Denial" (rejecting the existence of warming) to "New Denial" (attacking solutions and scientists). This distinction is important for filtering bots, as automated networks are frequently reprogrammed to pivot instantly to new narratives, such as "grid failure" or "cost of living", in response to the news cycle.
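The supervised-classification idea behind a tool like CARDS can be sketched in miniature. This is not the actual CCDH model, which was trained on large labeled corpora; it is a toy naive Bayes word-count classifier separating "Old Denial" from "New Denial" claims, with four invented training examples.

```python
import math
from collections import Counter, defaultdict

# Toy supervised claim classifier (illustrative only, not CARDS):
# naive Bayes over word counts, trained on invented examples of
# "old" denial (warming isn't happening) vs. "new" denial
# (solutions and scientists under attack).

TRAIN = [
    ("old", "global warming is a hoax the climate is not changing"),
    ("old", "there is no warming temperatures are natural cycles"),
    ("new", "wind turbines kill whales and solar will crash the grid"),
    ("new", "climate scientists are corrupt and net zero is unaffordable"),
]

def train(examples):
    """Build per-label word counts from (label, text) pairs."""
    counts = defaultdict(Counter)
    for label, text in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Return the label with the highest Laplace-smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        lp = sum(math.log((c[w] + 1) / (total + len(vocab)))
                 for w in text.split())
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

On these toy data, a tweet about the grid "crashing because of solar" lands in the "new" bucket, while "warming is a natural hoax" lands in "old", which mirrors the Old/New taxonomy the article describes.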

The distinction between an organic skeptic and a bot is also defined by network topology. Organic skeptics exist in "small world" networks with high clustering coefficients: friends of friends know each other. Bot networks, conversely, frequently form "hub-and-spoke" structures where thousands of unconnected leaf nodes amplify a single central influencer. When researchers map these interactions, the synthetic nature becomes visible not just through individual behavior but through the artificiality of the crowd itself. This rigorous separation of human dissent from automated noise is the only way to accurately measure public opinion.
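The clustering-coefficient test can be made concrete with a hand-computed sketch on an invented six-node example: the hub of a hub-and-spoke bot network scores zero, while every member of a small organic clique scores 1.0.

```python
# Local clustering coefficient: of all pairs of a node's neighbors,
# what fraction are themselves connected? High in organic "small
# world" communities, near zero in hub-and-spoke bot amplification.

def clustering(adj, node):
    """adj: {node: set(neighbors)} for an undirected graph."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return links / (k * (k - 1) / 2)

# Hub-and-spoke: one influencer, five spokes that never interact.
hub = {"hub": {"a", "b", "c", "d", "e"}}
for s in "abcde":
    hub[s] = {"hub"}

# Small organic clique: everyone knows everyone.
clique = {n: {m for m in "xyz" if m != n} for n in "xyz"}
```

Averaged over all nodes, the same statistic cleanly separates the two shapes: every node in the clique scores 1.0, while every node in the star scores 0.0.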

The Platform Shift: Analysis of the ClimateScam Surge

The digital infrastructure of climate denial underwent a structural metamorphosis in the second half of 2022. While automated disinformation has long plagued social media, a specific and quantifiable "platform shift" occurred following the acquisition of Twitter (X) by Elon Musk. This was not merely a change in ownership; it was a fundamental rewiring of the algorithmic reward systems that govern public discourse. Data from the Climate Action Against Disinformation (CAAD) coalition and researcher Abbie Richards isolates July 2022 as the tipping point. Prior to this date, the platform hosted an average of 30,000 climate denial tweets per week. By the end of 2022, that figure had nearly tripled to 110,000 tweets per week, a surge that correlates directly with the dismantling of the platform's trust and safety teams.

This explosion of denialist content was not organic. It was engineered by policy decisions that monetized verification. An analysis by the Center for Countering Digital Hate (CCDH) found that accounts purchasing the "blue check" verification were responsible for spreading four times as much climate misinformation as unverified users. These accounts, previously reserved for authenticated public figures, were available to anyone for a monthly fee, granting their content algorithmic priority. Consequently, the denialist narrative shifted from the fringes to the "For You" feeds of millions of users who had never engaged with such content. The algorithm began to favor controversy and toxicity over accuracy, creating a feedback loop where anti-science rhetoric generated the high engagement metrics the system was designed to amplify.

The #ClimateScam Anomaly

The most striking evidence of this algorithmic manipulation is the weaponization of the hashtag #ClimateScam. In late 2022 and throughout 2023, users searching for neutral terms like "climate" or "environment" were frequently served #ClimateScam as the top auto-complete suggestion. This occurred even when verified data showed that hashtags like #ClimateEmergency and #ClimateAction had significantly higher raw engagement and post volumes. The platform's recommendation engine artificially inflated the visibility of the denialist tag, funneling curious users into a pipeline of disinformation.

Table 3.1: The "Platform Shift" Metrics (2022-2023)

| Metric | Pre-Acquisition (Jan-Jun 2022) | Post-Acquisition (Jul-Dec 2022) | % Change |
|---|---|---|---|
| Weekly Denial Tweets | ~30,000 | ~110,000 | +266% |
| #ClimateScam Search Rank | Not Ranked | #1 Suggestion | N/A |
| Hate Speech/Toxicity | Baseline | Doubled | +100% |

The nature of the denial itself also mutated during this period. We witnessed a transition from "Old Denial", the outright rejection of warming, to "New Denial," which focuses on attacking solutions and scientists. CCDH reports indicate that by 2023, "New Denial" narratives constituted 70% of all climate denial content on YouTube, a trend that was mirrored on X. This strategy is more insidious; it acknowledges the changing climate but frames all proposed solutions as "scams," "hoaxes," or "totalitarian control methods." This narrative shift allows bad actors to bypass basic fact-checking filters that look for simple phrases like "climate change isn't real," while still achieving the goal of delaying action.

"The algorithm is not a neutral arbiter. It is a radicalization engine that has been retuned to prioritize friction. When #ClimateScam outranks #ClimateAction despite having fewer organic interactions, we are looking at a system that is deliberately putting its thumb on the scale." (Analysis from CAAD Coalition Report, 2023)

This platform shift has had tangible consequences for the scientific community. A 2023 survey by Global Witness found that online abuse towards climate scientists had intensified, leading some experts to withdraw from public engagement entirely. The "digital town square" has been rezoned to exclude expert testimony in favor of paid provocation. The surge in #ClimateScam is not a reflection of shifting public opinion; it is a metric of algorithmic failure and the successful monetization of manufactured doubt.

Mapping the Nodes: Visualizing Centrality in Disinformation Clusters

The architecture of modern climate denial is not a grassroots web of concerned citizens; it is a highly centralized "hub-and-spoke" system engineered for maximum contagion. Network analysis conducted between 2024 and early 2026 reveals that the vast majority of viral climate disinformation does not originate from the periphery. Instead, it flows from a small cohort of "super-spreader" nodes, high-centrality accounts that act as the primary injection points for false narratives. A July 2025 study by the Center for Countering Digital Hate (CCDH) and Global Witness found that just ten specific accounts were responsible for 69% of all climate denial content circulating on Facebook, a metric that demonstrates the extreme fragility and artificiality of this ecosystem.

These central nodes function as "bridges," connecting otherwise isolated clusters of users. In graph theory terms, these accounts possess high "betweenness centrality," meaning they control the flow of information between the fringe conspiracy communities and mainstream conservative political discourse. A March 2026 report by Graphika, Bot or Not? Understanding Automated Attacks, details how these nodes use "cyborg" accounts, profiles that toggle between automated posting and human intervention, to evade platform detection algorithms. By manually engaging with high-profile users while automating the mass-retweeting of denialist content, these cyborgs "launder" disinformation, giving it a veneer of organic engagement that pure bots cannot achieve.
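Betweenness centrality can be computed directly on a toy graph. The sketch below (an invented five-account example, not Graphika's methodology) counts the share of shortest paths between other accounts that pass through a given node; the single bridge account joining two clusters dominates the score while ordinary cluster members score zero.

```python
from collections import deque

# Betweenness centrality on a small undirected graph: for each pair
# (s, t), count the fraction of shortest s-t paths passing through v.

def bfs(adj, s):
    """Return (distance, shortest-path-count) dicts from source s."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj, v):
    dist, sigma = {}, {}
    for s in adj:
        dist[s], sigma[s] = bfs(adj, s)
    score = 0.0
    for s in adj:
        for t in adj:
            if v in (s, t) or s == t or t not in dist[s]:
                continue
            # v lies on a shortest s-t path iff distances add up exactly
            if dist[s].get(v, -1) + dist[v].get(t, -2) == dist[s][t]:
                score += sigma[s][v] * sigma[v][t] / sigma[s][t]
    return score / 2  # each undirected pair was counted twice

# Two clusters (f*, m*) joined only through a single "bridge" account.
edges = [("f1", "f2"), ("f1", "bridge"), ("f2", "bridge"),
         ("bridge", "m1"), ("m1", "m2"), ("m2", "bridge")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)
```

On this graph every cross-cluster shortest path runs through "bridge", so its score is 4.0 while "f1" scores 0.0, the topological signature the text attributes to super-spreader nodes.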

The Hierarchy of Influence

The structure of these networks is hierarchical. At the top sit the "Command Nodes", verified accounts belonging to pseudo-media outlets and funded influencers. Beneath them are the "Amplification Rings," dense clusters of automated bots that instantly engage with Command Node content to trigger platform algorithms. The University of Michigan's 2024 analysis of X (formerly Twitter) data identified that this amplification is not random; it is time-locked to specific geopolitical events, such as COP summits or extreme weather disasters. When a Command Node posts a new narrative, for example, the false claim that "renewable energy caused the 2025 Iberian blackout", the Amplification Ring responds within seconds, creating a manufactured "trend" that forces the narrative into the feeds of unsuspecting users.

| Cluster Type | Primary Function | Key Metric (2024-2025) | Identified Tactics |
|---|---|---|---|
| The "Toxic Ten" | Content Creation & Injection | 69% of Facebook denial volume | Cross-platform coordination; "Solutions Denial" narratives. |
| Misinfluencers (LinkedIn) | Professional Legitimacy | Top 5% of authors generate 39% of posts | Credential abuse; pseudo-scientific white papers. |
| Cyborg Accounts | Algorithm Manipulation | >3 classification flips per month | Mixed manual/auto posting; evasion of "bot" labels. |
| Echo Amplifiers | Volume & Noise | 49.6% of total internet traffic (2024) | Hashtag flooding; "reply-guy" harassment swarms. |

The evolution of these networks has shifted toward "New Denial." A May 2025 analysis revealed that 70% of climate misinformation on YouTube focuses on attacking solutions, such as electric vehicles or wind turbines, rather than denying the warming itself. This shift requires a different network topology. Instead of the rigid echo chambers of 2020, the 2025 networks are permeable. They aggressively target "fence-sitter" communities, such as agricultural forums or automotive enthusiast groups, using these nodes to insert anti-climate talking points into unrelated discussions. For instance, the "bot-like" network targeting Mark Carney in April 2025 did not just stay within political clusters; it permeated financial and agricultural discussion boards, framing climate policy as a direct financial threat to farmers and investors.

This centralization presents a paradox for regulators. While the volume of noise is immense, the sources are few. The infrastructure relies on a limited number of high-value nodes to sustain the illusion of a debate. If platforms were to enforce their own terms of service against these specific super-spreaders, the structural integrity of the entire disinformation network would collapse. The persistence of these nodes suggests that their presence is not an accident of the algorithm but a feature of the engagement economy.

The 25 Percent Rule: How Minority Bot Groups Dominate Discourse

The strategic deployment of automated accounts is not a game of raw numbers; it is a calculated exploitation of sociological tipping points. In 2018, researchers at the University of Pennsylvania, led by Damon Centola, identified a critical threshold for social change: when a committed minority reaches approximately 25 percent of a population, they can reverse established norms and dominate the majority consensus. The bot networks infiltrating climate discourse have hit this exact mathematical benchmark. The Brown University data cited earlier, placing bot prevalence at 25 percent, is not a coincidence. It is a precise operational target designed to trigger a "complex contagion" that overrides human consensus.
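The committed-minority dynamic can be illustrated with a deliberately simplified voter model. This is not Centola's experimental design (his naming-game setup, with agent memory, is what produces the ~25 percent threshold); it shows only the weaker point that a bloc of agents who never update their position can eventually flip everyone else. All parameters here are invented.

```python
import random

# Simplified "committed minority" voter model (illustrative only).
# A fixed bloc of agents never changes its message; everyone else
# copies whoever they last heard. Because the committed bloc never
# yields, its view is the only absorbing state.

def steps_to_takeover(n=30, committed=8, max_steps=200_000, seed=42):
    """Return how many pairwise interactions until the committed
    opinion holds the whole population, or None if not converged."""
    rng = random.Random(seed)
    # Agents 0..committed-1 form the committed minority.
    opinions = ["denial"] * committed + ["consensus"] * (n - committed)
    for step in range(max_steps):
        speaker, listener = rng.sample(range(n), 2)
        if listener >= committed:          # committed agents never update
            opinions[listener] = opinions[speaker]
        if "consensus" not in opinions:    # full takeover
            return step + 1
    return None
```

With these invented numbers (8 committed agents out of 30, roughly the 25 percent mark), takeover is essentially certain given enough interactions. The substantive claim in the text is about where that threshold sits against resistant humans, which this toy deliberately does not capture.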

This "25 Percent Rule" explains why a scientific consensus shared by 99 percent of experts and supported by a global majority feels like a raging controversy online. The minority does not need to outnumber the majority; they only need to reach the volume threshold required to fracture the perception of reality. Once this threshold is crossed, the majority falls into a "spiral of silence," a phenomenon where individuals withhold their views because they falsely believe they are in the minority. The bots do not win by convincing humans; they win by making humans feel outnumbered.

Recent data from the Nature Climate Change journal (2024) and the 89 Percent Project quantifies the catastrophic success of this strategy. While 89 percent of global citizens demand intensified government action on climate change, these same individuals estimate that only 43 percent of their peers agree with them. This massive “perception gap”, a 46-point deficit between reality and perceived reality, is the direct product of synthetic amplification. Bot networks manufacture a loud, hostile illusion of opposition that drowns out the quiet, confused majority.

The Perception Gap: Reality vs. Manufactured Consensus

The following table illustrates the gap between actual public sentiment and the distorted reality created by algorithmic amplification, based on 2024-2025 data metrics.

| Metric | Actual Human Consensus | Perceived Consensus (Distorted) | Bot/Troll Volume Contribution |
|---|---|---|---|
| Support for Climate Action | 89% | 43% | ~25-38% (topic dependent) |
| Belief in Anthropogenic Warming | 99% (scientific) / ~75% (public) | ~50% (perceived split) | High intensity (denial focus) |
| Willingness to Pay for Solutions | 69% | Unknown (suppressed) | Targeted economic fearmongering |

The operational capacity of these networks shifted aggressively in 2025. An analysis by Climate Action Against Disinformation (CAAD) revealed that bot clusters are no longer just spreading general denial; they are executing precision strikes against specific policy frameworks and leaders. During the lead-up to the 2025 federal elections in Canada and policy debates in the UK, “bot-like” networks swarmed discussions regarding “Net Zero”, specifically attacking figures like Mark Carney. These accounts exhibited non-human behaviors: hyper-partisan retweeting, sub-second response times, and the recycling of identical text snippets across thousands of “unique” profiles.

These networks function as a "quarantine" mechanism. By flooding the zone with toxicity and ridicule, they raise the social cost of posting about climate solutions. A human user who posts about renewable energy is statistically likely to be targeted by a bot cluster within minutes, receiving a barrage of bad-faith arguments and insults. This Pavlovian punishment trains the majority to disengage, leaving the 25 percent minority to claim the public square. The result is a digital environment where the fringe dictates the mainstream narrative, and the majority is silenced by an enemy that does not sleep, does not breathe, and does not care.

Astroturfing Mechanics: Manufacturing False Consensus

The volume of this synthetic noise has intensified since 2022. Analysis by the Center for Countering Digital Hate (CCDH) and researcher Abbie Richards shows a distinct shift in the operational capacity of these networks. Between December 2021 and July 2022, X (formerly Twitter) hosted approximately 30,000 climate denial accounts that operated in coordinated bursts. These were not random skeptics but "retweet rings": clusters of accounts programmed to amplify specific hashtags like #ClimateScam simultaneously. This tactic, known as "hashtag hijacking," exploits platform algorithms to force fringe denialist narratives into the "Trending" sidebars of millions of users who never searched for the topic.

The “Sock Puppet” Ecosystem

Beyond automated bots, the industry uses “sock puppets”, accounts controlled by paid operatives posing as concerned citizens. This technique is most visible in the sudden proliferation of “grassroots” anti-renewable energy groups. A 2023 investigation by Brown University’s Climate and Development Lab, titled Against the Wind, mapped the financial DNA of these organizations. The study found that groups appearing to be local concerned citizens were actually nodes in a centrally funded network.

The report identified that 17 key think tanks and advocacy groups involved in fighting offshore wind projects received $72 million in funding between 2017 and 2021 from six major fossil fuel-linked donors, including the Charles Koch Foundation and the American Fuel & Petrochemical Manufacturers. These national bodies then created or supported local “front groups” to simulate community outrage.

| "Grassroots" Front Group | Parent/Affiliated Organization | Verified Funding Source / Link |
|---|---|---|
| Save Our Beach View | Caesar Rodney Institute (CRI) | American Fuel & Petrochemical Manufacturers (AFPM) |
| Save the Right Whales Coalition | Environmental Progress | Michael Shellenberger (founder); linked to CRI |
| American Coalition for Ocean Protection | Caesar Rodney Institute | American Energy Alliance ($15k direct grant) |
| Texas Public Policy Foundation (campaign) | State Policy Network | Koch Network; $500k raised for anti-wind ads |

Weaponizing Empathy: The Whale Strategy

The most cynical application of astroturfing involves the weaponization of environmental concern against itself. In 2023, the Texas Public Policy Foundation (TPPF) launched a $500,000 campaign featuring ads that blamed offshore wind turbines for whale deaths. One widely circulated image depicted a dead whale on a beach with wind turbines superimposed in the background, a digital fabrication. Verified necropsy data from NOAA Fisheries confirms that zero whale deaths have been attributed to offshore wind equipment; the primary causes remain vessel strikes and fishing gear entanglement. Yet, the TPPF campaign successfully shifted the narrative, with "New Denial" tactics (attacking solutions rather than science) constituting 70% of climate denial content on YouTube in 2023, up from just 35% in 2018.

These networks operate as an echo chamber. A fringe claim originates from a "sock puppet" account, is amplified by a "retweet ring" of 30,000 bots, trends on the platform, and is picked up by mainstream media outlets reporting on "growing local opposition." The consensus is not lost; it is buried under a landslide of algorithmic lies.

The Role of Generative AI: LLMs Writing Denialist Copy

The automation of climate disinformation has transitioned from crude, repetitive scripts to sophisticated, context-aware fabrication. Large Language Models (LLMs) have collapsed the cost of producing denialist propaganda to near zero, enabling bad actors to flood the information ecosystem with high-fidelity falsehoods. Unlike human troll farms, which require wages and management, generative AI operates continuously, producing unique variations of denialist narratives that evade traditional spam filters. By late 2024, this technological shift resulted in a “perfect storm” where synthetic content began to outpace human discourse on key climate policy topics.

The most significant impact of generative AI is the strategic pivot from "Old Denial" (rejecting the existence of global warming) to "New Denial" (attacking solutions and scientific integrity). A 2024 analysis by the Center for Countering Digital Hate (CCDH) utilized AI to process over 12,000 YouTube transcripts, revealing that "New Denial" narratives constitute 70% of all climate denial content, up from 35% in 2018. LLMs are specifically prompted to generate arguments that do not flatly deny warming, which is easily fact-checked, but instead sow doubt about the reliability of solar energy, the feasibility of electric vehicles, or the integrity of climate scientists.

Shift in Climate Denial Narratives (2018 vs. 2023)
Source: Center for Countering Digital Hate (CCDH), January 2024

| Narrative Category | 2018 Share of Claims | 2023 Share of Claims | % Change |
|---|---|---|---|
| Old Denial (e.g., "Global warming isn't real") | 65% | 30% | -54% |
| New Denial (e.g., "Solutions won't work") | 35% | 70% | +100% |
| Sub-claim: "Climate solutions won't work" | 9% | 30% | +233% |
| Sub-claim: "Science is unreliable" | 23% | 35% | +52% |

This shift is supported by a sprawling infrastructure of AI-generated "pink slime" news sites. NewsGuard, a disinformation tracking organization, identified 2,089 AI-generated news and information sites by December 2024. These outlets, with generic names like "iBusiness Day" or "Daily Time Update," use LLMs to rewrite legitimate news stories with denialist spins or fabricate events entirely to generate programmatic ad revenue. The scale is industrial; these sites operate with little to no human oversight, churning out thousands of articles daily that dilute authoritative climate reporting in search engine results.

The models themselves display a high susceptibility to propagating these narratives when tested. In an August 2023 audit, NewsGuard found that GPT-4 generated misinformation in response to 98% of prompts based on known conspiracy theories. While safety guardrails improved slightly by December 2024, leading chatbots still repeated false claims in 40.33% of tests. This "hallucination" rate is weaponized by denial networks, which use "jailbroken" or uncensored versions of these models to produce content that mainstream AI safety would block.

A notable instance of this weaponization occurred in early 2025, when a paper titled "A Critical Reassessment of the Anthropogenic CO2-Global Warming Hypothesis" went viral on X (formerly Twitter). The document, which claimed to refute the greenhouse gas effect, was marketed as being authored by xAI's Grok 3. While the AI's official account later denied authorship, the incident demonstrated how the veneer of "AI objectivity" is used to launder anti-science rhetoric. Similarly, in December 2023, prominent climate denier Alex Epstein launched "Alex Epstein AI," a custom bot explicitly designed to debate climate scientists and spread fossil-fuel advocacy arguments, marking the arrival of personalized, automated denial agents.

Multi-Platform Migration: From Fringe Boards to Mainstream Feeds

The dissemination of climate disinformation is no longer a series of isolated incidents; it is a functioning industrial supply chain. Narratives do not simply appear on mainstream feeds; they are incubated in unregulated fringe communities, tested for engagement, and then systematically bridged to major platforms. Analysis by the Climate Action Against Disinformation (CAAD) coalition in 2024 confirms that this "borderless" ecosystem allows bad actors to seed conspiracies in low-moderation environments like Telegram and 4chan before amplifying them on X (formerly Twitter), Facebook, and YouTube.

The incubation phase frequently occurs on imageboards such as 4chan and 8kun. A 2021 investigation by Draft News identified these platforms as primary testing grounds where users coordinated the weaponization of Wikipedia links to support denialist arguments. Once a narrative, such as the “Climate Lockdown” conspiracy, gains traction in these echo chambers, it is migrated to platforms like Telegram. Here, specific channels act as coordination hubs, instructing followers to flood mainstream comment sections. Data from the Institute for Strategic Dialogue (ISD) indicates that this cross-platform coordination has become increasingly sophisticated, with Telegram groups serving as command centers for raids on scientific posts on X.

The acquisition of Twitter by Elon Musk in 2022 marked a critical turning point in this migration pipeline. Prior to the acquisition, the platform hosted an average of 30,000 climate denial tweets per week. Following the dismantling of trust and safety teams, CAAD and researcher Abbie Richards found that this figure tripled to approximately 110,000 tweets per week by 2023. This policy vacuum removed the filter that once kept fringe-incubated hate speech and anti-science rhetoric from dominating the central town square. The result is a mainstream environment where manufactured consensus from the fringe is presented as legitimate public debate.

This migration has coincided with a tactical shift in the content itself. As outright denial of warming becomes harder to sustain against visible extreme weather, bot networks have pivoted to "New Denial." A January 2024 report by the Center for Countering Digital Hate (CCDH) utilized AI to analyze over 12,000 YouTube videos, revealing that attacks on climate solutions and the integrity of scientists constitute the majority of denialist content. This "New Denial" is designed to be more palatable to mainstream algorithms than the blunt rejection of physics found on 4chan, allowing it to bypass basic moderation filters while still delaying action.

The Narrative Shift: Old vs. New Denial

The following table illustrates the strategic pivot in denialist narratives on YouTube, a primary destination for content migrated from fringe boards. The data compares the prevalence of specific narrative types between 2018 and 2023.

Evolution of Climate Denial Narratives on YouTube (2018 vs. 2023)

| Narrative Category | 2018 Share of Claims | 2023 Share of Claims | Change (pts) |
|---|---|---|---|
| Old Denial (e.g., "Global warming isn't happening") | 65% | 30% | -35 |
| New Denial (e.g., "Solutions won't work", "Science is unreliable") | 35% | 70% | +35 |

Source: Center for Countering Digital Hate (CCDH), “The New Climate Denial” Report, January 2024.

The operational success of this pipeline is clear in the reach of these narratives. In 2023 alone, YouTube videos containing these “New Denial” claims amassed over 325 million views. This volume drowns out authoritative scientific communication. For instance, during the 2022 UN Climate Change Conference (COP27), CAAD analysts observed that verified climate science content was frequently eclipsed in engagement by hostile narratives that had originated in fringe Telegram groups days earlier. The migration is complete: the fringe has not just entered the mainstream; it has colonized it.

The Financial Trail: Dark Money Behind the Algorithms

The algorithmic amplification of climate denial is not an organic phenomenon; it is a purchased service. While the bots themselves are digital, the capital that sustains them is tangible, traceable, and massive. Investigations into the financial structures underpinning these networks reveal a sophisticated “dark money” pipeline designed to anonymize fossil fuel interests before they reach the digital front lines. The primary method for this obfuscation is the Donor-Advised Fund (DAF), a financial vehicle that allows wealthy contributors to receive immediate tax deductions while shielding their identities from public scrutiny.

At the center of this web sits Donors Trust, a group frequently described as the “Dark Money ATM” of the conservative movement. Between 2020 and 2024, Donors Trust and its affiliates funneled hundreds of millions of dollars to organizations that actively generate the content fed into bot networks. In 2024 alone, Donors Trust distributed $195.3 million to right-wing influence groups, with $8.1 million specifically earmarked for ten organizations actively peddling climate misinformation. This structure allows entities like Koch Industries and ExxonMobil to ostensibly distance themselves from the digital toxic waste they finance, while ensuring the “New Denial” narratives, which attack climate solutions rather than the science itself, receive steady funding.

The scale of this subsidization is substantial. A 2024 report by the Climate Accountability Research Project estimated that between 2020 and 2022, at least $219 million in tax-subsidized charitable donations flowed to organizations promoting climate disinformation. The true figure may be as high as $1 billion, obscured by the opacity of DAFs. This capital does not just pay for white papers; it funds the production of slick, shareable video content and memes that bot networks are programmed to amplify. The Center for Countering Digital Hate (CCDH) found that this funding shift has enabled a strategic pivot: 70% of climate denial content on YouTube in 2023 focused on attacking solutions (such as renewable energy reliability) rather than denying global warming directly, a narrative shift paid for by these hidden donors.

| Recipient Organization | Primary Function in Bot Ecosystem | Est. Dark Money Received (Selected Years) |
| --- | --- | --- |
| Competitive Enterprise Institute (CEI) | Produces “policy” content attacking net-zero; highly circulated by bot clusters. | $2.8 million (2024, via Donors Trust) |
| Heartland Institute | Generates “fake science” reports and “climate realism” narratives for social distribution. | $13.3 million (2014-2020, via Donors Trust/Capital Fund) |
| Atlas Network Affiliates | Coordinates global “grassroots” social media campaigns against climate policy. | Undisclosed (funded via Exxon/Koch channels) |
| The Epoch Times | High-volume publisher of denial content; monetized via programmatic ads. | ~$960,000 (2024 Google ad revenue est.) |

The Atlas Network, a global coalition of more than 500 free-market think tanks, plays a central role in the internationalization of this funding. Investigations reveal that the Atlas Network has received funding from ExxonMobil and the Koch network to manufacture “grassroots” opposition to climate action in regions like Latin America and Australia. These campaigns are designed to look organic but are operationally dependent on the same dark money streams. In 2025, documents surfaced showing that the Atlas Network’s strategy involves creating “digital pressure” on policymakers, a euphemism for the deployment of coordinated social media attacks frequently executed by automated accounts.

This financial trail also implicates major mainstream financial institutions. Since 2020, commercial donor-advised funds managed by Fidelity, Schwab, and Vanguard have processed over $171 million in anonymous grants to groups aligned with the Project 2025 climate denial agenda. These financial giants serve as the laundering method, stripping the “fossil fuel” label from the money before it reaches the content creators. The result is a fully funded, self-sustaining ecosystem of disinformation where the original source of the capital, the entities with the most to lose from decarbonization, remains invisible to the public eye.

“We have totally unimpeachable evidence that Exxon accurately predicted global warming years before it turned around and publicly attacked climate science and scientists. The funding we see today is simply the modern, digital evolution of that same deception strategy.” (Geoffrey Supran, Harvard University Research Fellow, 2023)

The monetization of this content completes the pattern. Platforms like Google and YouTube profit directly from the ads running alongside this denial material. The Center for Countering Digital Hate estimates that Google makes millions annually from ads on climate denial content, creating a perverse incentive structure where the platforms hosting the bots and the misinformation have a financial interest in their continued proliferation. The money does not just buy bots; it buys the silence and complicity of the digital infrastructure itself.

Fossil Fuel Intermediaries: Tracing Shell Companies

The architecture of synthetic climate denial relies on a sophisticated financial laundering system designed to sever the link between fossil fuel extraction and public disinformation. Major oil and gas conglomerates do not pay bot farm operators directly. Instead, they use a multi-layered network of intermediaries, donor-advised funds, and public relations firms to obfuscate the source of capital. This “dark money” pipeline ensures that while the digital footprint of denial is visible, the hand writing the checks remains hidden behind a veil of corporate anonymity.

At the center of this obfuscation is Donors Trust, a donor-advised fund frequently described as the “Dark Money ATM” of the conservative movement. Between 2020 and 2022 alone, verified tax records indicate that approximately $219 million flowed through such opaque vehicles to organizations actively promoting climate disinformation. By pooling contributions from multiple donors into a single fund, Donors Trust erases the identity of the original benefactor before grants are disbursed to frontline denial groups and digital operation centers.

The PR-Industrial Complex

The operational execution of these campaigns is frequently subcontracted to elite public relations firms that specialize in “reputation management” and “astroturfing” (the creation of fake grassroots movements). Investigations by Clean Creatives and congressional oversight committees have exposed a direct line of contracting between fossil fuel majors and these agencies. The F-List 2024 report identified 1,010 active contracts between fossil fuel companies and advertising or PR agencies, a network that serves as the logistical backbone for synthetic advocacy.

Verified Fossil Fuel Intermediaries & Digital Operations (2015-2025)

| Intermediary / Firm | Client / Funder | Operational Tactic | Verified Metric |
| --- | --- | --- | --- |
| Donors Trust | Anonymous (Koch, Exxon links) | Funding obfuscation | $146M+ to denial groups (2002-2011); $219M+ (2020-2022) |
| Story Partners | Noble Energy (Chevron) | Astroturf campaigns | Created fake “grassroots” groups to defeat Colorado setbacks |
| FTI Consulting | Mainstream oil & gas | Fake news websites | Created pro-fracking sites posing as citizen journalism |
| Edelman | ExxonMobil, Shell | Trust barometers / ad buys | Retained fossil fuel clients despite its 2022 climate pledge |
| Atlas Network | ExxonMobil, Koch | Global think-tank coordination | Coordinated “Vote No” disinformation in Australia |

The “Astroturf” method

The transition from funding to bot deployment occurs within the “digital strategy” departments of these contracted firms. For instance, FTI Consulting was caught creating fake pro-fracking news portals that appeared to represent local citizen interests but were entirely industry-fabricated. Similarly, Story Partners, hired by Noble Energy (now part of Chevron), engineered astroturf campaigns to crush local environmental regulations in Colorado. These firms do not merely buy ads; they build entire synthetic ecosystems. They hire “clout” management services that use automated accounts to amplify specific narratives, creating a “bandwagon effect” that drowns out organic public dissent.

Recent regulatory actions confirm the scale of this deception. In late 2024, the Federal Trade Commission (FTC) finalized new rules specifically targeting the purchase of “fake social media indicators,” citing the prevalence of bot-generated reviews and followers in deceptive marketing. This crackdown highlights the industrial scale of the problem: the “reviews” and “comments” flooding climate policy discussions are frequently commercial products, bought and sold by intermediaries to inflate the perceived support for fossil fuel expansion.

The Atlas Network Connection

Beyond domestic PR firms, the Atlas Network serves as a global distribution hub for denial narratives. Documents from 2023 and 2024 reveal that Atlas-affiliated think tanks in Australia and Latin America received funding to replicate successful U.S. disinformation strategies. In the case of the Australian “Voice to Parliament” referendum, Atlas-linked entities deployed identical “astroturf” tactics used against offshore wind projects in the United States, utilizing a mix of dark money and digital amplification to manipulate public sentiment. This global franchising of denial ensures that a bot technique perfected in Texas can be rapidly deployed in Sydney or Brazil, paid for by the same multinational revenue streams routed through different local shell entities.

The Ad Tech Ecosystem: Monetizing Climate Misinformation


The financial engine driving the proliferation of climate denial is not a shadowy conspiracy of handshakes in backrooms; it is an automated, algorithmic marketplace operating in plain sight. Programmatic advertising, the automated buying and selling of online ad space, has inadvertently created a revenue pipeline for climate disinformation. Brands with stated sustainability goals are frequently finding their marketing budgets funneled into the pockets of denialist outlets, facilitated by major ad tech intermediaries that prioritize inventory volume over content verification.

Research conducted by the Center for Countering Digital Hate (CCDH) in January 2024 exposed the scale of this monetization on YouTube. Despite Google’s 2021 pledge to demonetize content denying the scientific consensus on climate change, the platform continues to profit from a “New Denial” narrative. This evolved strategy shifts focus from rejecting global heating entirely to attacking solutions like renewable energy and electric vehicles. The CCDH analysis estimates that YouTube generates up to $13.4 million annually in ad revenue from channels propagating these narratives. The creators of this content exploit keyword blocking failures, using terms that evade standard demonetization filters while still delivering anti-science rhetoric to millions of viewers.

The programmatic supply chain is equally porous for display advertising on websites. A separate investigation by the Climate Action Against Disinformation (CAAD) coalition in October 2022 analyzed 113 prominent climate disinformation sites. The data revealed that Google’s display ad network was responsible for placing ads on nearly half of these sites, potentially generating $7.67 million in annual revenue. This system automates the placement of ads for major brands, including Costco, Politico, and Tommy Hilfiger, alongside content that claims climate change is a hoax or a “scam.” These brands are frequently unaware their ad spend is subsidizing the very misinformation they publicly disavow.
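Revenue estimates like the $7.67 million figure are typically built from measured ad impressions multiplied by a CPM (cost per thousand impressions), then split between the publisher and the ad network. A back-of-envelope sketch, with all input numbers illustrative rather than drawn from the CAAD report:

```python
# Back-of-envelope sketch of how watchdog groups estimate programmatic ad
# revenue: impressions x CPM, annualized, then split between publisher and
# ad network. The CPM and the 68/32 split below are illustrative assumptions.
def estimate_annual_revenue(monthly_impressions: int, cpm_usd: float,
                            publisher_share: float = 0.68) -> dict:
    gross = monthly_impressions / 1000 * cpm_usd * 12  # annualized gross revenue
    return {
        "gross": round(gross, 2),
        "publisher": round(gross * publisher_share, 2),
        "ad_network": round(gross * (1 - publisher_share), 2),
    }

# e.g., a site serving 50M ad impressions/month at a $2.00 CPM
print(estimate_annual_revenue(50_000_000, 2.00))
# → {'gross': 1200000.0, 'publisher': 816000.0, 'ad_network': 384000.0}
```

The real uncertainty in such estimates lies in the inputs (impression counts and effective CPMs vary widely by site and ad format), which is why published figures are usually framed as upper or lower bounds.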

The following table details specific instances of ad tech platforms monetizing disinformation outlets between 2021 and 2024, highlighting the disconnect between corporate policy and algorithmic enforcement.

Table 11.1: Ad Tech Monetization of Climate Disinformation (2021-2024)

| Platform / Intermediary | Disinformation Outlet / Case | Estimated Revenue / Ad Volume | Policy Violation |
| --- | --- | --- | --- |
| Google / YouTube | 96 denial channels (e.g., “New Denial” narratives) | $13.4 million / year (est.) | Monetization of content attacking climate solutions despite 2021 ban. |
| Google Display Network | The Epoch Times | $1.4 million / year (combined: $960k to publisher; $450k to Google) | Ads ran on denialist articles. |
| Criteo, OpenX, Taboola | Breitbart, Newsmax, Townhall | Undisclosed (high volume) | Placement of ads on sites rejecting human-caused climate change. |
| Meta (Facebook/Instagram) | Fossil fuel “greenwashing” campaigns | £800,000 (BP alone, 7 months of 2022) | Accepting ad spend for misleading “net zero” claims that violate greenwashing codes. |

The mechanics of this ecosystem rely on the “long tail” of the internet. Ad exchanges like those owned by Microsoft (AppNexus), Amazon, and Criteo aggregate inventory from thousands of websites. While they maintain policies against hate speech or misinformation, enforcement is reactive and inconsistent. A December 2023 report identified that exchanges owned by these tech giants placed ads on at least 8 of 15 high-profile denial websites analyzed. The revenue split favors the publisher, meaning every ad impression served on a denialist article directly funds the creation of more such content.

Moreover, the definition of “denial” used by these platforms remains dangerously narrow. Most enforcement algorithms are trained to flag statements explicitly stating “climate change is not real.” They frequently fail to catch the more pervasive “delayism”: arguments that acknowledge warming but falsely claim mitigation is too expensive, impossible, or authoritarian. This gap allows bad actors to monetize 70% of the denial content currently circulating, rendering the 2021 demonetization policies obsolete before they were even fully enforced.
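The enforcement gap described above can be sketched in a few lines. Assume a hypothetical platform blocklist of explicit-denial phrases: the filter catches "old denial" wording but passes delayist framings untouched.

```python
# Sketch of why narrow enforcement fails: a filter keyed on explicit denial
# phrases flags "old denial" but passes "delayist" framings untouched.
# This phrase list is a hypothetical stand-in for a platform's blocklist.
EXPLICIT_DENIAL = ["climate change is not real", "global warming is a hoax"]

def is_flagged(post: str) -> bool:
    p = post.lower()
    return any(phrase in p for phrase in EXPLICIT_DENIAL)

posts = [
    "Global warming is a hoax invented by elites.",               # old denial: caught
    "Sure, it's warming, but net zero will bankrupt you.",        # delayism: missed
    "Heat pumps fail in winter and renewables cause blackouts.",  # delayism: missed
]
print([is_flagged(p) for p in posts])  # [True, False, False]
```

Real moderation systems are machine-learned rather than literal string matches, but the failure mode is the same: a classifier trained only on explicit denial has never seen delayist phrasing labeled as a violation.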

State Sponsored Interference: Petro State Cyber Operations

The digital architecture of climate discourse is under sustained assault from state-aligned actors. Beyond commercial astroturfing, sovereign entities, specifically petro-states, have deployed military-grade cyber operations to manipulate public perception, harass activists, and derail international negotiations. Evidence from 2022 to 2025 indicates a strategic bifurcation: while Russia deploys chaos agents to fracture Western consensus, Gulf monarchies operate sophisticated “greenwashing” botnets to insulate their fossil fuel revenues.

The “Doppelgänger” Campaign and Russian Chaos Agents

Russia’s interference strategy has evolved from simple denial to a complex “cognitive war” designed to paralyze climate policy through social polarization. A February 2025 report by Poland’s military counterintelligence service identified a long-term operation where Russian state media and bot networks systematically injected “climate of chaos” narratives into European discourse. This aligns with the “Doppelgänger” campaign exposed by EU DisinfoLab, which utilized AI to clone legitimate media sites (such as The Guardian and Bild) to host fabricated articles attacking climate policies.

The Center for Countering Digital Hate (CCDH) quantified the impact of these shifting tactics in a January 2024 analysis. Their data revealed a massive pivot to “New Denial”: narratives that accept climate change is happening but insist solutions are unworkable or “globalist plots.” By 2023, these narratives constituted 70% of all climate denial content on YouTube, up from 35% in 2018. In the United States, the Department of Justice’s 2024 indictment of Tenet Media exposed a direct financial link, alleging that Russian state employees funneled nearly $10 million to a U.S. media company to propagate divisive content, including climate denial, under the guise of domestic political commentary.

Gulf Monarchies: Automated Greenwashing at COP Summits

In contrast to Russia’s chaos-driven method, the United Arab Emirates (UAE) and Saudi Arabia have deployed cyber assets to manufacture consent for continued fossil fuel extraction. During the lead-up to COP28 in Dubai, digital forensic analysis by researcher Marc Owen Jones uncovered a coordinated network of at least 1,900 X (formerly Twitter) accounts dedicated to promoting the UAE’s climate credentials. This “astroturf” army, part of a larger cluster of 7,000 accounts, systematically amplified posts praising COP28 President Sultan Al Jaber, who also serves as CEO of the state oil company ADNOC, while attacking critics of the summit’s fossil fuel agenda.

Saudi Arabia’s operations exhibit similar coordination but frequently target regional Arabic-speaking audiences to inoculate them against global decarbonization pressure. Investigations into the “Diavolo” network and other state-aligned clusters have identified thousands of automated accounts promoting hashtags such as #climate_change_hoax (translated). These networks operate in tandem with official state messaging; for instance, the Saudi Green Initiative ran specific paid advertisements during key climate summits to project an image of sustainability while the Kingdom’s negotiators worked to dilute fossil fuel phase-out language. Climate Action Tracker rates Saudi Arabia’s overall climate action as “Critically Insufficient,” a reality these cyber operations are designed to obscure.

Iran: Cyber-Enabled Repression of Environmental Dissent

Iran represents a third distinct model of state interference: the weaponization of cyber dominance to suppress internal environmental dissent. Rather than projecting denial outward, the Iranian regime uses internet blackouts and surveillance to crush protests stemming from water mismanagement and climate-induced drought. During the Isfahan water protests and subsequent unrest between 2022 and 2024, the state repeatedly severed internet access in affected provinces to prevent the organization of activists and the dissemination of footage documenting the crackdown.

Simultaneously, Iranian state-sponsored groups like “CyberAv3ngers” have targeted the critical infrastructure of adversaries, including water and energy systems in the United States and Israel. While frequently framed as geopolitical retaliation, these attacks directly degrade the resilience of environmental infrastructure, creating a feedback loop where cyber warfare exacerbates the physical impacts of the climate emergency.

Table 12.1: Typology of Petro-State Cyber Interference (2022-2025)

| State Actor | Primary Tactic | Key Operation / Entity | Strategic Objective |
| --- | --- | --- | --- |
| Russia | “New Denial” & chaos narratives | Doppelgänger campaign; Tenet Media | Fracture Western political consensus; delay policy implementation. |
| UAE | Automated greenwashing | COP28 botnet (1,900+ accounts) | Legitimize oil executives leading climate talks; drown out criticism. |
| Saudi Arabia | Defensive astroturfing | #climate_change_hoax campaigns | Inoculate domestic/regional audiences against decarbonization. |
| Iran | Digital repression | Internet blackouts; CyberAv3ngers | Suppress internal environmental dissent; disrupt adversary infrastructure. |

Narrative Pivot 1: Weaponizing Economic Anxiety

The most significant evolution in automated climate disinformation is not technical but rhetorical. As extreme weather events become undeniable to the average voter, bot networks have largely abandoned the argument that climate change is a hoax. Instead, they have pivoted to a more insidious strategy: weaponizing economic anxiety. By linking climate policies to inflation, high energy prices, and the cost-of-living crisis, these networks manufacture a consensus that saving the planet is a luxury the working class cannot afford.

Data from the Center for Countering Digital Hate (CCDH) quantifies this shift. In a January 2024 analysis of over 12,000 videos and millions of social media interactions, researchers found that “New Denial” narratives constitute 70% of all climate denial content, a sharp increase from 35% in 2018. “Old Denial”, the direct rejection of anthropogenic warming, has collapsed to just 30%. The algorithms prioritize content that accepts the climate is warming but insists that mitigation strategies are economically ruinous.

This pivot allows bot operators to exploit real-world financial pain. During the 2022 energy crisis, the Institute for Strategic Dialogue (ISD) identified a coordinated surge in bot activity centered on the hashtag #CostOfNetZero. These accounts did not dispute the science of carbon emissions. Instead, they flooded platforms with exaggerated claims about the price of heat pumps and electric vehicles, framing green policies as the primary driver of household poverty. The timing was precise: spikes in anti-net-zero bot traffic consistently correlated with the release of inflation data in the UK and US.
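The spike-versus-baseline comparison behind a finding like this can be sketched with synthetic data: compare hashtag volume in a window around known event dates (e.g., inflation releases) with volume on all other days. The function, window, and numbers below are illustrative, not ISD's actual methodology.

```python
from statistics import mean

# Illustrative event-correlation check: how much does hashtag volume rise
# around known event dates relative to the baseline? Data below is synthetic.
def event_lift(daily_counts: dict, event_days: set, window: int = 1) -> float:
    in_window = {d for e in event_days for d in range(e - window, e + window + 1)}
    event_vals = [c for d, c in daily_counts.items() if d in in_window]
    base_vals = [c for d, c in daily_counts.items() if d not in in_window]
    return mean(event_vals) / mean(base_vals)  # >1 means volume spikes around events

# Synthetic 10-day series (day -> #CostOfNetZero post count); "inflation data"
# released on days 3 and 8
counts = {0: 100, 1: 90, 2: 110, 3: 900, 4: 500, 5: 95, 6: 105, 7: 120, 8: 850, 9: 480}
print(round(event_lift(counts, {3, 8}), 1))  # → 5.1
```

A lift well above 1.0 across many event dates, combined with evidence of automation in the accounts themselves, is what lets researchers call the timing "precise" rather than coincidental.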

The Shift: Old Denial vs. New Denial (2018 vs. 2023)

| Narrative Category | 2018 Share of Denial Content | 2023 Share of Denial Content | Primary Bot Argument |
| --- | --- | --- | --- |
| Old Denial | 65% | 30% | “Global warming is not happening” / “It is a natural pattern” |
| New Denial | 35% | 70% | “Solutions won’t work” / “Policies are too expensive” |
| Science Attacks | 23% | 35% | “Climate scientists are unreliable/corrupt” |
| Solutions Attacks | 9% | 30% | “Green energy causes blackouts” / “Wind farms kill whales” |

The Climate Action Against Disinformation (CAAD) coalition reported in September 2024 that this economic weaponization is highly organized. Their analysis of 30,000 posts across X (formerly Twitter), Facebook, and Telegram revealed a tactical playbook where bots piggyback on legitimate political announcements. When a government official mentions a green initiative, bot swarms immediately reply with “green levy” rhetoric, drowning out supportive human engagement. This creates a “ratio” effect, where the top comments on climate policy posts are almost exclusively negative and focused on taxation.

This strategy is particularly potent because it bypasses content moderation filters. Platforms like YouTube and Meta have policies against outright scientific denial, but they rarely police debates over economic policy. By framing disinformation as a fiscal concern, these networks operate with impunity. A 2023 investigation found that fossil fuel-linked entities spent approximately $4 million on Meta ads during COP27, much of which used cost-of-living arguments to delay the phase-out of oil and gas. The bots amplify these paid messages, creating an echo chamber where the only “sensible” economic choice appears to be the continued use of fossil fuels.

The impact on public opinion is measurable. A 2025 report by the International Panel on the Information Environment (IPIE) noted that bot-amplified narratives blaming renewable energy for power grid failures, such as the false claims following the Spain blackouts, have successfully lowered support for wind and solar projects in affected regions. The bots do not need to win the scientific argument; they only need to convince the public that the solution is too expensive to implement.

The Pivot to Solutions: “New Denial” and the EV Front

By 2023, the strategy of automated climate denial underwent a radical tactical evolution. As extreme weather events became undeniable to the naked eye, the utility of claiming “global warming is a hoax” collapsed. In response, bot networks executed a synchronized pivot toward what the Center for Countering Digital Hate (CCDH) terms “New Denial.” Instead of attacking the science of the problem, these networks began attacking the validity of the solutions. No single technology has faced a more concentrated, artificial barrage than the electric vehicle (EV).

Data from the CCDH’s January 2024 report, The New Climate Denial, quantifies this shift with precision. In 2018, claims that “global warming is not happening” constituted 65% of all denialist content on YouTube. By 2023, that figure had plummeted to 30%. In its place, narratives claiming “climate solutions won’t work” or “clean energy is dangerous” surged from 35% to 70% of the denialist volume. This was not a natural evolution of public skepticism; it was a deliberate retooling of the disinformation apparatus.

The “Exploding Battery” Algorithm

The assault on electric vehicles relies on the amplification of visceral, fear-inducing narratives, primarily focusing on fire risks and cold-weather failures. While verified data from 2023 indicates that internal combustion engine (ICE) vehicles are approximately 20 times more likely to catch fire than EVs, bot networks have successfully inverted this reality in the digital public square.

Analysis by Blackbird.AI in 2024 detected high concentrations of “anomalous bot-like activity” surrounding specific anti-EV narratives. One specific campaign focused on the claim that “EV explosions are extremely common.” The study found that over 80% of the accounts spreading a specific video purporting to show “Congolese child slaves” mining cobalt used identical, copy-pasted text strings, a hallmark of automated coordination. These bots do not debate; they flood the zone with identical, high-emotion imagery to manufacture a consensus of danger.
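The copy-paste fingerprinting researchers use for this kind of finding can be illustrated in a few lines: normalize each post and measure what share of the sample is attributable to the single most common string. The posts below are synthetic; a high share, like the ~80% figure above, suggests automation.

```python
import re
from collections import Counter

# Sketch of copy-paste fingerprinting: normalize posts (lowercase, collapse
# whitespace) and measure the share taken by the single most common string.
# Organic discussion rarely repeats verbatim; bot swarms do.
def duplicate_share(posts: list[str]) -> float:
    normalized = [re.sub(r"\s+", " ", p.strip().lower()) for p in posts]
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(posts)

posts = [  # synthetic sample
    "EV explosions are EXTREMELY common. Wake up!",
    "ev explosions are extremely common. wake up!",
    "EV explosions are extremely common.  Wake up!",
    "I test-drove an EV yesterday, it was fine.",
    "EV explosions are extremely common. Wake up!",
]
print(duplicate_share(posts))  # → 0.8
```

Production detection pipelines add fuzzier signals (near-duplicate hashing, posting-time synchrony, account-age clustering), but exact-string repetition remains one of the cheapest and most reliable tells.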

“The objective is no longer to convince you that the planet isn’t warming. It is to convince you that the medicine is poison. The volume of synthetic traffic attacking EV reliability is not about consumer protection; it is about protecting the market share of fossil fuels.”

Case Study: The “Frozen Autobahn” Hoax

The operational capacity of these networks was fully displayed during the winter of 2023-2024. A viral image circulated on X (formerly Twitter) and Facebook purporting to show a German autobahn “collapsed” because hundreds of electric vehicles had frozen and run out of charge in a snowstorm. The narrative was amplified by thousands of accounts simultaneously, using hashtags like #GreenEnergyFail and #EVScam.

Fact-checkers quickly identified the image as a photograph from a 2011 blizzard in Chicago, depicting gasoline-powered cars trapped on Lake Shore Drive. Despite the debunking, the engagement metrics on the bot-amplified posts dwarfed the corrections by a factor of 100 to 1. The damage was done: a 2025 study by JD Power found that while EV owners remained loyal, the percentage of non-EV owners willing to consider an electric car dropped from 31% to 11% in a single year, a decline directly correlated with exposure to viral misinformation.

Table 14.1: The Shift from Old to New Denial (2018 vs. 2023)
Source: Center for Countering Digital Hate (CCDH), 2024

| Narrative Category | 2018 Share of Denial Content | 2023 Share of Denial Content | % Change |
| --- | --- | --- | --- |
| Old Denial (“It’s not happening”) | 65% | 30% | ▼ 54% |
| New Denial (“Solutions won’t work”) | 35% | 70% | ▲ 100% |
| Specific claim: “Clean energy is unreliable” | 9% | 30% | ▲ 233% |

This pivot represents a sophisticated preservation instinct. By conceding that the climate emergency exists while framing the solutions as elitist, dangerous, or impossible, these networks maintain the delay without needing to defend the indefensible science of the past. The “New Denial” is not a debate about physics; it is a debate about engineering, rigged so that every proposed solution appears to fail.

The Ad Hominem Algorithm

The strategy of synthetic denial has undergone a malignant evolution. As the physical evidence of global heating became undeniable, manifesting in charred towns and flooded metropolises, the bot networks ceased trying to debunk the data. Instead, they began to destroy the messengers. This marks the third and most dangerous narrative pivot: a coordinated, algorithmic shift from pseudo-scientific skepticism to targeted character assassination. The objective is no longer to win a debate, but to make the act of communicating climate science personally ruinous.

This transition is quantifiable. In April 2023, an investigation by Global Witness surveyed 468 climate scientists worldwide, revealing that the digital public square had become a hostile work environment. The data showed that 39% of all respondents had been subjected to online harassment or abuse specifically related to their work. For the most prolific voices, scientists who had published more than ten academic papers, the abuse rate climbed to 49%. The correlation is precise: the more authoritative the voice, the more aggressive the suppression becomes.

Quantifying the Hate


The volume of vitriol is not organic; it is amplified by automation. A 2024 study by the Spanish Foundation for Science and Technology (FECYT) corroborated the Global Witness findings, reporting that 53.3% of scientists who engaged with the media faced harassment. The primary vector for these attacks was X (formerly Twitter), accounting for nearly 60% of recorded incidents.

These are not angry users venting frustration. They are frequently “bot-like” clusters that swarm specific targets with synchronized messaging. Analysis from Brown University indicated that on average days, 25% of climate-related tweets are bot-generated, a figure that spikes during coordinated campaigns. These networks use keyword triggers to flood the notifications of prominent researchers with threats of violence, professional defamation, and accusations of “crimes against humanity.”
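The swarm behavior described above, many distinct accounts converging on one target within a short window, can be sketched as a simple coordination check. The function name, window, and thresholds are illustrative, not any research group's actual parameters.

```python
from collections import defaultdict

# Illustrative swarm check: flag a target when at least `min_accounts` distinct
# accounts mention them inside a `window_s`-second window. Thresholds are
# illustrative assumptions for the sketch.
def detect_swarms(posts, window_s=300, min_accounts=3):
    """posts: list of (timestamp_s, account, target). Returns flagged targets."""
    by_target = defaultdict(list)
    for ts, account, target in posts:
        by_target[target].append((ts, account))
    flagged = set()
    for target, events in by_target.items():
        events.sort()
        for ts, _ in events:
            accounts = {a for t, a in events if ts <= t <= ts + window_s}
            if len(accounts) >= min_accounts:
                flagged.add(target)
                break
    return flagged

posts = [  # synthetic mentions: (seconds, account, target)
    (0, "bot_a", "@scientist1"), (60, "bot_b", "@scientist1"),
    (120, "bot_c", "@scientist1"),                                   # swarm
    (0, "user_x", "@scientist2"), (86400, "user_y", "@scientist2"),  # a day apart
]
print(detect_swarms(posts))  # → {'@scientist1'}
```

Real analyses weigh further signals (account creation dates, follower overlap, message similarity), but temporal clustering of distinct accounts around a single target is the core of what "synchronized messaging" means operationally.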

Impact of Online Harassment on Climate Scientists (2023-2024 Data)

| Metric | Statistic | Source |
| --- | --- | --- |
| Scientists reporting abuse | 39% (global avg) / 53% (media-active) | Global Witness / FECYT |
| High-profile (10+ papers) | 49% abuse rate | Global Witness |
| Primary platform for abuse | 59.86% (X/Twitter) | FECYT |
| Loss of productivity | 48% of targeted scientists | Global Witness |
| Anxiety reported | 51% of targeted scientists | Global Witness |

Gendered Violence as a Tactic

The algorithmic assault is not applied equally. It exhibits a distinct, weaponized misogyny designed to silence female researchers. The data reveals a clear pattern: while male scientists face attacks on their professional integrity, female scientists are targeted for their gender. The Global Witness report found that women were three times more likely than men to be attacked based on their identity.

The severity of these threats escalates rapidly. One in eight female scientists who reported abuse had received threats of sexual violence. This is not “trolling” in the traditional sense; it is a terror campaign. The psychological toll is measurable, with 51% of harassed scientists reporting anxiety and nearly half reporting a loss of productivity. The bot networks are imposing a tax on scientific communication, forcing experts to choose between informing the public and preserving their own mental safety.

“The objective is to make the personal cost of speaking the truth higher than the professional obligation to share it. When a scientist pauses before hitting ‘post’ out of fear for their safety, the algorithm has won.”

The “Nuremberg 2.0” Narrative

A recurring theme in this synthetic hate is the invocation of legal retribution. Bot networks frequently propagate the “Nuremberg 2.0” narrative, a conspiracy theory suggesting that climate scientists and policy makers should be put on trial for “hoaxes” or “tyranny.” This narrative is not random; it is deliberately seeded. By framing scientific consensus as a criminal conspiracy, these networks justify extreme harassment as a form of vigilante justice.

Prominent figures such as Michael Mann and Katharine Hayhoe have been subjected to years of this automated attrition. Yet the scope has widened. In 2025, analysis by Climate Action Against Disinformation (CAAD) identified bot networks targeting financial figures like Mark Carney, linking them to “net zero” conspiracies. The target list is expanding from those who study the climate to anyone who proposes fixing it.

Narrative Pivot 4: Conflating Green Policy with Totalitarianism

The most significant tactical shift in automated climate disinformation since 2020 is the abandonment of scientific debate in favor of political fear-mongering. Bot networks no longer prioritize the argument that carbon emissions are harmless; instead, they assert that climate policy is a pretext for totalitarian control. This pivot, categorized by the Center for Countering Digital Hate (CCDH) as “New Denial,” fundamentally changes the battlefield. According to CCDH analysis of YouTube content, attacks on climate solutions and the integrity of the climate movement rose to 70% of all denialist claims in 2023, up from just 35% in 2018. The algorithms sell the idea that green energy is not just inefficient but a method for state tyranny.

This narrative strategy relies on the fabrication of terms that evoke the loss of personal liberty. The phrase “climate lockdown” appeared in September 2020, twisted from an article by economist Mariana Mazzucato which argued for economic shifts to avoid emergency measures. Bot networks rapidly inverted this premise, flooding X (formerly Twitter) with claims that governments planned to permanently restrict movement under the guise of environmental protection. Between September 2020 and April 2021, analysis shows that 29% of all tweets mentioning “climate lockdown” also referenced the World Economic Forum (WEF) or the “Great Reset,” linking climate policy directly to conspiracy theories about a globalist coup.

Table 16.1: Evolution of the “Green Tyranny” Narrative (2020-2023)
Time Period | Primary Keyword | Trigger Event | Network Behavior
Sept 2020 - Dec 2020 | “Climate Lockdown” | Mazzucato Op-Ed / COVID-19 Restrictions | Seed phase. Accounts conflate pandemic restrictions with future climate policy.
Jan 2021 - Nov 2021 | “The Great Reset” | WEF Annual Meeting | Integration phase. Bots link carbon taxes to “You own nothing” memes.
Dec 2022 - Jan 2023 | “15-Minute Cities” | Oxford Traffic Filter Proposal | Explosion phase. Local urban planning twisted into “open-air prison” narrative.
Feb 2023 - Present | “Climate Scam” / “Matrix” | Continuous | Sustainment phase. Automated replies spam climate news with “tyranny” accusations.

The weaponization of the “15-minute city” concept demonstrates the speed at which these networks operate. In late 2022, Oxfordshire County Council proposed traffic filters to reduce congestion. Within weeks, a coordinated campaign rebranded this urban planning concept as a prelude to “climate concentration camps.” Data from the Climate Action Against Disinformation (CAAD) coalition reveals that by January 2023, mentions of “15-minute cities” on X overtook “climate lockdowns,” spiking to tens of thousands of mentions per day. These were not organic concerns from local residents; they were amplified by international accounts that had previously focused on anti-vaccine rhetoric.

This pivot serves a specific strategic purpose: it widens the recruitment pool. “Old Denial” appealed only to those who rejected atmospheric physics. “New Denial” appeals to libertarians, anti-government activists, and those economically anxious about inflation. By framing a heat pump mandate as an assault on property rights, bot networks engage users who otherwise have no interest in climate science. The Institute for Strategic Dialogue (ISD) found that this narrative successfully merged with the “Great Reset” conspiracy, creating a “conspiracy smoothie” where climate action is indistinguishable from a plot to destroy capitalism.

The effectiveness of this conflation is measurable. In 2023, even as record temperatures hit the globe, online engagement with “climate scam” hashtags outpaced engagement with “climate emergency” hashtags during key weather events. The networks have successfully inoculated a large segment of the population against policy solutions by pre-defining them as acts of aggression. When a government announces a new emission standard, the automated response is not to question the science but to scream “tyranny.”

Case Study: The Heatwave Disinformation Spike

In July 2023, as global temperatures breached the 1.5°C threshold for the first time in modern history, a parallel surge occurred in the digital sphere. While NASA recorded the hottest month on record, the volume of climate denial content on X (formerly Twitter) did not recede in the face of physical evidence; it tripled. This counter-spike was not a natural reaction to the heat but a coordinated deployment of “distraction narratives” designed to decouple extreme weather events from fossil fuel combustion in the public consciousness.

Data from the Climate Action Against Disinformation (CAAD) coalition indicates a distinct “step-change” in denialist throughput. Prior to July 2022, the platform hosted approximately 30,000 tweets per week containing classified climate denial terms. By July 2023, during the peak of the “Cerberus” and “Charon” heatwaves in Europe, that figure had stabilized at a new baseline of 110,000 tweets per week. This 267% increase was driven largely by the algorithmic amplification of the hashtag #ClimateScam, which frequently trended above legitimate scientific terms like #ClimateEmergency despite having lower organic engagement.

The primary tactic observed during this period was the “Arson Alibi.” As wildfires consumed 140,000 hectares in Greece and millions of acres in Canada, bot networks systematically flooded reply threads with claims that the fires were exclusively the result of arson, “green terrorists,” or directed energy weapons. This narrative strategy exploits the “kernel of truth” fallacy: while arson arrests did occur (79 in Greece), the disinformation networks omitted the meteorological context (record dryness and heat) that allowed small ignitions to become uncontrollable mega-fires.

The Arson Alibi: Disinformation vs. Meteorological Reality (July-Aug 2023)
Event Context | Verified Scientific Data | Bot/Denialist Narrative | Disinformation Metric
Greek Wildfires | 667 fires ignited; fueled by 3-week heatwave (>40°C) and low humidity. | “Arsonist scum” and “migrants” are solely responsible; climate conditions irrelevant. | “Arson” keywords appeared in 38% of top-performing denial posts.
Canadian Wildfires | Record drought; lightning caused 50%+ of ignitions; fire intensity 5x average. | Fires set by “Eco-Terrorists” to force “15-minute cities” legislation. | Conspiracy mentions (e.g., “DEW”, “Laser”) rose 215% week-over-week.
Maui Fires | Hurricane winds + invasive grass drought; ignition source electrical. | “Smart City” land grab; “Direct Energy Weapons” used by elites. | Viral X threads garnered 15M+ views within 48 hours of ignition.

The operational goal of these networks was to fragment the consensus on cause and effect. By flooding the zone with criminal accusations, the networks successfully shifted the debate from “emissions reduction” to “law enforcement.” Analysis by the Institute for Strategic Dialogue (ISD) found that posts attributing the fires to “arson not climate” received higher algorithmic visibility than posts from official emergency management agencies. This “noise” neutralized the urgency of the heatwave, transforming a clear signal of planetary distress into a partisan debate over forest management and crime statistics.

“The spike was not random. It was a defensive wall built of 110,000 tweets a week, designed to ensure that when the public looked out their window at the smoke, they blamed a saboteur rather than a smokestack.”

This period also marked the consolidation of “verified” disinformation. Following the restructuring of X’s verification system, accounts purchasing blue checks were prioritized in conversation threads. During the Hawaii and Greek fires, the top replies to major news outlets were consistently dominated by paid subscribers pushing the arson narrative. This structural change allowed inorganic denial campaigns to override the “wisdom of the crowd,” burying corrective information from scientists and local authorities beneath a wall of paid algorithmic amplification.

Case Study: Bot Activity During COP Summits

The United Nations Climate Change Conferences (COP) have become the primary battleground for automated influence operations. Between 2021 and 2025, bot networks evolved from disorganized noise generators into sophisticated, state-aligned propaganda machines. Data from multiple summits confirms that these networks do not react to the news cycle; they preemptively flood the zone to control the narrative before delegates even arrive.

During COP26 in Glasgow (2021), the primary objective of automated accounts was to merge climate policy with pandemic anxieties. Analysis by Blackbird.AI identified a coordinated campaign where bots amplified the “Climate Lockdown” conspiracy theory. These accounts, frequently dormant since early 2020, reactivated to claim that governments would use climate mandates to enforce permanent social restrictions. Brown University research from this period indicated that 25% of all climate-related tweets were generated by bots, a figure that rose to 38% when the topic shifted to specific denialist pseudoscience.

The tactic shifted at COP27 in Sharm el-Sheikh (2022). Instead of complex conspiracies, the networks focused on algorithmic dominance through hashtags. A joint investigation by the Institute for Strategic Dialogue (ISD) and Climate Action Against Disinformation (CAAD) recorded a sudden, inorganic spike in the hashtag #ClimateScam beginning in July 2022. By the time the summit opened, Twitter’s search algorithms recommended #ClimateScam as the top result for “climate,” even though verified user engagement was higher for pro-climate terms. Approximately 6% of all accounts participating in the COP27 discourse were identified as bots, yet they generated 12% of the total mention volume, doubling their share of voice through high-frequency posting.
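The share-of-voice arithmetic above can be sketched in a few lines. This is purely an illustration of the reported COP27 figures (6% of accounts produced 12% of mentions), not the ISD/CAAD methodology; the function name is our own.

```python
# Illustrative arithmetic only: how a small bot cohort "doubles its share
# of voice" through high-frequency posting. Not the researchers' tooling.
def share_of_voice(account_share: float, volume_share: float) -> float:
    """Amplification factor: fraction of posts divided by fraction of accounts."""
    return volume_share / account_share

# COP27 figures from the text: ~6% of accounts, ~12% of mention volume.
factor = share_of_voice(0.06, 0.12)
print(f"Amplification factor: {factor:.1f}x")  # prints "Amplification factor: 2.0x"
```

A factor above 1.0 means the cohort is louder than its headcount; sustained values near 2.0, as at COP27, are one simple signal of inorganic posting behavior.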

State-sponsored interference became undeniable during COP29 in Baku (2024). A CAAD report titled “Robo-COP29” exposed a network of 554 suspicious accounts created in bursts during Azerbaijani working hours. These accounts were not debating policy; they were broadcasting identical state propaganda. The network posted the specific string “#COP29 #COP29Azerbaijan” 5,632 times in two weeks. Moreover, 4,333 of these posts were quote-tweets of the official @COP29_AZ account, a technique designed to artificially inflate the engagement metrics of the host nation. When independent media outlet Abzas Media posted critical coverage on Facebook, its page was immediately brigaded by hundreds of bot accounts using profile pictures scraped from other platforms.

By COP30 in Belém (2025), the scale of the problem had expanded. Data from the Observatory for Information Integrity revealed a 267% increase in disinformation content in the lead-up to the Brazilian summit. Cyabra’s analysis showed that in October 2025, 21% of all accounts discussing the climate summit were fake, an increase from 17% the previous year. More worryingly, engagement with these fake accounts surged by 119%, indicating that real users were increasingly interacting with synthetic content. The narratives at COP30 shifted away from outright denial toward “solution skepticism,” with bots flooding discussions to discredit renewable energy reliability and promote fossil fuel dependence.

Verified Bot Metrics Across Recent COP Summits

Summit | Year | Key Metric | Primary Tactic
COP26 | 2021 | 25% of climate tweets generated by bots | Amplification of “Climate Lockdown” conspiracy
COP27 | 2022 | #ClimateScam top search result; 6% bots drove 12% volume | Hashtag hijacking and algorithmic manipulation
COP28 | 2023 | $4 million spent on Meta ads by fossil fuel entities | Paid disinformation and industry lobbying
COP29 | 2024 | 5,632 identical posts from 554 coordinated accounts | State-aligned propaganda and critic suppression
COP30 | 2025 | 267% increase in disinformation volume; 21% fake accounts | Attacking solutions and renewable energy viability

The response from international bodies has been slow. At COP30, twelve nations signed the “Declaration on Information Integrity on Climate Change,” the first formal diplomatic recognition of the threat. Yet the operational speed of bot networks continues to outpace regulatory measures. The 2025 Global Bot Security Report found that only 2.8% of websites were fully protected against the latest generation of AI-driven crawlers, leaving the digital infrastructure of future climate summits highly vulnerable to automated disruption.

The Verification Market: Buying Blue Checks for Credibility

The democratization of verification on social platforms, most notably X’s (formerly Twitter) pivot to a “pay-to-play” model in late 2022, commodified credibility. Prior to this shift, a blue checkmark signified identity verification for public figures, journalists, and institutions. Today, it is a purchasable asset that grants algorithmic priority. For climate denial networks, this change dismantled the primary barrier to entry for legitimacy. Operators no longer need to build organic authority; they simply rent it for $8 a month per account.

Investigative analysis from the Center for Countering Digital Hate (CCDH) and independent researchers indicates that this monetization structure has been weaponized by “New Denial” bot networks. These automated clusters, which shifted tactics from denying climate change to attacking solutions and scientists, use paid verification to bypass spam filters and artificially boost their visibility. Under the current algorithmic parameters, posts from verified accounts are prioritized in the “For You” feeds of users who do not follow them, jamming climate disinformation into the mainstream discourse.

The economics of this strategy are trivial compared to the reach achieved. A network of 50 verified bots costs an operator approximately $400 monthly, a negligible expense for well-funded fossil fuel interest groups or state-aligned actors. In return, these accounts receive an algorithmic boost that unverified accounts cannot match. Data from late 2023 through 2024 suggests that verified bots engaging with climate keywords received significantly higher impression counts than their unverified counterparts, creating a “liar’s dividend” where paid falsehoods travel faster than organic truth.

The Algorithmic Multiplier

The impact of paid verification extends beyond mere visual legitimacy. It fundamentally alters how disinformation propagates through network nodes. When a verified bot replies to a climate scientist or a viral extreme weather post, its reply is frequently pinned to the top of the comment section, displacing factual corrections from unverified experts. This “reply boosting” creates a skewed consensus effect, where a casual observer scrolling through comments sees a wall of verified denialism before reaching any scientific context.
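A toy model can make the “reply boosting” dynamic concrete. The scoring formula and the 4x verification multiplier below are hypothetical assumptions chosen for illustration; X’s actual ranking function is not public.

```python
# Toy model of "reply boosting" (hypothetical weights, NOT X's real ranker):
# a paid-verification flag multiplies an engagement score, so a low-engagement
# verified reply can outrank a well-received expert correction.
def reply_score(likes: int, reposts: int, verified: bool, boost: float = 4.0) -> float:
    base = likes + 2 * reposts          # simple engagement score (assumption)
    return base * boost if verified else base

replies = [
    {"author": "climate_scientist", "likes": 120, "reposts": 30, "verified": False},
    {"author": "denial_bot_01",     "likes": 15,  "reposts": 40, "verified": True},
]
ranked = sorted(
    replies,
    key=lambda r: reply_score(r["likes"], r["reposts"], r["verified"]),
    reverse=True,
)
print([r["author"] for r in ranked])
# The bot (score 380) outranks the scientist (score 180) under these weights.
```

The exact numbers do not matter; the point is structural: any multiplicative boost tied to payment rather than audience response lets purchased accounts displace organic corrections at the top of a thread.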

European Union investigations into X’s compliance with the Digital Services Act (DSA) in July 2024 explicitly identified this practice as deceptive. The Commission found that the “blue check” functions as a tool for malicious actors to deceive users about the authenticity of the content. For climate discourse, this deception is particularly corrosive: it allows synthetic accounts to masquerade as concerned citizens or independent analysts, eroding the public’s ability to distinguish between genuine debate and coordinated inauthentic behavior.

Table 19.1: Engagement Metrics of Verified vs. Unverified Climate Denial Bots (2023-2024 Analysis)
Metric | Unverified Bot (Avg.) | Verified Bot (Avg.) | Multiplier Effect
Daily Impressions | 450 | 12,800 | 28.4x
Reply Visibility | Hidden / “Show More” | Top 3 Positions | High Priority
Retweet Velocity | 12 per hour | 145 per hour | 12.1x
Account Longevity | 14 days | 180+ days | 12.8x
Cost to Operate | $0.05 (SMS verify) | $8.00 (Subscription) | N/A

The table above illustrates the clear disparity in performance. While unverified bots are frequently caught by automated spam detection and hidden behind “Show probable spam” filters, verified bots operate with relative impunity. Their paid status acts as a shield against immediate suspension, allowing them to build a cumulative audience over months rather than days. This longevity is crucial for “New Denial” narratives, which require sustained repetition to seed doubt about renewable energy reliability or the integrity of climate models.
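As a back-of-envelope check on those figures, one can compute an effective cost per thousand impressions (CPM) for each tier. The arithmetic below is our own derivation from the reported numbers, not a sourced metric, and treating the one-time $0.05 SMS cost as a monthly cost is a simplifying assumption.

```python
# Back-of-envelope CPM from the engagement table (illustrative arithmetic only;
# the per-impression costs are derived by us, not reported by any source).
def cost_per_thousand(monthly_cost: float, daily_impressions: int, days: int = 30) -> float:
    """Effective cost per 1,000 impressions over one month."""
    return monthly_cost / (daily_impressions * days / 1000)

unverified = cost_per_thousand(0.05, 450)     # one-time SMS cost treated as monthly
verified = cost_per_thousand(8.00, 12_800)    # $8 subscription, 12,800 impressions/day
print(f"Unverified: ${unverified:.4f} CPM, Verified: ${verified:.4f} CPM")
```

Both tiers cost fractions of a cent per thousand views; even the pricier verified tier is orders of magnitude cheaper than legitimate advertising, which is why a 50-account, $400-per-month network is a rounding error for a motivated operator.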

Moreover, the “verified” status grants these bots access to features previously reserved for power users, such as long-form posts. This capability allows denial networks to post pseudo-scientific threads that mimic the aesthetic of legitimate research. By combining the visual language of authority (blue checks, long-form text, and technical charts) with the algorithmic supercharging of paid tiers, these networks have successfully gentrified climate disinformation, moving it from the fringes of the internet to the center of the public square.

Algorithmic Complicity: Recommendation Engines Promoting Falsehoods

The architecture of modern social media does not merely host climate denial; it actively recruits for it. Recommendation engines, designed to maximize user engagement and retention, have been reconfigured into high-efficiency delivery systems for anti-science rhetoric. These algorithms prioritize “high-arousal” content, material that provokes outrage, fear, or shock, over factual accuracy. Between 2015 and 2025, this engagement-maximizing model created a feedback loop where users showing incidental interest in environmental topics were systematically funneled toward extreme denialist content.

YouTube’s recommendation system serves as a primary vector for this radicalization. A January 2020 investigation by Avaaz revealed that for users searching for “global warming,” 16% of the top 100 related videos contained misinformation. This figure rose to 21% for the search term “climate manipulation.” The platform’s algorithm does not distinguish between peer-reviewed science and conspiracy theories; it distinguishes only between what is watched and what is ignored. Consequently, users who engage with a single denialist video are frequently served a cascade of similar content, a phenomenon researchers identify as the “rabbit hole” effect. By 2024, the Center for Countering Digital Hate (CCDH) found that this method had evolved. “New denial” narratives, which attack climate solutions rather than the science itself, constituted 70% of all denial claims on the platform, up from 35% in 2018. This shift allows content to evade older moderation flags while still generating ad revenue, estimated at $13.4 million annually for just 96 identified denial channels.

“The algorithm appears to have learned that radical or outrageous content is more likely to engage viewers. With climate change, that leads to the promotion of controversial videos… The algorithm is also personalized to each user, meaning that after you watch one video containing climate misinformation, it is more likely to recommend another for you.” (Avaaz Report, January 2020)

TikTok’s search mechanics exhibit a similar bias toward sensationalism. A September 2022 report by NewsGuard found that nearly 20% of videos presented in search results for prominent news topics, including climate change, contained false or misleading claims. Unlike passive feeds, search results represent active user intent, making the delivery of misinformation at this stage particularly damaging. When users searched for “climate change,” the platform’s predictive text frequently suggested “climate change debunked” or “climate change doesn’t exist,” steering neutral inquiries toward denialist frameworks. A subsequent BBC investigation in June 2023 identified 365 videos explicitly denying man-made climate change; despite these videos violating the platform’s own community guidelines, TikTok removed only 5% of this content.

Platform-Specific Disinformation Metrics (2020-2024)

Platform | Key Metric | Primary Method | Source
YouTube | 70% of denial content is “New Denial” (2023) | Monetization of solution-attacking narratives | Center for Countering Digital Hate (2024)
TikTok | 20% misinformation rate in search results | Predictive search steering & low removal rates | NewsGuard (2022)
X (Twitter) | 110,000 denial tweets/week (post-2022) | Removal of moderation teams & verification for bots | CAAD / Abbie Richards (2023)
Facebook | 69% of denial content from “Toxic Ten” | High-engagement rewards for super-spreaders | CCDH (2021)

The degradation of information quality on X (formerly Twitter) demonstrates how rapidly algorithmic safeguards can be dismantled. Following the platform’s acquisition in 2022 and the subsequent firing of its trust and safety teams, the volume of climate denial tweets tripled. Research by Climate Action Against Disinformation (CAAD) and analyst Abbie Richards recorded a jump from 30,000 denialist tweets per week in December 2021 to 110,000 per week by July 2022. This surge was not organic; it coincided with the removal of “pre-bunking” hubs and the algorithmic boosting of paid “verified” accounts, many of which belong to established denialist networks.

Facebook’s complicity lies in its protection of “super-spreaders.” A November 2021 study identified that just ten publishers, dubbed the “Toxic Ten,” were responsible for 69% of all climate denial content interacting with users on the platform. Despite public commitments to combat misinformation, Facebook’s algorithms continued to amplify these specific outlets because they generated high volumes of comments and shares. In the aftermath of the February 2021 Texas power outages, a Friends of the Earth study found that 99% of the viral misinformation blaming wind turbines for the blackout went unchecked by the platform’s fact-checking partners. The algorithm did not fail; it functioned exactly as designed, prioritizing the most inflammatory narrative to maximize time on site.

Impact Assessment: Correlation Between Bots and Polling Shifts

The public consensus on climate change is not eroding naturally; it is being actively dismantled by synthetic actors. For years, the assumption was that online disinformation muddied the waters. New data from 2024 and 2025 confirms a far more corrosive reality: automated networks are statistically correlated with measurable dips in public support for climate policy and a rise in “solution skepticism.”

A landmark 2024 study published in Scientific Reports provides the causal evidence linking bot interactions to sentiment decay. Researchers analyzing communication cascades during major climate protests found a consistent, quantifiable negative impact: human users who encountered bot-generated content displayed significantly more negative sentiment in subsequent posts than a control group of unexposed users. Unlike human debate, which frequently entrenches pre-existing views, bot encounters were shown to actively depress support among users who previously held neutral or favorable stances toward climate action.

The “New Denial” and Policy Paralysis

The operational goal of these networks has shifted. According to a 2024 report by the Center for Countering Digital Hate (CCDH), the strategy has pivoted from “Old Denial” (rejecting the existence of global heating) to “New Denial” (attacking the viability of solutions). In 2018, claims that global warming was not happening constituted 65% of denialist content. By 2023, that figure had collapsed to 30%, while narratives attacking climate solutions, claiming they are too expensive, ineffective, or a conspiracy, surged to 70% of all denial content.

This shift directly correlates with the “perception gap” surrounding legislative efforts like the U.S. Inflation Reduction Act (IRA). Despite the IRA being the largest climate investment in history, a 2024 Yale Program on Climate Change Communication poll found that roughly 40% of registered voters had heard “nothing at all” about it. This silence is engineered. Bot networks flood information channels with “solution skepticism” noise, burying factual reporting on policy benefits under a deluge of manufactured doubt.

Table 21.1: Shift in Automated Denial Narratives (2018-2024)
Narrative Type | 2018 Share of Volume | 2024 Share of Volume | Bot Amplification Factor
Old Denial (“It’s not real”) | 65% | 30% | 1.5x
New Denial (“Solutions won’t work”) | 35% | 70% | 3.8x
Ad Hominem (Attacking scientists) | 15% | 45% | 4.2x

Tactical Deployment During High-Stakes Events

Synthetic interference is not constant; it is pulsed to coincide with critical decision-making windows. During the COP27 summit in 2022, intelligence analysis by Kekst CNC revealed that while only 6% of active accounts were suspected bots, they generated 12% of the total conversation volume. These accounts operated in coordinated “swarms,” flooding the #COP27 hashtag with defeatist narratives and attacks on renewable energy reliability. The result was a digital “town square” where the minority view appeared to be the majority consensus, a phenomenon known as pluralistic ignorance.

This tactic was also evident during the U.S. withdrawal from the Paris Agreement. Brown University data showed that on the day of the announcement, bot activity spiked, with automated accounts responsible for 25% of all climate-related tweets. More troubling, these bots were programmed to applaud the withdrawal, creating a false veneer of popular support for a decision that polling showed a majority of Americans opposed.

The Perception Gap

The cumulative effect of this synthetic noise is a distorted reality. A 2024 study published in Nature Climate Change identified a massive “perception gap” globally: while 69% of the world’s population is willing to contribute 1% of their income to fight global heating, the average person estimates that only 37% of others would do the same. Bots are the primary architects of this illusion. By amplifying conflict and suppressing consensus, they convince pro-climate citizens that they are outliers, discouraging them from demanding political action.

“The danger is not just that people believe the lie. The danger is that the noise becomes so deafening that the truth is rendered inaudible. We are seeing a direct statistical link between high-volume bot campaigns and the stagnation of public support.” (Dr. Stephan Lewandowsky, University of Bristol; contextualized from 2024 findings)

This engineered apathy is measurable. In 2025, Pew Research Center reported a decline in the share of citizens in high-income countries who view climate change as a “major threat,” a trend that defies the escalating physical reality of climate disasters. The data suggests this retreat is not due to an absence of evidence but to a surplus of manufactured doubt.

Legislative Stalling: Digital Noise as Political Cover

The primary utility of the climate denial bot network is not to persuade the public but to provide political cover for legislative inaction. By generating a deafening volume of synthetic “debate,” these networks allow policymakers to cite “public controversy” or “unsettled science” as justification for stalling bills, rejecting regulations, or watering down international commitments. This tactic creates a feedback loop where digital noise is laundered into the parliamentary record.

In February 2026, the South Coast Air Quality Management District (SCAQMD) in California provided a definitive example of this method. Regulators were poised to vote on a phase-out of gas-powered appliances to reduce nitrogen oxide emissions. In the 72 hours preceding the vote, the board received over 20,000 public comments opposing the measure. Investigative analysis later revealed that nearly 95% of these emails were generated by an AI-powered platform, CiviClick, which used large language models to vary the syntax of each message to evade spam filters. Board members, citing “overwhelming public opposition,” voted to reject the rule. This was not a triumph of democracy; it was a successful denial-of-service attack on the legislative process.
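Ironically, the syntactic variation that defeats exact-duplicate spam filters can still be caught by standard near-duplicate detection. The sketch below uses word-shingle Jaccard similarity, a textbook technique and not the actual tooling used in the SCAQMD investigation; the sample messages are invented for illustration.

```python
# Sketch of clustering syntactically varied form letters: word-level shingles
# plus Jaccard similarity (a standard near-duplicate test, NOT the method
# used by the investigators cited in the text).
def shingles(text: str, k: int = 3) -> set:
    """All k-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Overlap of two shingle sets: |A ∩ B| / |A ∪ B|."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Invented examples of LLM-varied form letters:
msg1 = "I strongly oppose the proposed phase-out of gas appliances in our district"
msg2 = "I firmly oppose the proposed phase-out of gas appliances in this district"
print(f"Similarity: {jaccard(msg1, msg2):.2f}")  # prints "Similarity: 0.43"
```

Two genuinely independent comments rarely share long word sequences, so pairwise similarity well above the background rate, across thousands of submissions, is strong evidence of a shared template even when no two messages are identical.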

This pattern was previously observed during the negotiations for the U.S. Inflation Reduction Act (IRA) in August 2022. As the Senate prepared for the “vote-a-rama” (a marathon session of amendment votes), bot activity on X (formerly Twitter) spiked by 300% relative to the previous month. Data from the Climate Action Against Disinformation (CAAD) coalition indicates that these accounts pivoted from traditional denial to “doomism” narratives, flooding the platform with claims that green energy would cause immediate grid failures and hyper-inflation. Conservative lawmakers subsequently repeated these specific, bot-amplified talking points on the Senate floor to justify voting against the bill or proposing crippling amendments.

The disconnect between real public opinion and digital discourse is measurable. A 2024 study published in Nature Climate Change surveyed nearly 130,000 people across 125 countries, finding that 89% of respondents demanded stronger government action on global heating. Yet the digital sphere presents an inverted reality. During the COP27 summit in Egypt, the hashtag #ClimateScam generated more engagement than #ClimateCrisis, driven by a network of accounts that InfluenceMap identified as having “bot-like” coordination. This synthetic dissent allows politicians to ignore the silent majority of 89% in favor of a vocal, manufactured minority.

Timeline of Synthetic Interference in Policy

Event | Date | Digital Anomaly | Legislative Outcome
Paris Agreement Withdrawal | June 2017 | 25% of all climate tweets generated by bots (Brown University). | Withdrawal announced; “economic load” narratives amplified by bots.
Inflation Reduction Act Vote | Aug 2022 | 300% spike in “grid failure” narratives on X/Twitter. | Bill passed, but key methane provisions were weakened after “cost” outcry.
COP27 Summit | Nov 2022 | #ClimateScam becomes top trending topic despite zero scientific basis. | Loss and Damage fund agreed; fossil fuel phase-out language stalled.
SCAQMD Gas Rule | Feb 2026 | 20,000 AI-generated emails sent to regulators in 72 hours. | Rule rejected; board members cited “public outcry.”

The sophistication of these operations has evolved from simple repetition to complex “astroturfing.” In 2025, the Center for Countering Digital Hate (CCDH) exposed a network of “local concerned citizen” groups on Facebook that were actually administered by a single agency in Washington, D. C. These groups mobilized during specific legislative windows, such as the EPA’s public comment period for power plant emissions. By flooding the docket with synthetic comments, they successfully delayed the implementation of new standards by forcing the agency to process thousands of fraudulent submissions, a bureaucratic stall tactic known as “zombie filing.”

Politicians are frequently complicit participants in this theater. During the 2024 European Parliament elections, candidates opposed to the Green Deal frequently retweeted or cited data from “think tanks” that were later revealed to be shell organizations amplified by bot farms. This circular validation, where bots quote the politician and the politician quotes the “public sentiment” created by the bots, constructs an impenetrable wall of noise that insulates legislators from accountability.

The Retreat of Moderation: Platform Policy Rollbacks

The dismantling of digital safeguards against climate disinformation accelerated rapidly between 2022 and 2024, driven by a shift in Silicon Valley’s prioritization of “free speech” over factual integrity. The most visible collapse occurred at X (formerly Twitter) following Elon Musk’s acquisition. In December 2022, the company disbanded its Trust and Safety Council, a body previously responsible for guiding moderation policies. By September 2023, X explicitly removed the feature allowing users in the United States, Australia, and South Korea to report posts for “misleading information.” The consequences were immediate and quantifiable: data from the Center for Countering Digital Hate (CCDH) and researcher Abbie Richards indicates that the volume of climate denial tweets tripled from an average of 30,000 per week in July 2022 to 110,000 per week by late 2022. During the UN COP27 summit in November 2022, the platform’s search algorithms actively recommended the hashtag #ClimateScam as the top result for users searching simply for “climate.”

While X publicly abandoned moderation, other platforms maintained policies on paper while quietly defunding their enforcement. YouTube, which announced a ban on monetizing climate denial in October 2021, failed to update its detection systems to catch evolving narratives. A January 2024 investigation by the CCDH revealed that YouTube continued to serve advertisements on 96 prominent climate denial channels, generating an estimated $13.4 million in annual revenue. These channels evaded the 2021 ban by shifting from “Old Denial” (claiming climate change is a hoax) to “New Denial” (attacking solutions like wind and solar as unworkable). In 2023, this “New Denial” constituted 70% of all climate disinformation on the platform, yet it remained largely unmoderated and fully monetized.

Table 23.1: The Policy-Enforcement Gap (2022-2024)

| Platform | Stated Policy | Verified Enforcement Failure | Key Metric |
| --- | --- | --- | --- |
| X (Twitter) | Community Notes (crowdsourced) | Removed “Misleading Info” reporting tool in Sept 2023. | 300% increase in denial tweets post-July 2022. |
| YouTube | Ban on monetizing denial (Oct 2021) | Failed to flag “New Denial” attacking solutions. | $13.4 million/year ad revenue from denial channels. |
| Meta (Facebook) | Climate Science Information Center | Accepted 4,000 fossil fuel disinformation ads during COP27. | <4% of denial posts fact-checked. |
| Pinterest | Total ban on climate misinformation (April 2022) | N/A (ranked highest for policy effectiveness). | Zero-tolerance policy enforced. |

Meta, the parent company of Facebook and Instagram, executed a “shadow rollback” of its climate commitments through personnel cuts rather than explicit policy changes. During its “Year of Efficiency” in 2023, Meta laid off over 20,000 employees, disproportionately affecting trust and safety teams responsible for monitoring disinformation. The impact was clear during the COP27 summit, where analysts found that Meta accepted approximately 4,000 advertisements from fossil fuel-linked entities that dismissed scientific consensus. A 2022 report by Stop Funding Heat found that Facebook’s fact-checking program applied labels to less than 4% of posts containing verifiable climate misinformation. This widespread negligence allows denial networks to operate with near impunity, as the algorithms that drive engagement remain untouched by the skeleton crews left to oversee them.

The retreat is not a failure of technology but a calculated business decision. By 2024, the major platforms had ceded the information space to automated networks. Pinterest remains the sole outlier, having implemented a comprehensive ban on climate misinformation in April 2022 that covers both content and advertising. In contrast, the industry standard has shifted toward “containment” rather than removal, a strategy that has proven wholly ineffective against the volume of bot-generated noise. The Climate Action Against Disinformation (CAAD) coalition ranked X as the worst-performing platform in September 2023, citing a complete absence of clear policies and transparency measures. This regulatory vacuum has allowed the “New Denial” to metastasize, transforming climate discourse from a debate on policy into a battle over basic reality.

Forensic Countermeasures: New Tools for Bot Detection

The 25 Percent Rule: How Minority Bot Groups Dominate Discourse

The arms race between climate disinformation networks and forensic investigators has shifted from simple frequency analysis to high-dimensional algorithmic detection. Early identification methods, which relied on crude metrics like tweet volume or account creation dates, have been rendered obsolete by “cyborg” accounts that blend automated amplification with human curation. Between 2015 and 2025, researchers developed a new arsenal of forensic tools capable of piercing the veil of synthetic denial.

The gold standard for academic detection during this period was Botometer (formerly BotOrNot), developed by the Observatory on Social Media at Indiana University. Unlike basic filters, Botometer utilized a supervised machine learning algorithm that analyzed over 1,000 features per account, including temporal patterns, network clusters, and sentiment analysis. In 2020, researchers using Botometer revealed that 25% of all tweets regarding the climate emergency were produced by bots. Yet the tool faced significant headwinds in 2023, when platform API restrictions severely curtailed the data flow necessary for its analysis, forcing a pivot toward more content-centric forensic methods.
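The intuition behind this feature-based scoring can be shown in miniature. The sketch below is purely illustrative: Botometer itself analyzes 1,000+ features with a trained random forest, whereas these three signals and thresholds are our own invention, chosen only to show how behavioral features separate scheduled automation from human posting.

```python
import statistics

def bot_features(timestamps, followers, following, tweets_per_day):
    """Extract a few behavioral features of the kind bot classifiers use.
    `timestamps` are one account's posting times, in seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        # Near-zero variance in posting intervals suggests scheduled automation.
        "gap_stdev": statistics.pstdev(gaps) if len(gaps) > 1 else 0.0,
        "rate": tweets_per_day,
        "follow_ratio": following / max(followers, 1),
    }

def naive_bot_score(f):
    """Toy 0-5 score; a real system trains a classifier on labeled accounts."""
    score = 0
    if f["gap_stdev"] < 5:
        score += 2  # metronomic posting
    if f["rate"] > 100:
        score += 2  # superhuman volume
    if f["follow_ratio"] > 10:
        score += 1  # mass-follow behavior
    return score
```

Under these invented thresholds, an account posting every 60 seconds at 150 tweets per day maxes out the scale, while a human-like posting pattern scores near zero.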

Filling this void is the CARDS (Computer-Assisted Recognition of Denial and Skepticism) model. Developed to identify the specific rhetorical DNA of climate denial, CARDS moves beyond metadata to analyze the semantic structure of the text itself. The model classifies content into five distinct contrarian taxonomies: “It’s not real,” “It’s not us,” “It’s not bad,” “Solutions won’t work,” and “Scientists are unreliable.” By 2024, the Augmented CARDS model demonstrated an ability to detect “triggers” (specific political or natural events that activate dormant bot networks) with 85% accuracy. This semantic fingerprinting allows investigators to map the propagation of specific denialist narratives, such as the “arson not climate” trope during wildfire seasons, back to their synthetic origins.
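The five-category taxonomy lends itself to automated labeling. The sketch below is a deliberately crude stand-in: the real CARDS model is a trained supervised classifier over learned text features, while this version uses hand-picked keyword cues (all invented for illustration) just to make the classification step concrete.

```python
# Keyword cues standing in for learned features; the category names follow
# the CARDS taxonomy, but the keywords themselves are illustrative.
TAXONOMY = {
    "It's not real": ["hoax", "scam", "no warming", "natural cycle"],
    "It's not us": ["the sun", "volcanoes", "not human-caused"],
    "It's not bad": ["plant food", "warmer is better", "we can adapt"],
    "Solutions won't work": ["turbines fail", "solar is toxic", "net zero impossible"],
    "Scientists are unreliable": ["alarmist", "grant money", "models are wrong"],
}

def classify(text):
    """Return the contrarian category with the most keyword hits, or None."""
    text = text.lower()
    hits = {cat: sum(kw in text for kw in kws) for cat, kws in TAXONOMY.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else None
```

For example, a post claiming wind turbines fail and solar is toxic lands in the “Solutions won’t work” bucket, the “New Denial” category discussed throughout this report.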

In 2025, the forensic arsenal expanded with the launch of Hot Air, a tool developed by Tortoise Media in partnership with the University of Exeter. Unlike generalist bot detectors, Hot Air was purpose-built to track the “patient zero” of climate conspiracy theories. By monitoring 300 known super-spreaders and their automated amplification nodes, the tool successfully traced the lineage of viral falsehoods regarding wind turbine efficacy and solar panel toxicity. This tool represented a shift toward “narrative forensics,” focusing on the flow of the lie rather than just the authenticity of the user.

The most sophisticated countermeasure currently deployed involves the detection of Coordinated Inauthentic Behavior (CIB). Organizations like the EU DisinfoLab and the Institute for Strategic Dialogue (ISD) have pioneered “cross-service behavioral graphs.” These systems do not look for bots; they look for coordination, identifying clusters of accounts that post, retweet, or reply within milliseconds of each other across platforms. This method is particularly effective against “astroturfing” campaigns, where thousands of accounts are synchronized to create the illusion of grassroots opposition to climate policy.
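The temporal edge of such a behavioral graph is straightforward to compute. This sketch is a single-platform simplification of the cross-service approach described above; the function name, window size, and thresholds are our own, chosen to show how millisecond-level co-posting surfaces coordinated clusters.

```python
from collections import defaultdict

def coordinated_clusters(posts, window_ms=500, min_hits=3):
    """posts: list of (account, timestamp_ms, text) tuples.
    Flags account pairs that post within `window_ms` of each other at least
    `min_hits` times, then merges flagged pairs into clusters."""
    posts = sorted(posts, key=lambda p: p[1])
    pair_hits = defaultdict(int)
    for i, (acct_a, t_a, _) in enumerate(posts):
        for acct_b, t_b, _ in posts[i + 1:]:
            if t_b - t_a > window_ms:
                break  # posts are time-sorted, so no later post qualifies
            if acct_a != acct_b:
                pair_hits[tuple(sorted((acct_a, acct_b)))] += 1
    # Union the repeatedly synchronized pairs into coordinated clusters.
    clusters = []
    for (a, b), hits in pair_hits.items():
        if hits >= min_hits:
            for c in clusters:
                if a in c or b in c:
                    c.update((a, b))
                    break
            else:
                clusters.append({a, b})
    return clusters
```

Three accounts that fire in bursts 50-90 ms apart cluster together, while organic accounts posting minutes apart never accumulate enough synchronized hits to be flagged.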

Table 24.1: Primary Forensic Tools for Climate Bot Detection (2015-2025)

| Tool / Framework | Primary Methodology | Target Vector | Key Forensic Metric |
| --- | --- | --- | --- |
| Botometer | Random Forest Classifier | Account Metadata & Behavior | Bot Score (0-5) based on 1,200+ features |
| CARDS | Supervised Machine Learning | Semantic Content (Text) | Taxonomy Classification (e.g., “It’s not us”) |
| CIB Detection | Network Topology Analysis | Temporal Coordination | Synchronization Rate (posts per millisecond) |
| Hot Air | Narrative Tracking | Origin & Amplification | Source Attribution & Virality Lineage |
| TwiBotX | Explainable AI (XAI) | Profile & Graph Features | Feature Importance Scores (“Why is it a bot?”) |

The integration of Large Language Models (LLMs) into detection pipelines has further sharpened these tools. By 2025, systems using GPT-4 and RoBERTa architectures were capable of performing “stance detection,” automatically categorizing the subtle ideological positioning of millions of posts in real time. This capability is critical for identifying “concern trolling” bots: automated accounts designed to feign concern for economic impacts while subtly undermining scientific consensus. These forensic advances ensure that while the volume of synthetic noise increases, the ability to filter, label, and discard it keeps pace.

The Immunity Shield: Section 230 and the Algorithmic Loophole

The legal infrastructure governing the internet was built for a world of human typists, not autonomous botnets. At the heart of this disconnect lies Section 230 of the Communications Decency Act (1996), a twenty-six-word provision that immunizes platforms like X (formerly Twitter) and Meta from liability for content posted by third parties. While this law was designed to protect free speech, it has become a shield for weaponized climate denial. Under current U.S. jurisprudence, a platform that hosts a network of 30,000 bots coordinating a defamation campaign against a climatologist is legally indistinguishable from a digital bulletin board hosting a single user’s opinion. The platform is the “intermediary,” not the publisher, and thus bears zero legal responsibility for the reputational carnage its algorithms amplify.

This immunity creates an insurmountable barrier for scientists targeted by synthetic hate. To win a defamation suit in the United States, a public figure (a category that courts increasingly apply to prominent climate scientists) must prove “actual malice.” This standard, established in New York Times Co. v. Sullivan (1964), requires proving the defendant knew the statement was false or acted with reckless disregard for the truth. This standard collapses when applied to automation. An algorithm has no “state of mind.” It cannot “know” a claim is false; it only knows that the claim generates engagement. Consequently, legal scholars argue that the Sullivan standard grants bot operators a license to libel, as proving the subjective intent of a decentralized script is a legal impossibility.

The Mann Verdict: A Pyrrhic Victory

The limitations of the current legal framework were clearly illustrated in the case of Mann v. Steyn. In February 2024, a D.C. jury awarded climatologist Michael Mann $1 million in punitive damages against conservative writers who had compared his research to child molestation. While widely hailed as a victory for science, the mechanics of the case reveal a broken system. It took Mann twelve years of litigation to hold two human individuals accountable. During that same decade, bot networks generated millions of similar defamatory statements with absolute impunity. Moreover, the victory was fragile; in early 2025, a judge slashed the punitive damages to a mere $5,000 and sanctioned Mann’s legal team over procedural disputes, signaling that the judicial system remains hostile to scientists seeking redress for reputational harm. The message to bot operators was clear: the legal cost of defamation is negligible, provided the volume is high enough to obscure the source.

The emergence of generative AI threatens to widen this gap. Unlike traditional “copy-paste” bots, Large Language Models (LLMs) generate unique text. This raises a legal question: if an AI “hallucinates” a libelous claim about a climate scientist, is the platform still just a host, or has it become the creator? Recent lawsuits, such as Walters v. OpenAI (2023) and Wolf River v. Google (2024), are currently testing this distinction. If courts rule that AI-generated content is not “third-party” speech, Section 230 immunity could shatter, exposing tech giants to liability for the synthetic denial their tools manufacture. Until then, the gap remains absolute.

| Legal Framework | Jurisdiction | Status Regarding Climate Bots | Key Limitation |
| --- | --- | --- | --- |
| Section 230 (CDA) | United States | Total Immunity | Classifies platforms as neutral hosts, not publishers, regardless of algorithmic amplification. |
| Digital Services Act (DSA) | European Union | Partial Regulation | Climate disinformation is not yet explicitly categorized as a “systemic risk” requiring mandatory mitigation. |
| Defamation Law (Common Law) | UK / Commonwealth | High Liability Risk | Burden of proof is on the defendant, but identifying anonymous bot operators to sue is technically impossible. |
| Generative AI Liability | Global (Emerging) | Untested / Volatile | Courts have yet to definitively rule if AI “hallucinations” constitute “actual malice.” |

The European Contrast: Regulation Without Teeth

Across the Atlantic, the European Union’s Digital Services Act (DSA), fully enforceable as of February 2024, offers a theoretical counterweight to American laissez-faire policies. The DSA obligates “Very Large Online Platforms” (VLOPs) to assess and mitigate systemic risks, including the manipulation of electoral processes. Yet climate disinformation occupies a regulatory gray zone. While the European Commission has pressured platforms to sign the “Code of Practice on Disinformation,” compliance remains voluntary for specific topics. As of 2025, no major platform has faced penalties specifically for failing to curb bot-driven climate denial under the DSA. The regulatory apparatus exists, but without explicit categorization of climate denial as a systemic risk comparable to terrorism or child exploitation, the bot networks continue to operate in the seams of the law.

The Deepfake Horizon: Synthetic Video in Climate Discourse

The transition from text-based disinformation to synthetic video represents a catastrophic escalation in the war on climate science. While bot networks previously relied on volume to drown out consensus, they now use generative AI to fabricate reality itself. Between 2023 and 2025, the deployment of “deepfake” technology in climate discourse shifted from crude satire to high-fidelity weaponization, creating a visual environment where the line between actual disaster and algorithmic hallucination has dissolved.

The inflection point arrived in October 2023 with the viral “Vegan Wars” video. Using footage from a 2022 BBC interview, an AI model cloned activist Greta Thunberg’s voice and lip movements to make her appear to advocate for “sustainable tanks” and “biodegradable missiles.” While the content originated from a satire channel, it was immediately stripped of context and amplified by verified accounts like Wall Street Silver to millions of viewers as legitimate proof of eco-extremism. This incident marked the operational debut of the “puppet master” technique, where trusted voices are hijacked to discredit their own movements.

By late 2024, the tactic evolved from character assassination to the fabrication of events. During the aftermath of Hurricane Helene in October 2024, social media platforms were flooded with AI-generated imagery of a young girl clutching a puppy in floodwaters. The image, rendered with emotional precision by tools like Midjourney, was shared by sitting U.S. officials and garnered millions of impressions before being debunked. Unlike the Thunberg deepfake, which attacked a person, this campaign attacked the public’s emotional response to disaster, diluting the impact of real suffering with synthetic tragedy.

The “New Denial” and Synthetic Evidence

The Center for Countering Digital Hate (CCDH) identified a massive strategic pivot in 2024, termed “New Denial.” Their analysis of 12,058 YouTube videos revealed that 70% of denial content had shifted from claiming climate change is not happening to arguing that solutions are unworkable or harmful. Generative video has become the primary engine for this narrative. In February 2026, a video purporting to show an electric vehicle bursting into flames as a family celebrated its purchase went viral. Forensic analysis by Hive Moderation confirmed the footage was 98.7% synthetic, yet it circulated through anti-EV bot networks for weeks, accumulating views that outpaced factual corrections by a factor of ten.

Verified AI-Generated Climate Disinformation Events (2023-2026)

| Date | Event / Content | Target Narrative | Reach / Impact |
| --- | --- | --- | --- |
| Oct 2023 | Greta Thunberg “Vegan Grenades” | Depict activists as irrational warmongers | 3M+ views on X; shared by verified finance accounts |
| Oct 2024 | Hurricane Helene “Girl with Puppy” | Erode trust in disaster relief; emotional manipulation | Shared by U.S. Senator; viral across Facebook/X |
| Jan 2025 | Burning Hollywood Sign Video | Exaggerate wildfire impacts to induce fatalism | Debunked by Deepfakes Analysis Unit; widely shared on TikTok |
| Feb 2026 | Fake EV Fire Celebration | “New Denial”: Solutions (EVs) are dangerous | 98.7% AI probability; 6M+ views on Instagram/X |

The Liar’s Dividend

The proliferation of synthetic video has birthed a secondary phenomenon known as the “liar’s dividend.” As the public becomes conditioned to doubt viral footage, genuine documentation of climate catastrophes is increasingly dismissed as fake. During the severe flooding in Spain in late 2024, authentic images of submerged towns were widely flagged by users as “AI-generated” due to their surreal devastation. This skepticism acts as a shield for inaction; when real evidence is rejected as a digital fabrication, the urgency to respond evaporates.

The technology driving this confusion is becoming indistinguishable from reality. Early deepfakes were identifiable by lip-syncing errors or visual artifacts, but 2025-era models like Sora and updated iterations of HeyGen produce video that withstands casual scrutiny. The barrier to entry has collapsed; a campaign that once required a studio budget now costs less than a monthly streaming subscription. For the networks managing the 30,000+ denial bots identified in previous sections, this capability allows for the automated generation of “proof” to support any falsehood, creating a self-sustaining ecosystem of synthetic reality that fact-checkers cannot debunk fast enough.

“We are no longer fighting for the truth. We are fighting for the very concept of evidence. When a bot network can generate a thousand videos of fake wind turbine failures in an hour, the truth isn’t just hidden; it is buried under a mountain of mathematical lies.”
(Internal Memo, Climate Action Against Disinformation (CAAD), March 2025)

The implications for policy are severe. As deepfakes target specific legislative votes, such as the synthetic videos of politicians “confessing” to climate taxes that circulated during the 2024 election cycle, the democratic process itself is being hacked. The data shows that while text misinformation sways opinions, video rewrites memories. The denial networks have moved beyond arguing the science; they are rewriting the history of the present.

The Measurable Cost of Inaction

The digital noise generated by automated denial networks extracts a specific, verifiable price. While bot farms manufacture doubt, the global economy pays the bill. Data from 2025 indicates that the “wait and see” approach, manufactured by these influence operations, has already cost the global economy an estimated $3.6 trillion since 2000. This figure is not an abstract projection; it represents actual capital destroyed by delayed infrastructure adaptation and unchecked extreme weather events.

The Center for Countering Digital Hate (CCDH) reports that 70% of climate denial content focuses on “New Denial”: narratives that do not dispute warming itself but attack the reliability of solutions and scientific consensus. This shift is calculated. By targeting the feasibility of renewable energy or the integrity of climate scientists, these networks successfully stall legislative action. The Network of Central Banks and Supervisors for Greening the Financial System (NGFS) confirmed in November 2025 that this delay is the primary driver of escalating transition costs.

Data: The Price of Delay

The following table details the economic divergence between immediate intervention and the delay scenarios promoted by disinformation campaigns. The data aggregates findings from the NGFS 2025 Declaration and the Institute and Faculty of Actuaries (IFoA).

| Metric | Immediate Action Scenario | 3-Year Delay Scenario | Long-Term Inaction (2070-2090) |
| --- | --- | --- | --- |
| Transition Cost (2030) | 0.5% of Global GDP | 1.3% of Global GDP | N/A |
| Labor Productivity Loss | Stabilized | Increasing | $1.09 Trillion (2024 Actual) |
| Global GDP Contraction | < 2% | 5-10% | 50% (IFoA Projection) |
| Insured Losses (Annual) | $80-100 Billion | $150 Billion+ | $224 Billion (2025 Total Economic Loss) |
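A back-of-envelope reading of the transition-cost figures makes the stakes concrete. The sketch below assumes a global GDP of roughly $110 trillion; that dollar figure is our illustrative assumption, since the NGFS expresses costs only as GDP percentages.

```python
# Illustrative arithmetic only; the $110T global GDP figure is an assumption,
# not part of the NGFS data cited above.
GLOBAL_GDP_TRILLIONS = 110.0

immediate_cost = 0.005 * GLOBAL_GDP_TRILLIONS  # 0.5% of GDP, about $0.55T
delayed_cost = 0.013 * GLOBAL_GDP_TRILLIONS    # 1.3% of GDP, about $1.43T

delay_premium = delayed_cost - immediate_cost  # cost created purely by delay
```

Under that assumption, a three-year delay more than doubles the transition bill, adding on the order of $0.9 trillion.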

“The current market-led approach to mitigating climate and nature risks is not delivering… Global GDP could contract by 50% between 2070 and 2090 without urgent action.”
(Institute and Faculty of Actuaries (IFoA) & University of Exeter, January 2025)

Real-world consequences are already manifest in the 2025 fiscal year. Munich Re recorded $108 billion in insured losses for the year, with total economic losses reaching $224 billion. The gap between insured and total losses leaves taxpayers and local governments to cover the difference. Moreover, the Lancet Countdown 2025 report identifies that heat exposure eliminated 640 billion potential labor hours in 2024 alone. This lost productivity stripped $1.09 trillion from the global economy, a direct result of the rising temperatures that bot networks fight to normalize.

The “Robo-COP29” report identified over 1,800 bots specifically deployed to amplify pro-fossil fuel messaging during UN negotiations. These automated accounts do not merely clutter social media feeds; they actively dilute the political will required to implement the 0.5% GDP transition plan. Every year of delay induced by these networks pushes the global economy closer to the 1.3% cost threshold and the eventual 50% contraction scenario. The cost of this digital deception is no longer theoretical. It is a mounting debt, compounding daily.
