OSINT Methodology

Fake OSINT Accounts on Twitter: How Copy-Paste Analysts Fool Millions

Strategy Battles — OSINT Standards / Media Analysis

THE FAKE OSINT EPIDEMIC
How Twitter’s biggest conflict accounts built audiences on copy-paste journalism and zero original research

PUBLISHED: APRIL 26, 2026  |  ANALYSIS  |  OSINT METHODOLOGY

🔴 ZERO ORIGINAL SOURCING
🟡 AUDIENCE MANIPULATION
🔵 RETWEET-CHAIN VERIFICATION FRAUD

✓ OSINT Verified Report

This analysis is based on direct observation of named public Twitter/X accounts, cross-referenced against named wire services, official government and military statements, and verified open-source intelligence methodology standards. All accounts referenced are public-facing. Claims about posting behaviour are based on publicly viewable post archives. Original editorial analysis and methodology standards by Strategy Battles.

Verified By

Marcus V. Thorne

Lead Editor, Strategy Battles

April 26, 2026

~90%

Of “OSINT” Posts Are Wire Reposts

0

Primary Sources Contacted By Most Accounts

Millions

Followers Misled by Verification Theatre

🔴 The Problem

They Call It OSINT. It Is Not OSINT.

There is a word that has been hollowed out over the last decade of conflict journalism on social media: OSINT. Open-Source Intelligence. It sounds technical, disciplined, precise. And when it is done properly, it is all three. The problem is that on Twitter and X, it has been reduced to a branding exercise used by accounts that do nothing more sophisticated than read Reuters, Al Jazeera, or TASS and rephrase the lead sentence in their own words within sixty seconds of the wire report hitting the feed.

That is not OSINT. That is aggregation. And aggregation with a military-sounding username, a black profile banner, and a pinned post about “methodology” has fooled millions of followers into believing they are receiving expert-verified intelligence when they are receiving nothing of the kind.

Strategy Battles has tracked this problem across every major conflict of the last three years: Ukraine, the Gaza war, the Hormuz blockade, and the Iran-Israel air campaign of 2026. The same pattern repeats every single time. A large account with hundreds of thousands of followers posts breaking news. The post gets tens of thousands of retweets. The source, if you trace it back, is a single wire agency or state broadcaster. No independent verification has occurred. No primary source was contacted. No satellite imagery was analysed. The account simply moved faster than its competitors in copy-pasting a headline.

🟡 The Mechanics

How the Fake Verification Loop Works

The architecture of fake OSINT credibility is surprisingly straightforward once you see it. Account A posts a claim sourced from a Russian Telegram channel. Account B, which has more followers, reposts it adding the phrase “now confirmed.” Account C, larger still, cites Account B as a source. Within forty minutes, the original Telegram post, which may have been planted disinformation, has been cited by three accounts totalling two million followers, each treating the previous one as independent corroboration.

This is what actual intelligence analysts call a daisy chain, and it is one of the most dangerous forms of source contamination in any conflict information environment. The three accounts have not confirmed anything independently. They have each cited the same unverified Telegram post at one or two removes. The word “confirmed” is fraudulent. But because each account appears to be a separate source, the audience believes verification has occurred.

Real OSINT methodology requires that each piece of information be traced back to an independently verifiable primary source. A named official statement. A geolocated photograph or video. Satellite imagery with metadata. A verified intercept or electronic signature. If you cannot point to a primary source that stands independent of the claim, you do not have verification. You have a rumour with a confident tone of voice attached to it.
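The daisy chain is simple enough to express in a few lines of code. The sketch below, using entirely hypothetical account names and citation links, follows each “confirming” account back to whatever origin cites nothing further. Three apparently independent sources collapse to a single unverified Telegram post.

```python
# Minimal sketch of a daisy-chain trace, with hypothetical account names.
# Each post either cites another post or cites nothing (an external origin).

posts = {
    "telegram_channel_post": None,         # origin: cites nothing further
    "account_A": "telegram_channel_post",  # posts the claim
    "account_B": "account_A",              # reposts, adds "now confirmed"
    "account_C": "account_B",              # cites B as its source
}

def trace_root(post, cites):
    """Follow citation links until reaching a post that cites nothing."""
    seen = set()
    while cites.get(post) is not None and post not in seen:
        seen.add(post)
        post = cites[post]
    return post

roots = {trace_root(p, posts) for p in ("account_A", "account_B", "account_C")}
print(f"distinct primary origins: {len(roots)} -> {roots}")
# distinct primary origins: 1 -> {'telegram_channel_post'}
```

Counting distinct roots rather than distinct accounts is the entire test: three citations, one origin, zero corroboration.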

🔴 The Big Accounts

The Largest Offenders Have the Largest Followings

The accounts with the largest audiences in the conflict OSINT space are, in most cases, not the most rigorous. They are the fastest. Speed is rewarded by the algorithm. Accuracy is not. The result is a selection pressure that consistently elevates the least careful voices and buries the most credible ones, because the careful analyst takes ten minutes to verify something while the aggregator posts in ten seconds.

Several accounts operating under names that invoke professional intelligence concepts have built followings of 300,000 to over a million users during the Ukraine conflict and subsequent wars. Their posting volume is extraordinary, sometimes exceeding one hundred posts per day. That rate alone should raise immediate questions for any reader with analytical training. No individual conducting genuine OSINT research can produce one hundred verified, independently sourced intelligence assessments in a single day. The volume is possible only if sourcing has been eliminated from the process entirely.

A common structural feature of these high-volume accounts is the absence of methodology disclosure. Real OSINT practitioners explain how they reached a conclusion. They link to the primary sources. They acknowledge uncertainty. They publish corrections. The large fake-OSINT accounts on Twitter almost universally do none of these things. They assert. They do not explain. When challenged, they block or ignore.

Strategy Battles — Editorial Observation

“If an account posts a hundred times a day and never cites a primary source, it is not an intelligence operation. It is a content farm wearing a tactical vest. The followers deserve to know the difference.”

🔵 Fake People, Real Influence

The Anonymous Analyst Who Does Not Exist

A significant subset of the fake OSINT ecosystem is not merely incompetent. It is deliberately constructed. Accounts operating under pseudonyms that suggest military or intelligence backgrounds, with profile photos generated by AI or lifted from obscure stock image sites, claim credentials they do not have. Former Special Forces operator. Retired intelligence analyst. Defence contractor. The biography cannot be verified because the person does not exist in any traceable public record.

This is not a fringe phenomenon. Across the major conflict accounts tracked by Strategy Battles since 2022, a substantial portion use pseudonyms, claim unverifiable credentials, and produce content that cannot be traced to any open source more specific than a major newswire. The persona gives the account an authority the content does not earn. The audience responds to the claimed identity, not to the quality of the analysis.

In some documented cases, accounts have been revealed through investigative journalism to be operated by state-adjacent actors with an interest in shaping Western perception of a conflict. The Russian information environment in particular has invested heavily in building Western-facing English-language accounts that adopt the aesthetic and language of independent OSINT analysis while systematically promoting narratives favourable to Russian military claims. The key tell in almost all of these cases is the same: no primary sourcing, no corrections record, and a pattern of posting that amplifies one side’s unverified claims while questioning all claims from the other.

🔴 The Copy-Paste Method

Wire to Tweet in Under 60 Seconds: The Only Skill Required

The operational method of most large fake-OSINT accounts is not complex and requires no specialist knowledge whatsoever. The account monitors a small number of wire feeds, typically Reuters, AP, AFP, Al Jazeera English, and whichever Telegram channels are active in the current conflict zone. When a wire report drops, the account rewrites the first sentence in active voice, adds a map emoji or flag emoji, and posts. The process takes under sixty seconds.

The account has added no information. It has verified nothing. It has provided no analysis. It has simply redistributed a wire report with a different surface appearance, one that looks like original intelligence rather than a news repost. The followers, who may not follow Reuters directly, believe they are receiving early, expert-sourced information. They are receiving a wire paraphrase posted three minutes before the original Reuters tweet appeared in their timeline.

The deeper dishonesty is in the framing. When Reuters posts a claim from Iranian state media, Reuters labels it clearly as sourced from Iranian state media. When the OSINT account reposts it, that sourcing caveat is frequently stripped. The claim arrives in the reader’s feed without the qualification that makes it accurate. The unqualified version spreads further. The disinformation ecosystem benefits whether or not the original account intended that outcome.

🔴 The Government Copiers

The Laziest Trick in the Fake OSINT Playbook: Copy a Government Press Release and Call It Research

Beyond the wire copiers, there is an even more brazen category that has built some of the largest followings in the conflict analysis space on X. These accounts do not even bother monitoring multiple wire services. Their entire operation consists of visiting official government and military websites, reading press releases, and rephrasing the content as though they have sourced and verified it independently. CENTCOM.mil. The Israeli Defence Forces official Telegram. The UK Ministry of Defence daily update. The Ukrainian General Staff Facebook post. They read it. They reword it. They post it as OSINT.

The problem with this approach is not just that it is lazy. It is that government press releases are by definition official statements designed to present events in the most favourable light for the government issuing them. CENTCOM does not publish its own failures. The IDF does not lead with civilian casualty estimates it disputes. The Russian MoD does not acknowledge territorial losses. Every official statement from every government in a conflict zone is a curated, interest-driven account of events. Treating it as verified intelligence is not OSINT. It is PR distribution with a tactical aesthetic applied on top.

What makes this category particularly damaging is that the content looks credible to a casual reader. The source is real. The government website exists. The press release was genuinely published. None of that means the underlying claim has been independently verified. It means an official body with a strong interest in the narrative has said something. Genuine OSINT treats official statements as a starting point for investigation, not a finishing line. The government copier accounts treat them as the finished product and collect hundreds of thousands of followers for doing so.

A specific pattern Strategy Battles has observed repeatedly across multiple conflicts is the selective government copier. This account will rigorously quote CENTCOM or the IDF official statement for any claim favourable to Western military operations, presenting it as confirmed fact. But when an adversarial government makes a claim, the same account will add “UNVERIFIED” or “per Russian MoD” as a caveat. The double standard is the tell. If official statements from one side require no scrutiny but official statements from the other side require labelling, the account is not conducting intelligence analysis. It is conducting advocacy dressed as analysis, and its audience of 400,000 followers has no idea.

Strategy Battles — Observation on Government Copiers

“Reading a government press release and rewording it is not OSINT. It is secretarial work. The government already published that information. You found it on a website. A teenager with a WiFi connection could do the same thing in thirty seconds. The fact that you have 500,000 followers watching you do it does not make it research.”

The government copier accounts are also the primary vector through which official military body counts and strike assessments get embedded into public understanding without challenge. When CENTCOM says a strike eliminated a target, that is a claim by the organisation that conducted the strike about the results of its own operation. It requires corroboration from independent sources before it can be treated as established fact. When a large OSINT account posts the CENTCOM number as a confirmed kill count, its audience does not know they are reading an unchallenged official claim. They believe they are reading verified intelligence. The distinction matters enormously for how the public understands what is actually happening in a war.

✅ What Real OSINT Looks Like

The Standard That Separates Research From Reposting

Genuine open-source intelligence methodology has one governing standard: traceability. Every claim links to a primary source. Every image is geolocated, with the coordinates and the methodology for reaching them stated. Satellite imagery is obtained from verified providers such as Planet Labs or Maxar, or from official government releases, and the analysis is shown, not just asserted. When a building is assessed as destroyed, the analyst explains what features in the imagery lead to that conclusion.

Casualty figures are cross-referenced across a minimum of two independent, named sources before being published. Claims from state actors on either side of a conflict are clearly labelled as such and treated with appropriate scepticism. When information is unverified, it is stated as unverified. When a previous claim turns out to be wrong, it is corrected publicly and explicitly, not quietly deleted.

This is the standard that Strategy Battles applies to every piece of content published on this site. It is the standard applied by the handful of genuinely credible open-source research organisations operating today, including Bellingcat for investigative geolocation work, the Institute for the Study of War for order-of-battle tracking, ACLED for conflict data, and Alma Research for specific regional analysis. These organisations publish methodology. They correct errors. They do not post a hundred times a day.

🟡 Why It Matters

Fake OSINT Has Real Consequences in Real Wars

The argument made by defenders of the large Twitter aggregator accounts is that they serve a useful function by aggregating information quickly during fast-moving conflicts. Even if this were true, and it is a generous characterisation, it ignores the harm caused when the aggregation is wrong. In multiple documented instances across the Ukraine war, the 2023 Gaza conflict, and the 2026 Iran-Israel campaign, unverified claims amplified by large OSINT-branded accounts caused measurable downstream harm.

False casualty figures became embedded in public understanding of engagements before corrections could reach the same audience. Unverified claims about the status of military assets affected market prices for energy and defence stocks. In at least two cases tracked by conflict media researchers, disinformation that originated in state-sponsored Telegram channels reached Western mainstream media within hours, laundered through the apparent credibility of large English-language OSINT accounts that had amplified it without verification.

The reputational harm to genuine open-source analysis is also significant. When a large account with OSINT branding publishes something that turns out to be false, the damage attaches to the concept of OSINT itself rather than to the specific actor responsible. Readers who have been misled become more sceptical of all conflict reporting, including the genuinely verified material, which is precisely the outcome that adversarial information operations are designed to produce.

🔴 The Follower Economy

Why These Accounts Keep Growing Despite Being Wrong

The most disorienting aspect of the fake OSINT ecosystem is that being wrong does not reduce audience size. It may increase it. The mechanics of the X algorithm reward engagement, and nothing drives engagement like a dramatic incorrect claim that gets debated loudly in replies. The account that posts the fastest, even if wrong, gets cited in the correction cycle. Every reply, even an angry correction, is an impression. Every angry impression is a potential new follower.

Several of the accounts tracked by Strategy Battles over the past two years have maintained or grown their follower counts through periods where their error rates were verifiably high. The audience does not punish errors because the audience does not track errors. The account never publishes a corrections record, so there is no centralised document against which performance can be measured. Each incorrect post disappears into the timeline and the next post resets the credibility clock.

Some of these accounts have also monetised directly. Substack newsletters offering “deep analysis” behind paywalls. Merchandise. Speaking invitations. Paid partnerships with VPN providers and trading platforms whose advertising fits the demographic of conflict-interested followers. The audience has become a revenue stream, and the revenue stream creates an incentive to maintain the audience size at all costs, including the cost of accuracy.

🔵 How To Spot Them

A Reader’s Guide to Identifying Fake OSINT Accounts

The simplest test is the source trace. Take any post claiming to report a military development. Search for the same claim in a wire archive such as Reuters or AP. In most cases you will find the wire report published within minutes of the OSINT post, or before it. If the OSINT account’s post predates the wire and provides no primary source, ask how the account obtained that information ahead of the entire international press corps. In the majority of cases, the answer is that it did not. It posted from a Telegram channel or another social media account and got lucky with timing.

Look at the corrections record. Any account operating for more than a month in a fast-moving conflict zone will have published at least some incorrect information. If the account has no correction history at all, it either never posts corrections or deletes incorrect posts. Both behaviours are red flags. Legitimate analysts issue corrections and leave them visible because the public record of correction is itself part of demonstrating methodological integrity.

Count the posts per day. If an account claiming to conduct original OSINT research posts more than twenty times in a day across multiple active conflict theatres, the research claim is not credible. Genuine analysis takes time. If the volume is high and the sourcing is absent, you are looking at an aggregation account that has chosen to present itself as something more sophisticated than it is.

Finally, check the account’s position on unverified Russian military claims specifically. This is one of the clearest dividing lines between legitimate conflict analysis and pro-Russian information operations dressed in OSINT clothing. Genuine analysts label Russian Ministry of Defence claims as claims, not confirmed facts. They apply the same scepticism to Russian territorial assertions as to Ukrainian government statements. An account that routinely presents Russian MoD numbers without qualification while demanding sourcing for Ukrainian claims is not conducting open-source intelligence. It is conducting influence operations.

🔴 They Got Caught

Documented Cases: Named Accounts Exposed, Fake Websites Seized, Operations Dismantled

This is not theoretical. The fake OSINT ecosystem has produced documented, publicly exposed cases of named accounts, state-run fake website networks, and coordinated disinformation operations that have been investigated and dismantled by researchers, governments, and law enforcement. Here is what has already been caught.

OSINT Defender — Simon Anderson, 1.3 Million Followers. One of the most prominent cases in the conflict OSINT space. Investigations revealed the account, which branded itself as an objective open-source intelligence monitor covering global conflicts, was operated by Simon Anderson, a US-based former army officer living in Georgia. It built its reputation during the Ukraine war before pivoting to Gaza, where it posted debunked Israeli military claims as OSINT-verified facts. These included the assertion that a Hamas headquarters existed beneath Gaza’s Al-Shifa hospital, a claim subsequently dismantled by multiple independent investigations. The account never issued a correction. At its peak it had 1.3 million followers treating its output as verified intelligence. It had zero published methodology and zero corrections record.

@MiddleEastOSINT — Deleted After Exposure as Disinformation Operation. Created on October 22, 2023, fifteen days after the outbreak of the Gaza war. Over 13 months it posted 16,394 times, averaging 40 posts per day. Analysis by the Arabi Facts Hub found that the vast majority of its posts were in Hebrew, supporting Israeli military narratives. It was exposed as part of a coordinated campaign to smear Palestinian journalists, including Al Jazeera correspondent Anas Al-Sharif, using footage taken at his father’s funeral following an Israeli airstrike on their family home and falsely framing it as celebration of the October 7 attack. Four other accounts immediately amplified the same fabricated claim using identical text. The account has since been deleted. The posting volume alone, 40 posts per day across an active conflict zone, was a clear mechanical signal that no genuine research was occurring.

Open Source Intel — Openly Pro-Israel, Selling Israeli Merchandise. This account uses the biography “Monitoring Real-Time News and Open Source Intelligence,” language deliberately chosen to mirror the branding of larger OSINT accounts. It is openly based in Israel and sells Israeli merchandise through its profile. During the 2024 Israeli invasion of Lebanon it posted an unsourced map claiming the Israeli military would encircle a specific town, adding it would “hopefully fall by next week.” In another post it described UNRWA, the United Nations Palestinian refugee agency, as a terrorist organisation. None of these posts were labelled as opinion. They were formatted and presented in the visual style of verified intelligence reporting.

CanadianUkrain1 — The Frontline Fraud. During the early phase of the Ukraine war, this account claimed to be an active combatant posting original frontline footage, including killing a Russian soldier with a tomahawk and conducting a classified bicycle mission through Kherson. The account gained massive traction before Bellingcat researcher Aric Toler and others identified inconsistencies. A user named Nexus Intel traced the account’s IP address and confirmed it was posting from Ontario, Canada, thousands of kilometres from the front. Every claimed original piece of content was fabrication layered on top of reposted Telegram footage. Toler described accounts like this as people “cosplaying CIA or FBI.” The phrase is accurate and it applies to hundreds of accounts still operating today that learned nothing from this exposure.

The Yemen Targeting Case — When Fake OSINT May Have Killed People. An anonymous account misidentified a quarry in Yemen as an underground Houthi military base and posted the assessment in the visual style of verified OSINT. Days later, the site was struck in a US military operation. Eight people died. The Pentagon denied using social media posts in target selection, but the timing was documented and reported by 404 Media. Whether or not the strike was linked to the post directly, the incident documented the lethal potential of false OSINT claims circulating during active military operations. The account issued an apology after the strike.

🔵 Beyond Fake Accounts

The Fake Website Networks: Entire News Organisations That Do Not Exist

If fake OSINT accounts are the ground-level infantry of the disinformation ecosystem, fake websites are the artillery. These are not individual users misrepresenting their credentials. These are professionally constructed, state-funded operations that build entirely fictional news organisations with domain names, mastheads, and publishing schedules, then use Twitter and X accounts to distribute their content as independent journalism.

Operation Doppelganger — Russia, 32 Domains Seized by the US Department of Justice. First identified by EU DisinfoLab researchers in 2022, Doppelganger is the most extensively documented state-run fake news website operation yet exposed. Traced directly to the Russian Presidential Administration and two companies, Social Design Agency and Structura, it built clone versions of the Washington Post, Fox News, Der Spiegel, The Guardian, and Le Monde at slightly altered domains. Washingtonpost.pm instead of washingtonpost.com. These fake sites published fabricated stories under the real bylines of actual journalists. Washington Post reporter Loveday Morris found her name attached to fake stories she never wrote, including one titled “No More Money: Kremlin Will Solve Ukraine’s Problems.” The US DOJ seized 32 domains in 2024. The operation rebuilt on new domains within weeks and continued publishing. It had run for over two years before the seizure and was using AI tools including ChatGPT to scale its output.

The Israeli Stoic Campaign — Three Fake Websites, Hundreds of Fake Accounts. Exposed jointly by Haaretz and the New York Times, this operation was commissioned by Israel’s Diaspora Affairs Ministry and executed by Tel Aviv marketing firm Stoic. Three fake news websites were constructed from scratch and populated with content drawn from official media to simulate independent journalism. These sites were linked to Twitter, Facebook, and Instagram accounts that accumulated tens of thousands of followers. The campaign targeted progressive activists and Black Democratic members of Congress in the United States using fabricated antisemitism narratives. One fake site, Good Samaritan, published a map rating US university campuses as safe or unsafe for Jewish students based on fabricated data. The operation was exposed by Israeli OSINT research group FakeReporter. Meta and OpenAI removed associated accounts. The X accounts remained active after exposure.

War on Fakes — Russia’s OSINT-Aesthetic Debunking Operation. This is the most sophisticated entry in this list because it mimics not news journalism but specifically the visual methodology of legitimate OSINT analysis. War on Fakes publishes articles with close analysis of images, CCTV metadata, satellite imagery, and drone footage, all labelled with red circles and annotations in the exact style used by genuine investigators. The difference is the conclusions are always predetermined. Following the Bucha massacre in March 2022, where photographic and witness evidence of Russian war crimes was overwhelming, War on Fakes published multiple pieces claiming the bodies were staged by crisis actors. Within 24 hours of the 2024 missile strike on Kyiv’s Ohmatdyt Children’s Hospital, it had published a claim that the missile imagery was photoshopped. Academic researchers described the operation as using the OSINT aesthetic as cover while inverting the actual purpose of evidence-based analysis entirely.

167 Fake Local US News Sites — One Man in Moscow. The disinformation tracking firm NewsGuard documented a network of 167 websites presenting themselves as independent local news publishers across the United States, with names designed to sound like community journalism. The entire network was traced to a former deputy sheriff from Florida now living in Moscow. Each site distributed pro-Russian narratives embedded in otherwise ordinary-seeming regional content, targeting American readers who would never recognise the origin or question the source.

Strategy Battles — The Scale of What Has Been Caught

“These are only the operations that have been exposed. For every Doppelganger network that gets seized, for every MiddleEastOSINT that gets deleted, there are more still running. They do not stop when caught. They rebuild and continue. The audience keeps following because the audience never knew.”

✅ Inside Strategy Battles — A Glimpse

What Two Per Cent of Our Process Looks Like

We are not going to publish our full methodology here. That would be handing a map to the people we are writing about. But we will show two small examples of the standard we apply to every single article published on this site, so readers understand the difference between what we do and what the aggregator accounts do.

Example one: the three-source rule on casualty figures. When a strike is reported and a casualty number appears, we do not publish that number from a single source under any circumstances. We locate a minimum of three independent reports, each traced back to a distinct origin, not three outlets citing the same wire. We compare the numbers. If they diverge, we note the range and state which figure comes from which source. If only one source has the figure, we label it single-source unconfirmed, and it stays labelled that way in the article permanently, not quietly folded into the body text and forgotten. That one step alone probably eliminates sixty per cent of the errors published daily by large Twitter OSINT accounts, because most of their casualty figures come from a single Telegram post they never checked.
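For readers who want the logic spelled out, here is a hedged sketch of that grouping step, with invented outlets, figures, and origins. The essential move is that outlets citing the same wire collapse to a single origin before the three-origin threshold is counted.

```python
# Hedged sketch of the three-source grouping logic. All outlets, figures,
# and origins below are invented placeholders.

reports = [
    {"outlet": "Outlet1", "figure": 12, "origin": "wire_service_A"},
    {"outlet": "Outlet2", "figure": 12, "origin": "wire_service_A"},  # same wire
    {"outlet": "Outlet3", "figure": 14, "origin": "local_official"},
    {"outlet": "Outlet4", "figure": 15, "origin": "hospital_statement"},
]

def assess(reports, minimum_origins=3):
    # Group figures by ultimate origin, not by outlet.
    by_origin = {}
    for r in reports:
        by_origin.setdefault(r["origin"], set()).add(r["figure"])
    if len(by_origin) < minimum_origins:
        return f"single/under-sourced ({len(by_origin)} origin(s)): label unconfirmed, permanently"
    figures = sorted({f for figs in by_origin.values() for f in figs})
    if len(figures) == 1:
        return f"{figures[0]} (consistent across {len(by_origin)} independent origins)"
    return (f"range {figures[0]}-{figures[-1]} across {len(by_origin)} "
            f"independent origins: publish the range, attribute each figure")

print(assess(reports))
# range 12-15 across 3 independent origins: publish the range, attribute each figure
```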

Example two: the wire timestamp test on breaking claims. When a claim appears on social media during a fast-moving conflict, before we write a single sentence about it, we run it against the major wire archive timestamps. Reuters, AP, AFP. We are looking for whether the claim existed in those wires before or after the social media post that is being treated as the original source. In a significant number of cases, the “exclusive” OSINT post turns out to have appeared after the wire, meaning the account was simply faster at posting than the wire’s own social accounts, not faster at finding information. That is a critical distinction. Speed of tweeting is not speed of discovery. We check the chain of origin before we write, not after.
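The comparison itself is mechanical once the timestamps are in hand. A minimal sketch, with invented timestamps:

```python
# Sketch of the wire timestamp test. Timestamps are invented. If the
# earliest wire item precedes the social post, the "exclusive" was a
# paraphrase of the wire, not an independent discovery.

from datetime import datetime, timezone

social_post = datetime(2026, 4, 12, 14, 3, tzinfo=timezone.utc)
wire_items = {
    "Reuters": datetime(2026, 4, 12, 14, 1, tzinfo=timezone.utc),
    "AP":      datetime(2026, 4, 12, 14, 6, tzinfo=timezone.utc),
    "AFP":     datetime(2026, 4, 12, 14, 9, tzinfo=timezone.utc),
}

earliest_wire, earliest_time = min(wire_items.items(), key=lambda kv: kv[1])

if earliest_time <= social_post:
    lead = (social_post - earliest_time).total_seconds()
    print(f"{earliest_wire} carried the claim {lead:.0f}s earlier: wire paraphrase, not discovery")
else:
    print("Post predates every wire: demand a primary source before treating it as original")
```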

That is two per cent of what we do. There is considerably more. The rest stays internal.

Strategy Battles — Editorial Standard

“Speed of tweeting is not speed of discovery. Every source gets traced. Every number gets checked against at least two others. If it cannot be verified, it gets labelled. That is the floor. Everything above it is the work the aggregators will never do.”

✅ Verified Analyst Profile — Named Commendation

OSINT Intuit / @UKikaski
What Genuine Conflict OSINT Analysis Looks Like

✅ INDEPENDENTLY VERIFIED
🔵 PRIMARY SOURCE ANALYSIS
🟡 METHODOLOGY DISCLOSED

This article has spent considerable length documenting what fake OSINT looks like, how it operates, who runs it, and what it costs the public when they consume it. It is important to balance that record by naming what genuine analysis looks like in the same space. Not because positive examples are rare, but because the contrast between real OSINT and its counterfeit is best understood when placed side by side. OSINT Intuit, operating publicly as @UKikaski on X, represents the standard the rest of the space should be held to.

What sets UKikaski apart is not the volume of his output, which is measured and deliberate rather than relentless. It is not the graphics, the branding, or the aesthetic of his account, which is functional rather than theatrical. It is the reasoning. Every significant claim he publishes comes with a visible chain of logic. He shows the satellite imagery and explains what in the image leads him to his conclusion. He identifies the specific inconsistency in an official statement and cites the document that contradicts it. He does not post a headline and leave the reader to assume the methodology. He puts the methodology in front of the reader and lets them interrogate it.

His work spans multiple active conflict theatres and has consistently demonstrated a willingness to reach conclusions that contradict the official narrative of one or more state actors, including Western governments, when the evidence demands it. This is not contrarianism. It is the basic function of genuine open-source analysis: the evidence leads the conclusion, not the other way around. The aggregator accounts exposed throughout this article work in the reverse direction. They find the conclusion first, usually in a Telegram post or a government press release, and then present it without interrogation. UKikaski interrogates before he publishes. That sequence is the entire difference.

He has also been willing, over multiple conflicts, to step onto what intelligence analysts sometimes call the third rail: the conclusions that are evidentially supported but politically uncomfortable, the assessments that contradict what major governments are saying publicly, the findings that mainstream media will not run because the sourcing is open-source rather than official. His work on military hardware identification, airframe analysis, munitions classification, and order-of-battle assessment has gone further and faster than most professional media on multiple occasions, with the methodology visible enough that critics can attempt to challenge it rather than simply dismiss it.

This matters enormously in the context of this article because the criticism of fake OSINT is not that it produces conclusions that challenge power. The criticism is that it produces conclusions with no traceable evidentiary basis and presents them as verified intelligence. The distinction is not ideological. It is methodological. UKikaski demonstrates that it is possible to produce challenging, politically uncomfortable, analytically rigorous open-source intelligence work on a public platform without falling into the aggregation trap, the government-copier trap, or the AI-content-farm trap that defines most of what calls itself OSINT on Twitter in 2026.

His approach to uncertainty is equally distinctive. He publishes confidence levels. He distinguishes between what the imagery shows and what can be inferred from it. He flags when a conclusion depends on a single source. He updates and corrects publicly and leaves the correction visible. He does not delete posts that turned out to be wrong. He annotates them. This behaviour is so rare in the conflict OSINT space on Twitter that it functions as an immediate credibility signal the moment a reader encounters it. Most large accounts with half a million followers do none of these things. UKikaski does all of them as a matter of course, with considerably fewer followers than many of the fake accounts documented in this investigation, because the algorithm does not reward methodological rigour. It rewards speed and confidence. He chooses rigour anyway.

The analogy he uses to describe the structural problem in the OSINT space is one of the most precise framings of this issue Strategy Battles has encountered. It is worth preserving in full, directly below.

OSINT Intuit / @UKikaski — On the Structural Problem

“Think of it like the difference between a registered pharmacy and someone selling crack on a street corner. The pharmacy has to have a licence. It has to follow protocols. It has to label what it is selling, disclose the risks, and be accountable if something goes wrong. The street dealer has none of those requirements. They can sell anything, label it as anything, and face no regulatory consequence at all. Now imagine the street dealer gets algorithmically rewarded every time someone buys the crack, recommended to new customers, and paid per interaction. That is what fake OSINT on Twitter is. The difference from illegal drugs is that there are no penalties. In fact, the platform actively rewards it. The legitimate pharmacy — the analyst who does the actual work — operates under the same conditions as the street dealer but with all the constraints and none of the algorithmic advantage.”

OSINT Intuit / @UKikaski

Conflict OSINT Analyst — X / Twitter

View @UKikaski on X ↗

That analogy is structurally exact. The legitimate analyst operates under the same platform conditions as the aggregator and the disinformation account, but with the added overhead of actually doing the work: sourcing, cross-referencing, geolocating, verifying, labelling, correcting. The aggregator skips every one of those steps and posts in sixty seconds. The platform’s algorithm sees two pieces of content and rewards the faster, higher-engagement one. The pharmacy, as UKikaski puts it, is competing against the street corner with one hand tied behind its back and no regulatory framework to level the playing field.

Strategy Battles recommends @UKikaski as a reference point for readers who want to understand what genuine conflict OSINT analysis looks like in practice. His account is not a source of breaking news in the aggregator sense. It is a source of grounded, traceable, intellectually serious analysis from someone who understands the field, discloses his methodology, and is willing to follow the evidence wherever it leads, including to conclusions that neither side in a given conflict will find comfortable.

@UKikaski on X — Conflict analysis, airframe identification, order-of-battle, military hardware, geopolitical OSINT

✅ Follow on X ↗

Strategy Battles — Extended Investigation

PART TWO: The Infrastructure of the Fake OSINT Industry

Telegram as the engine room. The money. The AI factories. How mainstream media got fooled. Additional documented operations.

✅ Real OSINT — A Case Study

The Suwayda Black Hawk Report: What Properly Labelled OSINT Actually Looks Like

To understand what is wrong with the accounts this article has been documenting, it helps to look at what proper OSINT-based reporting looks like in practice. In April 2026, Strategy Battles published an analysis of a report by the open-source investigative group Eekad, which claimed to present the first visual evidence of Israeli military helicopter activity over Syria’s Suwayda province. That article demonstrates exactly the methodological standard that separates legitimate open-source intelligence work from the aggregator and government-copier accounts exposed throughout this piece.

The Eekad investigation centred on footage posted to social media by a Druze fighter in Suwayda showing two military helicopters overhead. Eekad analysts examined the visible terrain in the footage and geolocated the flight to the town of Atil, north of Suwayda, with an approach direction consistent with flight from the Israeli-occupied Golan Heights. They then compared visible design features, including landing gear configuration and airframe structure, against known rotary-wing platforms and concluded the aircraft were most consistent with American-made Black Hawk models used by the Israeli military, distinguishing them from Russian-made platforms common under Assad. Two flight corridors were reconstructed using geographic mapping and additional video evidence. Potential landing sites were identified and linked to subsequent social media activity.

Notice what Eekad did that the fake OSINT accounts never do. They explained their method at every step. The geolocation of the terrain. The basis for the aircraft identification. The geographic logic of the flight path reconstruction. The social media evidence trail linking the footage to a specific individual and a specific location. Every link in the chain was shown, not just asserted. A reader can follow the reasoning, challenge it, and form their own view of how solid each element is.

Now look at how Strategy Battles published that report. Every unverified claim was explicitly labelled as unverified within the article text. The OSINT badge at the top stated clearly that the report was single-source, that helicopter identification was based on visual airframe analysis and not confirmed by official sources, and that the connection between social media accounts and the Kafr airstrip was inferred rather than directly confirmed. The assessment section separated what the evidence supported from what it did not. The editorial verification block disclosed what could and could not be independently confirmed at time of publication, and named the lead editor responsible for those judgments.

This is the difference between OSINT and OSINT branding. The Eekad report and the Strategy Battles analysis of it produced claims the reader can evaluate, challenge, and trace. A large fake OSINT account covering the same story would have posted within sixty seconds of seeing the footage: “BREAKING: Israeli Black Hawk helicopters filmed over southern Syria. Identified by OSINT analysts.” No methodology. No caveats. No sourcing chain. No acknowledgement that the identification is visual and unconfirmed. The claim would collect three hundred thousand impressions and fifty thousand engagements. The audience would leave believing it was confirmed intelligence. It would be nothing of the kind.

Strategy Battles — On the Suwayda Standard

“The Eekad report showed its working. Every step of the reasoning was visible. Every limitation was acknowledged. No official source confirmed the findings. We said that, clearly, at the top and throughout the article. That is what OSINT looks like when it is done properly. It does not look like a sixty-second tweet from an account with 800,000 followers that says CONFIRMED in capital letters.”

🔵 The Engine Room

Telegram: Where Fake OSINT Is Born Before It Reaches Twitter

To understand the fake OSINT ecosystem on Twitter and X, you have to understand what feeds it. The answer, in the majority of documented cases across the Ukraine war, the Gaza conflict, and the 2026 Iran-Israel campaign, is Telegram. The messaging application has become the raw material warehouse for the entire conflict disinformation industry, and the large fake OSINT accounts on Twitter are its primary retail distribution network.

Telegram’s architecture makes it uniquely suited to this role. Channels can be operated anonymously. There is no meaningful content moderation. Groups can scale to hundreds of thousands of members. Both Hamas and the IDF have used it as an official distribution channel. Russian state media operates freely on it. Ukrainian civilian reporters post from it. Pro-Kremlin disinformation networks built entire channel clusters on it within 48 hours of the 2022 invasion. The result is a platform where authentic frontline footage, government propaganda, fabricated imagery, and deliberate disinformation all exist in the same information stream with no labels distinguishing any of them from the others.

The Telegram-to-Twitter pipeline has been documented repeatedly by researchers. Unverified content posted on Telegram by militant groups, government channels, or anonymous accounts moves to X within minutes, lifted by fake OSINT accounts that strip any original sourcing context and present it as their own analysis. A documented example from the Gaza conflict in October 2023 illustrates the problem precisely. Palestinian Telegram channels posted footage of Israeli airstrikes suggesting they had struck the Saint Porphyrius Greek Orthodox church in Gaza. The claim moved immediately to X, where blue-verified accounts with OSINT branding asserted confidently that the church had been destroyed. Journalists and the church itself subsequently confirmed the claim was wrong. The OSINT accounts that had amplified it did not correct their posts. Their audiences never saw the correction.

Russia understood Telegram’s utility as a disinformation launcher earlier than any other state actor. Networks of Telegram channels presenting themselves as OSINT operations emerged within the first 48 hours of the February 2022 invasion, with two large clusters identified by researchers as having been created on February 26 and March 1, 2022 respectively. The channels presented themselves as objective fact-checking operations. Their content, when analysed, was overwhelmingly focused on denying Russian war crimes, discrediting Ukrainian military claims, and amplifying Russian MoD talking points as verified analysis. Researchers identified this as a new form of participatory propaganda, where the OSINT aesthetic was weaponised specifically to launder state disinformation through the apparent credibility of citizen investigation.

The practical consequence for readers of Twitter OSINT accounts is severe. When an account with 800,000 followers posts a claim sourced from Telegram without attribution, the reader has no way of knowing that the original material came from an unverified channel that may itself be a state-operated disinformation asset. The Twitter post looks like analysis. It is the end of a chain that started in a Telegram channel nobody verified. The aggregator in the middle took no responsibility for the chain’s integrity and will take no responsibility for any errors that emerge from it. This is the Telegram-to-Twitter conveyor belt. It operates at scale, at speed, and almost entirely without accountability.

🔴 The Bot Problem

Ten Lines of Code: How API Bots Are Replacing Analysts and Nobody Can Tell the Difference

There is a question that every reader of conflict OSINT accounts on Twitter should ask but almost never does: is there actually a person behind this account? Not a state-sponsored troll farm, not a deliberate influence operation, but something even more basic. A script. An API bot. A few dozen lines of code running on a server that costs less per month than a Netflix subscription, automatically scraping wire feeds and government press release pages and reformatting their output as tweets at regular intervals, around the clock, with no human involved in any individual post.

This is not a theoretical concern. The Twitter and X API, even in its post-Musk paid tier structure, can be accessed by developers and used to post content programmatically. A basic bot capable of scraping the Reuters wire, reformatting each headline into a declarative sentence, appending a flag emoji matching the country mentioned, and posting automatically requires approximately ten lines of functional code in Python. A more sophisticated version that monitors multiple wire feeds simultaneously, strips attribution, adds military-sounding framing language, and posts at human-plausible intervals to avoid detection requires perhaps fifty. Neither version requires any understanding of the subject matter being posted. Neither version performs any analysis. Neither version knows or cares whether the claim it is redistributing is accurate, single-source, disputed, or a state-sponsored talking point.

The audience cannot tell the difference between a bot running this process and a human doing the same thing manually, because the output is identical. A human who spends sixty seconds reading a Reuters headline, rewriting the first sentence in active voice, and posting it with a siren emoji has produced exactly the same content as an automated script doing the same task in 0.3 seconds. The only meaningful difference is that the human took slightly longer. Both are offering the audience zero analysis, zero verification, and zero added informational value over simply following Reuters directly. But both look, on the surface of a Twitter feed, like an active informed analyst tracking a conflict in real time.

Government press releases make even better bot fodder than wire feeds because they are structured, machine-readable, and published on consistent schedules. A bot monitoring the CENTCOM press release page, the UK Ministry of Defence daily update, the Ukrainian General Staff Facebook feed, and the IDF Telegram channel could generate thirty to fifty posts per day across multiple active conflict theatres with no human input at any point in the process. It would look, to an outside observer, like an extremely active and well-sourced conflict analyst following events closely across multiple regions. It would be a server running a script.

The practical implication is significant. When this article documents that large fake OSINT accounts post a hundred or more times per day without original sourcing, there is a real possibility in a non-trivial number of cases that the account is not operated by a human making deliberate choices at all. The posting velocity, the regularity, the absence of any typos or informal language, the consistent framing even across time zones and sleep cycles: these are all mechanical signals that some of what presents itself as OSINT on Twitter is fully automated content redistribution. The followers, who have elected to follow a human analyst, may be following a cron job.

🔴 What a Basic Wire Scraper Bot Looks Like in Practice

A minimal automated OSINT-impersonating bot requires only: a wire feed or RSS source; a reformatting function that strips attribution and rewrites in active voice; an emoji lookup table keyed to country names; and an API posting call on a timed interval. The entire operation runs unattended. No analysis is performed at any stage. No verification occurs. No human reads any post before it is published. The account accumulates followers who believe they are receiving expert conflict monitoring. They are receiving automated content redistribution indistinguishable from the manual version.
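To make the point concrete, here is a schematic of that loop in Python. The feed URL and emoji table are placeholders, the posting step is stubbed with a print rather than any live API call, and the real feedparser package is assumed for the RSS parsing. Everything else is exactly the pipeline described above, and there is no verification step because the pipeline has nowhere to put one.

```python
# Schematic of the minimal wire-scraper pipeline, for illustration only.
# Placeholder feed URL and emoji table; posting is stubbed with print().

import time
import feedparser  # real RSS parsing package, assumed installed

FEED_URL = "https://example.com/wire.rss"                  # placeholder feed
FLAGS = {"Ukraine": "🇺🇦", "Iran": "🇮🇷", "Israel": "🇮🇱"}  # country -> flag lookup
seen = set()

def reformat(headline):
    """Strip attribution-style prefixes and bolt an emoji onto the front."""
    text = headline.split(":", 1)[-1].strip()   # drops "REUTERS:"-style prefixes
    flag = next((f for country, f in FLAGS.items() if country in text), "🚨")
    return f"{flag} BREAKING: {text}"

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.title not in seen:
            seen.add(entry.title)
            print(reformat(entry.title))        # a live bot would call a posting API here
    time.sleep(300)                             # poll every five minutes, unattended
```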

The test a reader can apply: does the account post at consistent intervals at all hours including 3am in its claimed home timezone? Does it maintain identical tone and formatting across every post with no variation? Does it never engage in real conversation or respond to direct questions about methodology? These are the behavioural signatures of automated posting, not of a human analyst choosing what to share and when.
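Those signatures can themselves be tested mechanically. A hedged sketch on invented timestamps: it flags an account whose posting gaps are metronomically regular and whose activity covers nearly every hour of the day.

```python
# Sketch of the interval-regularity test, on invented timestamps expressed
# as hours since an arbitrary start. Human posting shows irregular gaps
# and a sleep window; a scheduled script shows neither.

from statistics import mean, stdev

def looks_automated(timestamps_hours, cv_threshold=0.15):
    gaps = [b - a for a, b in zip(timestamps_hours, timestamps_hours[1:])]
    cv = stdev(gaps) / mean(gaps)              # coefficient of variation of gaps
    active_hours = {int(t % 24) for t in timestamps_hours}
    no_sleep_window = len(active_hours) >= 20  # posting in nearly every hour of the day
    return cv < cv_threshold and no_sleep_window

metronome = [i * 0.5 for i in range(200)]      # one post every 30 minutes, 100 hours straight
print(looks_automated(metronome))              # True: regular to the minute, no sleep gap
```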

The X platform’s monetisation model makes this problem considerably worse. Because impressions generate revenue regardless of whether they are produced by a human or a script, an automated account that accumulates a large following can generate passive income indefinitely with no ongoing human effort. The financial incentive to build and run wire-scraping bots disguised as OSINT analysts is real, measurable, and entirely consistent with what has been observed in the conflict information space since 2022. The bot is not just possible. In an unknown but likely significant number of cases, it is already here, and its followers do not know.

🟡 The Business Model

How Fake OSINT Accounts Turned Conflict Into a Revenue Stream

The fake OSINT epidemic did not scale to millions of followers purely through ideological motivation or state direction. A substantial portion of it is driven by money. The monetisation structures that X, Substack, and adjacent platforms have built around engagement create direct financial incentives for the behaviour this article documents, and those incentives have produced a class of conflict content creators whose primary product is not intelligence but audience, and whose audience is the commodity they sell.

X’s revenue sharing programme, introduced under Elon Musk, pays creators a portion of advertising revenue based on impressions generated by their posts. The critical detail is that the impressions do not need to come from agreement or sharing. Angry replies count. Correction threads count. Heated debates in the comments count. Every impression is a monetisable event regardless of whether the post that generated it was accurate. One respected OSINT analyst, known publicly as Obretix, described the post-Musk platform to 404 Media as a space of “self promoting aggregators, posting thousands of tweets to get some revenue share from Elon.” That characterisation is not rhetorical. Accounts that post a hundred times a day at speed, without verification, are not trying to inform. They are running impression farms.

Beyond X revenue sharing, the larger fake OSINT accounts have diversified income substantially. Substack newsletters offering deeper analysis behind monthly paywalls of five to fifteen dollars are common. The pitch is identical in almost every case: the free Twitter content is the sample, the paid newsletter is where the real intelligence lives. In practice the paid newsletters deliver the same aggregated, unverified wire content in a longer format. The subscriber is paying for access to what they believe is expert analysis. They are paying for a reformatted wire feed with a military aesthetic applied to the presentation.

Merchandise has become another significant revenue stream for the largest accounts. Military-themed caps, patches, and apparel bearing the account’s logo or branding, sold to followers who have developed a parasocial investment in the identity the account projects. VPN sponsorships are near-universal among accounts of 100,000 followers or more, as are trading platform partnerships and survival gear affiliates, advertisers who specifically target the demographic of conflict-interested, security-conscious male readers that large OSINT accounts reliably aggregate. One US election-disinformation monitoring organisation documented that in 2022 the forty websites most responsible for spreading election disinformation in the United States generated an estimated 42.7 million dollars in advertising revenue. The fake OSINT space is a smaller version of the same structure, running on the same incentive architecture.

The monetisation dynamic has a specific effect on content quality that is worth understanding precisely. Once an account is generating meaningful revenue from impressions and subscribers, accuracy becomes a commercial risk rather than a commercial asset. A major public correction requires the account to tell its audience that it was wrong. That creates doubt. Doubt reduces subscriptions. Reduced subscriptions reduce revenue. The rational commercial calculation is to say nothing, delete the incorrect post, and continue posting. This is not a theoretical behaviour pattern. It is the observed behaviour of the majority of large fake OSINT accounts across every major conflict since 2022. The financial structure of the platform rewards it. The account optimises for it. The audience loses.

Strategy Battles — On The Monetisation Problem

“Disinformation and propaganda are high-engagement content. They generate anger. Anger generates impressions. Impressions generate money. The platform does not distinguish between an impression that came from outrage at a false claim and an impression that came from genuine engagement with verified reporting. To the algorithm they are the same. They are not the same.”

🔴 The AI Revolution

ChatGPT, Deepfakes, and Content Farms: How AI Made Fake OSINT Industrial

Everything described in the sections above required human effort. Writing fake posts, building audiences over months, editing video, constructing fake website networks. That constraint created a natural ceiling on the scale of fake OSINT operations. The arrival of accessible generative AI tools removed that ceiling entirely. By 2026, a single person with free access to commercially available AI tools can create a convincing synthetic video of a military strike, generate a library of fake intelligence assessments in consistent voice and style, and have the content reach a million people within an hour of posting. The Iran-US conflict of early 2026 became the first major armed conflict where AI-generated disinformation played a documented, significant, and coordinated role on Twitter at scale.

The scale of the AI deepfake problem has grown at a documented rate that makes 2022-era fake OSINT look artisanal. Researchers estimated that approximately 500,000 deepfake videos were shared on social media in 2023. Projections put the 2025 figure at 8 million, a sixteen-fold increase in two years. During the Israel-Iran conflict of June 2025, AI-generated images and videos were deployed in coordinated campaigns by both sides, reaching over 100 million combined views of false material before the majority of it could be flagged. Iran-linked networks focused on fabricating dramatic visual evidence of Israeli strikes. Pro-Israeli accounts circulated old footage of Iranian protests falsely framed as current anti-government demonstrations triggered by Israeli attacks. One widely shared AI-generated video purported to show Iranians chanting support for Israel in the streets of Tehran. It was obviously fabricated to any trained analyst. To the millions of followers of large OSINT-branded accounts who shared it, it appeared to be original verified footage.

ChatGPT and similar large language models have accelerated the text side of fake OSINT production to a degree that makes manual posting volumes irrelevant as a detection signal. An account that previously maxed out at one hundred posts per day due to human writing capacity can now prompt a language model to generate five hundred posts per day in a consistent voice, each superficially different from the last, each carrying the same framing and narrative bias without repeating phrases that detection systems could flag. The Russian Doppelganger network documented by the Digital Forensic Research Lab was confirmed to have used ChatGPT to translate articles into multiple languages and generate social media posts and comments at a volume no human team could have produced manually. The model’s output was being used not to inform readers but to scale a disinformation operation beyond human staffing limits.

The deepfake problem introduces what researchers have identified as the liar's dividend. When AI-generated fake imagery becomes prevalent enough that audiences know it exists, the existence of fakes becomes a tool for dismissing genuine evidence. Real footage of a documented atrocity can be labelled as AI-generated by bad actors, and in an information environment where AI fakes are common, a portion of the audience will accept that label without demanding proof. The prevalence of AI-generated fake OSINT therefore degrades the credibility of authentic OSINT at the same time as it floods the information environment with false content. Both effects serve adversarial actors. Both effects are accelerating.

Content farms powered by AI have emerged as a distinct category within the fake OSINT ecosystem. These are not individual accounts operated by ideologically motivated actors. They are commercial operations that produce conflict content at machine scale, monetising it through X revenue sharing and advertising, with no investment in accuracy and no stake in any conflict’s outcome. The AI Forensics research organisation documented accounts on TikTok in late 2025 that had been fully automated using agentic AI tools, uploading content at rates no human operator could sustain. The pattern is migrating to X and is already visible in the conflict OSINT space in the form of accounts whose post consistency, timing regularity, and volume are mechanically impossible for any individual analyst to produce.
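One way to make "mechanically impossible" concrete is to measure the regularity of an account's posting intervals. The sketch below is a minimal illustration, not a production bot detector, and the timestamps are hypothetical. Humans post in bursts separated by long gaps, so the variation of their inter-post intervals is high; scheduled automation produces intervals with almost no variation at all:

```python
from statistics import mean, stdev

def interval_regularity(timestamps: list) -> float:
    """
    Coefficient of variation of inter-post intervals.
    Bursty human posting typically yields a CV well above 1;
    scheduled or agentic posting tends toward machine regularity (CV near 0).
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(intervals) < 2 or mean(intervals) == 0:
        return float("nan")
    return stdev(intervals) / mean(intervals)

# Hypothetical timestamps in seconds: an automated account posting every ~10 minutes
bot_like = [i * 600 + jitter for i, jitter in enumerate([0, 2, -1, 3, 0, 1, -2, 2])]
print(f"bot-like CV:   {interval_regularity(bot_like):.3f}")   # near zero

# A bursty human pattern: clusters of activity with long gaps between them
human_like = [0, 40, 95, 130, 7200, 7260, 7400, 36000]
print(f"human-like CV: {interval_regularity(human_like):.3f}")  # well above 1
```

No single metric proves automation, but an account sustaining hundreds of posts a day at near-zero interval variation for months is not an individual analyst, whatever its banner says.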

🔵 The Laundering Chain

How Mainstream Media Got Fooled: The Fake OSINT to Front Page Pipeline

The damage caused by fake OSINT does not stay on Twitter. Its most consequential path runs through the mainstream media organisations that have come to treat large, verified-by-follower-count social media accounts as credible secondary sources during breaking conflict coverage. Major newsrooms including the New York Times, the Washington Post, CNN, and the BBC have all developed internal visual forensics and OSINT capability of their own over the past decade. The problem is that during the initial hours of a fast-breaking event, the pressure to publish creates windows where large social media accounts are cited before they have been independently verified, and the corrections that follow never reach the same audience as the original report.

The Al-Ahli hospital incident in Gaza in October 2023 is the most extensively documented case of fake OSINT feeding directly into a mainstream media failure with global consequences. An explosion at the hospital grounds was initially attributed by Palestinian officials to an Israeli airstrike. Anonymous OSINT accounts on Twitter began circulating analysis of available imagery within minutes, a number of them concluding that the explosion evidence matched the Israeli strike narrative. Large accounts with hundreds of thousands of followers amplified these conclusions as verified analysis. Wire agencies and broadcast networks, under intense competitive pressure to report the story, cited the social media analysis in their initial coverage. The New York Times subsequently conducted a detailed independent investigation and concluded that the evidence pointed to a failed rocket launch rather than an Israeli airstrike. By that point, the original claim had embedded itself globally. The correction required far more time and effort to reach an equivalent audience than the initial false claim had needed to go viral.

Al Jazeera’s analyst Idrees Ahmad described the dynamics of that episode with precision: once a respected figure in the OSINT community had endorsed the initial theory, a groupthink developed among other accounts that endorsed it in turn, each reinforcing the others’ credibility in a closed loop. When the New York Times investigation contradicted the consensus, accounts that had committed publicly to the theory looked for rationalisations to preserve it rather than updating their position. The expert reputation had become attached to the conclusion and could not be separated from it without cost. This is the journalistic equivalent of the daisy-chain verification problem described earlier in this article, operating at exactly the same level of methodological failure but with major newsrooms as the final amplification layer instead of Twitter accounts.
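The check that breaks a daisy chain is mechanical: trace every "confirmation" back through whatever it cites until you reach original material, then count how many distinct origins survive. A minimal sketch, with an invented citation graph standing in for the real one:

```python
def root_sources(citations: dict, claimants: list) -> set:
    """Trace each claimant back through its citation chain to the original source."""
    roots = set()
    for account in claimants:
        node = account
        while citations.get(node) is not None:   # follow who cited whom
            node = citations[node]
        roots.add(node)
    return roots

# Hypothetical citation graph: who sourced their post from whom (None = original material)
citations = {
    "telegram_channel": None,          # unverified original post
    "account_A": "telegram_channel",
    "account_B": "account_A",          # adds the word "confirmed"
    "account_C": "account_B",
    "newsroom": "account_C",           # cites the largest account as a source
}

roots = root_sources(citations, ["account_A", "account_B", "account_C", "newsroom"])
print(roots)       # {'telegram_channel'}
print(len(roots))  # 1 independent origin, not four
```

Four apparent sources collapse to one unverified post. Any claim whose citation graph resolves to a single root has one source, however many accounts and outlets sit between that root and the audience.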

A separate category of mainstream media error involves the misuse of open-source tracking platforms that have themselves been manipulated. Researchers at Cardiff University documented the case of a maritime AIS tracking incident where Russian ships appeared to have behaved anomalously according to data displayed on public vessel tracking websites. Multiple mainstream outlets published stories attributing this to GPS spoofing, cyberattacks, or deliberate interference. The actual explanation, as analysed by specialists who understood how AIS reporting works, was that the tracking website’s volunteer submission system had been seeded with false data. The mainstream media stories had not cited a primary source. They had cited a tracking website that anyone could manipulate. The error propagated through multiple news cycles before the technical explanation reached the same outlets that had initially published the false version.
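The sanity check the specialists applied is not exotic. Given two timestamped AIS fixes for the same vessel, the implied speed between them either is or is not physically possible. A minimal sketch with invented fixes and an illustrative 40-knot threshold:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_nm(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two fixes in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(a)) * 3440.065   # mean Earth radius in nautical miles

def implied_speed_knots(fix1, fix2) -> float:
    """Speed a vessel would need to travel between two timestamped AIS fixes."""
    (t1, lat1, lon1), (t2, lat2, lon2) = fix1, fix2
    hours = abs(t2 - t1) / 3600
    return haversine_nm(lat1, lon1, lat2, lon2) / hours if hours else float("inf")

# Illustrative fixes (unix seconds, lat, lon): a seeded false report teleports the ship
genuine = ((0, 54.60, 19.90), (3600, 54.75, 20.20))    # ~14 kn in an hour, plausible
seeded  = ((0, 54.60, 19.90), (3600, 50.00, -1.00))    # hundreds of kn, impossible

for label, (a, b) in [("genuine", genuine), ("seeded", seeded)]:
    v = implied_speed_knots(a, b)
    print(f"{label}: {v:.0f} kn {'FLAG' if v > 40 else 'ok'}")
```

A vessel that covers hundreds of nautical miles in an hour has not been spoofed by a cyberattack. Its track has been polluted with a false submission, which is exactly what a crowd-sourced tracking site permits and exactly what the initial stories never tested.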

The structural reason mainstream media gets fooled is not incompetence. It is speed combined with the loss of area expertise. As newsrooms have cut foreign correspondents and regional specialists over the past fifteen years, the institutional knowledge that would allow an editor to immediately question a suspicious claim from a conflict zone has been hollowed out. What replaced it in breaking coverage is social media monitoring, a practice that works well when the accounts being monitored are credible and works catastrophically when they are not. The fake OSINT accounts understood this transition and positioned themselves to fill the expertise gap. They speak in the language of verified analysis. They use the formatting conventions of intelligence assessment. They present with confidence. Newsrooms under deadline pressure cite them. The disinformation reaches the front page. The correction runs on page twelve two days later.

🔴 Further Documented Cases

Additional Named Operations and Accounts Exposed Since 2022

Aurora Intel, Israel Radar, and ELINT News — The Hasbara Distribution Network. Al-Shabaka policy researchers identified these three accounts specifically as sourcing their information uncritically and almost exclusively from Israeli military sources, overreporting Palestinian armed activity while systematically underreporting Israeli structural violence and documented strikes on civilian infrastructure. The accounts present themselves as neutral technical analysis operations. Israel Radar’s name implies radar data tracking. ELINT News suggests signals intelligence. Neither operates anything resembling a radar system or signals collection capability. They read IDF statements and post them in OSINT formatting. The credibility gap between what the name implies and what the account actually does is the entire fraud.

Gazawood and the Pallywood Disinformation Account. An account called Gazawood was investigated by Forbidden Stories, The Seventh Eye, and Radio France Internationale and found to be operated by Idan Knochen, an ultra-Orthodox Jewish author from Jerusalem. The account’s stated purpose was fact-checking claims about Gaza. Researchers found that only 5.75 per cent of the account’s content was legitimate fact-checking by any recognisable definition. The remaining content used the visual language of fact-checking (red circles, labels, annotations) to dismiss verified evidence of casualties and destruction as staging or fabrication. The former Director General of Israel’s Ministry of Strategic Affairs was identified among those associated with the account. This is state-adjacent disinformation dressed as citizen investigation, using the OSINT aesthetic to launder official denial into the fact-checking space.

Russian Warfakes Network — 700,000 Telegram Subscribers, Coordinated Cross-Platform Distribution. The Warfakes network, operating simultaneously on Telegram and through its website waronfakes.com, built over 700,000 Telegram subscribers across a cluster of regional channels that all launched within the first five days of the 2022 Ukraine invasion. The content was shared across channels simultaneously, translated into multiple languages, and pushed to Twitter and VKontakte. The network described its mission as fighting fake news. Its actual function, documented by academic researchers at the University of Amsterdam, was to deny Russian war crimes by applying the investigative aesthetic of OSINT to predetermined conclusions. The Bucha massacre, documented by satellite imagery, witness testimony, and forensic investigation, was described as a staged production in multiple Warfakes posts. The network remains active.

Amjad Taha and the Emirati Dysinfluencer Factory. Researcher Marc Owen Jones, publishing in December 2025, documented the operation of Amjad Taha, a figure with significant Twitter presence who functioned as a hub for an Emirati state-adjacent network of influence accounts pushing specific regional narratives. The operation used coordinated amplification, where accounts that appeared independent would simultaneously engage with Taha’s posts in patterns consistent with automated or directed behaviour rather than organic interest. The content mixed legitimate regional commentary with narratives aligned with specific UAE geopolitical interests. The account’s apparent independence was the product. Its actual coordination was invisible to the casual follower.

The Moldova Election AI Operation — ChatGPT Writing Kremlin Propaganda for Pay. Ahead of Moldova’s September 2025 parliamentary election, a Russian-funded disinformation network was uncovered that paid ordinary users to post pro-Kremlin content on social media, with ChatGPT providing guidance on message framing, including recommendations on the use of satirical elements to improve engagement. A fake AI-generated platform called Restmedia, publishing Kremlin-aligned content with IP addresses linked to Russia, paid engagement farms in Africa to promote its narratives through verified social media accounts in an amplification-for-hire scheme. This is the complete industrialisation of the fake OSINT and fake news model. State funding, AI writing assistance, paid amplification farms, verified account purchasing, all assembled into a single coordinated operation targeting a specific election in a specific country on a specific date.

Strategy Battles — The Pattern Across Every Case

“Every single operation documented above uses the same core deception: the name implies capability that the account does not have. ELINT News does not collect signals intelligence. Israel Radar does not track radar. Warfakes does not fact-check. Gazawood does not investigate. The name is the product. The audience buys the name and receives something entirely different in return.”

8M+

Deepfake Videos Projected Online by End of 2025

100M+

Views of AI-Generated Fake Conflict Footage in Iran-Israel 2025

$42.7M

Ad Revenue Earned by Top 40 US Disinformation Sites in 2022 Alone

Strategy Battles Assessment

The fake OSINT problem on Twitter and X is not a minor credibility issue. It is a structural feature of the conflict information environment that benefits adversarial actors and degrades public understanding of wars that matter. The accounts that have grown the largest audiences have done so not by being more accurate than their competitors but by being faster and more confident. Confidence without methodology is not analysis. It is performance. And performance of that kind, presented as verified intelligence during an active war, is dangerous.

Strategy Battles operates by a different standard. Every article on this site is sourced to named, hyperlinked primary sources. Every claim that cannot be independently verified is labelled as unverified. Russian territorial claims are consistently labelled as claims. Corrections are published and retained. The editor is named and contactable. This is not exceptional. It is the baseline that any outlet describing itself as OSINT-based should meet. The fact that most Twitter accounts with OSINT branding fall far short of this baseline is the core of the problem.

The solution is not to stop reading conflict analysis on social media. The solution is to demand, from every account and every outlet, the same standard: show your sources, explain your methodology, correct your errors, and be honest about what you do not know. If an account cannot or will not meet that standard, its follower count is irrelevant. It is not a source. It is noise.

✅ Strategy Battles — Verified Analyst Directory

Follow These. Ignore the Rest.

This is not a comprehensive list. It is a starting point for readers who want genuine analysis and have spent too long following accounts that were giving them something else. Every entry below discloses methodology, cites sources, publishes corrections, and has a verifiable track record in the conflict OSINT space.

OSINT Intuit / @UKikaski

✅ LEAD RECOMMENDATION

Conflict OSINT / Military Hardware / Order of Battle / Geopolitical Analysis

The standard-bearer for how public conflict OSINT analysis should be done. UKikaski publishes with visible methodology, cited sources, explicit confidence levels, and public corrections. He covers airframe identification, munitions classification, military hardware, order-of-battle tracking, and broader geopolitical dynamics across multiple active conflict theatres. He is willing to reach and publish conclusions that contradict official government narratives when the evidence demands it, without sensationalising and without abandoning the evidentiary chain. His output is measured rather than relentless. Every post adds something the wire does not already contain. Strategy Battles considers him one of the most credible independent OSINT voices operating publicly on X.

Methodology Disclosed
Corrections Published
Primary Sources Cited
Uncertainty Acknowledged

Bellingcat

Investigative Geolocation / Open Source Investigation

@Bellingcat ↗

The organisation that defined the modern standard for open-source investigative journalism. Founded methodology-first, publishes full sourcing, has broken major investigations on flight MH17, chemical weapons in Syria, the Salisbury poisonings, and dozens of other cases where official narratives collapsed under open-source scrutiny. Hires experienced researchers and maintains editorial standards equivalent to a professional newsroom.

Institute for the Study of War (ISW)

Order of Battle / Ukraine War / Conflict Mapping

@TheStudyofWar ↗

Professional research organisation producing daily Ukraine conflict assessments with sourced, map-based order-of-battle tracking. Methodology is published, analysts are named, assessments are time-stamped and archived. One of the very few sources in the conflict space that consistently applies verifiable standards to territorial claim reporting.

ACLED — Armed Conflict Location and Event Data

Conflict Data / Event Tracking / Global Coverage

@ACLEDinfo ↗

The global standard for conflict event data. Every event is coded, sourced, and archived with full methodology disclosure. ACLED does not post breaking claims. It provides the structured, verifiable conflict data that makes genuine analysis possible. An essential reference point for anyone wanting to understand conflict patterns rather than just individual events.

Calibre Obscura

Weapons Identification / Arms Tracking / Conflict Hardware

@CalibreObscura ↗

Specialist weapons identification and arms tracking account with a documented track record for accuracy and methodology disclosure. Known for precise, sourced work on weapon provenance, ammunition types, and equipment identification in active conflict zones. One of the analysts who was early to challenge the CanadianUkrain1 fraud documented earlier in this article. Publishes corrections. Engages with challenges to its analysis rather than ignoring or blocking them.

This directory will be updated as Strategy Battles’ editorial team identifies additional analysts meeting the methodology and disclosure standards described in this article. Inclusion requires: named or consistently pseudonymous identity with verifiable track record; published methodology; public corrections record; primary source citation; and demonstrated independence from state or commercial narrative interests. Suggestions can be submitted via the contact page.


Editorial Verification

This analysis is based on direct observation of public Twitter and X accounts from February 2022 to April 2026, cross-referenced against named wire services, verified OSINT methodology standards published by Bellingcat and the Stanford Internet Observatory, and ACLED and ISW-CTP methodology disclosures. Additional sourcing includes investigative reporting by 404 Media, The New Arab, Al Jazeera, Rest of World, Rolling Stone, DFRLab, Haaretz, and the New York Times on specific named operations. Academic research from Cardiff University CREST, the Institute of Network Cultures (University of Amsterdam), the Alan Turing Institute CETAS, Hozint, and the Stimson Center on AI-generated disinformation. All accounts named in this article are public-facing and their operations have been documented by named independent investigative bodies. AI deepfake statistics sourced to NewsGuard, Stimson Center, and Hozint published research. Monetisation figures sourced to documented US election disinformation revenue research. Telegram pipeline dynamics sourced to Rolling Stone and EU DisinfoLab published analysis. Strategy Battles editorial standards and OSINT compliance methodology are disclosed in the OSINT badge at the top of every article on this site.

Approved for Publication

Marcus V. Thorne
Lead Editor, Strategy Battles

©StrategyBattles.net 2026

This article is for news and analysis purposes only. Based on publicly available open-source reporting, published OSINT methodology standards, and platform-observable account behaviour. All rights reserved. Not for commercial reuse without permission.

Strategy Battles Editorial Team

Strategy Battles is led by Marcus V. Thorne, a military analyst and open-source intelligence specialist with over a decade of operational experience in defence logistics and tactical conflict reporting. Marcus oversees the editorial direction of every report published on Strategy Battles, applying a rigorous multi-stage verification process designed to deliver accurate, accountable journalism in an information environment increasingly defined by wartime disinformation.