In 2022, AI-generated text was detectable with reasonable accuracy by trained human readers and early classifiers: the outputs of first-generation large language models had characteristic tells — syntactic regularity, generic phrasing, a certain flatness of voice — that distinguished them from human writing with enough reliability to be operationally useful. By 2024, that reliability had eroded substantially. By 2026, for all practical purposes, it is gone. The current generation of LLMs produces text that trained linguists, experienced journalists, and purpose-built classifiers cannot reliably distinguish from human writing at scale. This is not a minor technical refinement. It is a fundamental shift in the detection problem.
The implications for disinformation research and the organizations that depend on it are significant. Detection strategies built around identifying the linguistic signatures of AI-generated content (the watermarking proposals, the classifier models, the 'AI detector' tools) have been systematically defeated by model improvements that were predictable and are still underway. The research community's consensus, documented across multiple peer-reviewed publications in 2025, is that text-level detection of AI-generated content is no longer a viable primary detection strategy. The question is what replaces it.
Why Text-Based Detection No Longer Works
The failure of text-based AI detection is not primarily a failure of the detectors — it is a consequence of the capability curve of the models being detected. Every generation of AI writing has been more fluent, more variable, and more contextually calibrated than the last. The detection approaches developed against GPT-3 outputs were outpaced by GPT-4. The approaches developed against GPT-4 were outpaced by the subsequent generation. And the commercial incentive structure for LLM development — which rewards fluency, naturalness, and human-likeness — is precisely the opposite of the incentive structure needed to make AI writing detectable. The models will continue to improve; the text-based detection problem will continue to get harder.
Additionally, the deployment patterns of AI-generated disinformation have evolved to target the specific weaknesses of text-based detection. Hybrid campaigns, which mix AI-generated content with authentic human posts, have become the dominant architecture for well-resourced operations. By seeding a coordinated network with AI-generated content and amplifying it through genuine human accounts, operators can defeat classifiers calibrated to detect AI writing: the authentic posts provide enough human signal to pull the cluster's aggregate AI-score below detection thresholds, while the AI-generated content provides the scale and velocity that coordinated human writing alone cannot achieve. Detecting the AI-generated component of a hybrid campaign through text analysis is, in most cases, operationally infeasible.
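To make the dilution mechanics concrete, here is a back-of-the-envelope sketch. All of the numbers are illustrative assumptions, not measurements from any deployed classifier: the point is only that a modest fraction of authentic posts is enough to drag a cluster's mean AI-score under a flagging threshold.

```python
# Illustrative sketch (all numbers hypothetical): mixing authentic posts
# into a cluster dilutes its mean AI-score until a cluster-level
# classifier no longer flags it.

def cluster_ai_score(ai_posts: int, human_posts: int,
                     ai_score: float = 0.85, human_score: float = 0.15) -> float:
    """Mean per-post AI-likelihood score across a mixed cluster."""
    total = ai_posts + human_posts
    return (ai_posts * ai_score + human_posts * human_score) / total

THRESHOLD = 0.70  # hypothetical cluster-level flagging threshold

pure = cluster_ai_score(ai_posts=100, human_posts=0)     # 0.85 -> flagged
hybrid = cluster_ai_score(ai_posts=100, human_posts=40)  # 0.65 -> passes

print(f"pure AI cluster:  {pure:.2f}, flagged: {pure > THRESHOLD}")
print(f"hybrid cluster:   {hybrid:.2f}, flagged: {hybrid > THRESHOLD}")
```

In this toy setup, forty authentic posts alongside a hundred AI-generated ones are enough to slip under the threshold, which is exactly the trade the hybrid architecture is making: a small human overhead buys evasion for the machine-generated bulk.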
Behavioral Signals: The New Detection Standard
The research consensus that has emerged from the failure of text-based detection is a shift toward behavioral signals — the observable patterns of how accounts post, amplify, and coordinate, rather than what they say. Behavioral detection has a key advantage over text analysis: it is not defeated by model improvements. An LLM can be trained to produce more natural text; it cannot be trained to make a coordinated network of accounts appear to be independently motivated humans, because the coordination itself is the operational requirement that the campaign cannot abandon without losing its amplification advantage.
The behavioral signals that Rolli IQ's detection methodology prioritizes are the same ones documented in earlier CIB research: posting velocity and temporal clustering, network topology (dense cross-amplification within defined account clusters), account lifecycle patterns (creation clustering, dormancy followed by sudden activation), and cross-platform correlation. What has changed in 2026 is the weight given to velocity as the primary early-warning signal. In the pre-LLM era, content analysis could provide early detection because AI-generated text had detectable characteristics that appeared in the earliest posts. In the post-LLM era, content analysis provides no early-warning advantage. The velocity signature — the near-simultaneous activation of a coordinated network — is now the earliest reliably detectable signal of a coordinated campaign, and detecting it requires monitoring infrastructure that can identify the anomaly in the minutes after activation rather than the hours after it has achieved scale.
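One of the lifecycle signals above, dormancy followed by sudden activation, can be sketched in a few lines. This is a simplified illustration under assumed parameters (the ten-minute window and the twenty-account cutoff are ours, not Rolli IQ's), but it shows why the signal survives model improvements: it never looks at the text at all.

```python
# Simplified sketch of one lifecycle signal: near-simultaneous activation
# of many accounts in a cluster. Window width and min_accounts are
# illustrative assumptions.
from datetime import datetime, timedelta

def activation_burst(first_posts: list[datetime],
                     window: timedelta = timedelta(minutes=10),
                     min_accounts: int = 20) -> bool:
    """True if at least `min_accounts` accounts make their first post
    inside a single sliding window of the given width."""
    times = sorted(first_posts)
    for i, start in enumerate(times):
        # count first posts landing in [start, start + window)
        burst = sum(1 for t in times[i:] if t - start < window)
        if burst >= min_accounts:
            return True
    return False
```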
Velocity-First Detection in Practice
Velocity-first detection reorients the analytical process: rather than beginning with content analysis to identify suspicious material and then investigating the accounts amplifying it, it begins with velocity anomalies — statistically significant deviations from baseline posting rates in a monitored topic area — and then applies behavioral and content analysis to the anomalous cluster. This approach catches coordinated campaigns at the seeding and bridge stages, before content analysis has anything useful to work with at scale.
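As a sketch of what that entry point might look like, the following flags a monitored topic when its current posting rate deviates sharply from its own recent baseline. The per-minute bucketing and the three-sigma threshold are assumptions for illustration, not the platform's actual parameters.

```python
# Sketch of a velocity-first entry point: flag a topic when its current
# posting rate deviates sharply from its own baseline. Bucket size and
# the 3-sigma threshold are illustrative assumptions.
import statistics

def velocity_anomaly(posts_per_minute: list[int], sigmas: float = 3.0) -> bool:
    """posts_per_minute: per-minute post counts, oldest first; the last
    bucket is the current minute. Flags when the current rate sits more
    than `sigmas` standard deviations above the historical baseline."""
    baseline, current = posts_per_minute[:-1], posts_per_minute[-1]
    mean = statistics.mean(baseline)
    spread = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
    return (current - mean) / spread > sigmas

# A flagged spike is then handed to behavioral and content analysis.
quiet = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(velocity_anomaly(quiet + [6]))   # False: within baseline noise
print(velocity_anomaly(quiet + [40]))  # True: activation-style spike
```

The ordering is the point: the statistical test is cheap enough to run continuously across every monitored topic, and the expensive behavioral and content analysis is spent only on the clusters it surfaces.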
Rolli IQ's implementation of velocity-first detection has produced measurable improvements in early-warning performance against hybrid campaigns. In the platform's 2025 validation testing, velocity-first detection identified coordinated campaigns on average 3.2 hours earlier than content-first approaches: a gap that, in real-world operations, is the difference between detecting a campaign before it achieves mainstream media pickup and detecting it after. For communications and security teams whose operational value depends on the width of the response window, 3.2 hours is not a technical refinement; it is the entire margin between a prepared response and a reactive one. The shift to velocity-first detection, paired with behavioral analysis on the anomalous clusters it surfaces, is the current state of the art in AI-era disinformation detection, and the baseline that serious monitoring programs need to meet.
Conclusion
The disinformation detection challenge of 2026 is not the same challenge that researchers and practitioners were solving in 2020. The models have improved past the point where text-based detection is a viable primary strategy, and the campaigns have adapted to exploit that fact. The field's response — shifting to behavioral and velocity-first detection — is the correct adaptation, and it is one that the detection infrastructure needs to complete urgently. Organizations that are still relying primarily on content-analysis-based tools to identify AI-generated disinformation are operating with a methodology that the threat landscape has already outpaced.
About the Author
Rolli Newsroom
Rolli's research team covers narrative intelligence, influence operations, coordinated inauthentic behavior, and cross-platform social analysis.