Coordinated inauthentic behavior — CIB — entered the public vocabulary in 2018 when Facebook's security team used the term in its first public takedown report. Since then, it has appeared in congressional testimony, platform transparency reports, academic research, and the briefings of communications and security professionals at hundreds of organizations. It is also, with notable frequency, misunderstood — used as a synonym for 'bots,' conflated with misinformation, or treated as interchangeable with other terms from the disinformation research vocabulary. This explainer is written for practitioners who need a precise, operational understanding of what CIB is, how it is distinguished from adjacent phenomena, and what detection looks like in practice.
The formal definition is the right starting point. Coordinated inauthentic behavior involves multiple accounts acting in concert (the coordination component) to artificially amplify narratives or manipulate public discourse (the behavior component) while concealing the coordinated nature of the activity (the inauthenticity component). All three elements are necessary. Activity that is coordinated but transparent, such as an advocacy organization's official accounts promoting its own content, is not CIB. Activity that is inauthentic but not coordinated, such as a single account posting misleading content, is not CIB. What distinguishes CIB from both is the combination: multiple actors, working together, pretending to be independent.
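For readers who think in code, the three-part test is a simple conjunction. The sketch below is purely illustrative, with invented field names; it models the definition, not any platform's policy logic:

```python
from dataclasses import dataclass

@dataclass
class ObservedActivity:
    # Illustrative fields modeling the definition, not a detection API.
    coordinated: bool   # multiple accounts acting in concert
    manipulative: bool  # artificially amplifying or steering discourse
    concealed: bool     # coordination hidden behind apparent independence

def is_cib(activity: ObservedActivity) -> bool:
    """All three elements are necessary; remove any one and it is not CIB."""
    return activity.coordinated and activity.manipulative and activity.concealed

# An advocacy org openly promoting its own content: coordinated but transparent.
assert not is_cib(ObservedActivity(coordinated=True, manipulative=True, concealed=False))
# A lone account posting misleading content: nothing coordinated to conceal.
assert not is_cib(ObservedActivity(coordinated=False, manipulative=True, concealed=False))
```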
What counts as inauthentic behavior
The 'inauthentic' component of CIB is the element most frequently misunderstood. Inauthenticity in the CIB context refers specifically to the concealment of coordinated origin — actors presenting themselves as independent when they are not, manufacturing the appearance of organic consensus where none exists. It does not refer to the truth-value of the content being amplified. A coordinated network of fake accounts can amplify a true news story. A single authentic individual with no coordination can spread fabricated information. These two phenomena have different causal structures, different operational implications, and require different responses. CIB is specifically about the former: manufactured independence, not manufactured facts.
The practical boundary that matters for detection is whether observed accounts behave the way independent humans would if each had reached the same conclusion on their own, or whether the behavioral patterns are explicable only by coordination. Organic users who share the same news story do so with temporal, language, and network variation that reflects their different starting points, schedules, and social networks. Coordinated accounts look different: they share a templated message in a narrow time window, cross-amplify each other at rates inconsistent with their apparent audience sizes, and show account creation dates clustered shortly before a trigger event. These patterns are inconsistent with organic, independent behavior and consistent with coordination.
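To make the first of these signals concrete, here is a minimal sketch of templated-burst detection. The input shape, the ten-minute window, and the five-account threshold are all illustrative assumptions, not calibrated values:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical input: an iterable of (account_id, timestamp, text) tuples,
# where timestamp is a datetime. Window and threshold are illustrative.
WINDOW = timedelta(minutes=10)
MIN_ACCOUNTS = 5

def normalize(text: str) -> str:
    """Crude normalization so lightly edited copies of a template collapse
    to the same key."""
    return " ".join(text.lower().split())

def templated_bursts(posts):
    """Flag messages posted by many distinct accounts inside a narrow window,
    a pattern organic sharing rarely produces."""
    by_text = defaultdict(list)
    for account, timestamp, text in posts:
        by_text[normalize(text)].append((timestamp, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()  # order by timestamp
        first, last = items[0][0], items[-1][0]
        accounts = {account for _, account in items}
        if len(accounts) >= MIN_ACCOUNTS and last - first <= WINDOW:
            flagged.append((text, sorted(accounts)))
    return flagged
```

Real detectors replace exact-text matching with fuzzier similarity measures, since operators routinely vary wording to defeat exact matches.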
The coordination component
Coordination in CIB operations exists on a spectrum from fully automated to semi-automated to human-directed. At the fully automated end, bot networks post and repost content programmatically, with posting schedules, content templates, and amplification targets set by operators. These operations leave the clearest behavioral signatures — mechanical regularity in posting cadence, identical or near-identical content, and account characteristics (age, follower ratios, posting history) that are inconsistent with organic human behavior. At the semi-automated end, human operators use software tools to schedule and coordinate posts across many accounts, introducing some variation to reduce signature legibility while maintaining the coordination that gives the operation its scale advantage.
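The mechanical-regularity signature can be quantified with something as simple as the coefficient of variation of an account's inter-post gaps. A rough heuristic sketch, where the interpretation is an assumption rather than an established cutoff:

```python
import statistics

def cadence_cv(timestamps):
    """Coefficient of variation of inter-post gaps, in seconds.

    Human posting tends to be bursty (CV well above 1); a scheduler firing
    every N minutes drives CV toward 0. Any cutoff (say, CV < 0.1 as
    'suspiciously regular') is an illustrative assumption, not a standard.
    """
    timestamps = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None  # not enough posts to measure regularity
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean > 0 else 0.0
```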
The hardest form of coordination to detect is human-directed networks: teams of real people operating multiple accounts in coordination, following messaging guidelines, and timing their activity to appear organic. These operations — often called 'troll farms' or 'influence farms' in research literature — generate behavioral signatures that are closer to organic activity but still detectable through network topology analysis, cross-platform correlation, and the statistical improbability of independent actors converging so precisely on the same frames, timing, and amplification targets. Rolli IQ's detection methodology focuses on the network and timing signatures that persist even in human-directed operations, rather than relying solely on the content-level analysis that is easily defeated by varied messaging.
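One way to operationalize the network-topology analysis mentioned above is a co-amplification graph: connect accounts that repeatedly boost the same posts, then surface dense clusters for human review. The sketch below makes simplifying assumptions about input shape and thresholds, and it is not a description of Rolli IQ's actual pipeline:

```python
import itertools
from collections import defaultdict

import networkx as nx

def coordination_candidates(amplifications, min_shared=10, min_size=3):
    """amplifications: iterable of (account_id, amplified_post_id) pairs.

    Accounts that share many amplification targets get an edge; connected
    components of the thresholded graph are coordination candidates for
    human review. Both thresholds are illustrative, not calibrated.
    """
    amplifiers = defaultdict(set)   # post_id -> accounts that boosted it
    for account, post in amplifications:
        amplifiers[post].add(account)

    shared = defaultdict(int)       # (acct_a, acct_b) -> count of shared posts
    for accounts in amplifiers.values():
        for a, b in itertools.combinations(sorted(accounts), 2):
            shared[(a, b)] += 1

    graph = nx.Graph()
    for (a, b), count in shared.items():
        if count >= min_shared:
            graph.add_edge(a, b, weight=count)

    return [c for c in nx.connected_components(graph) if len(c) >= min_size]
```

The design choice here matters for human-directed operations: varied wording defeats content matching, but the accounts still have to amplify the same targets to achieve the operation's goals, so the graph structure persists.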
Detection approaches used by platforms and researchers
Platform detection of CIB has evolved significantly since 2016, but it operates under structural constraints that limit its effectiveness. Platforms detect CIB primarily through signals that require observation over time: account behavioral patterns accumulated across many posts, network relationships established through follow/amplification history, and technical infrastructure signals (shared IP addresses, device fingerprints, and API patterns) that indicate coordinated operation. These signals require data that takes time to accumulate and that most external researchers cannot access. Platform takedowns therefore typically occur weeks to months after a campaign activates — long after the campaign has achieved its primary operational objectives.
Independent detection approaches, the kind used by academic researchers and commercial platforms like Rolli IQ, focus on the behavioral signatures observable in public data: posting timing, language patterns, network structure, and cross-platform correlation. The key methodological insight is that coordination leaves statistical traces even when individual posts appear organic. The probability that multiple independent actors would post near-identical content within narrow time windows, cross-amplify each other at the observed rates, and show account activation clustered in the same short window before a trigger event is low enough to constitute positive evidence of coordination, without requiring the infrastructure signals that only platforms can see. This public-data approach is necessarily probabilistic; it produces confidence levels rather than certainties. But those confidence levels are operationally useful for communications and security teams that need to make response decisions before platform enforcement delivers a definitive verdict.
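The "low enough probability" reasoning can be made concrete with a simple null model. Suppose, purely for illustration, that each monitored account independently posts a given story within a given window with some small baseline probability p; the chance that at least k of n accounts do so by luck alone is then a binomial tail:

```python
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    independent accounts hit the same narrow window by coincidence."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))

# Made-up numbers: 40 of 50 monitored accounts share the same story inside
# one 10-minute window, against a ~2% per-account baseline.
print(binomial_tail(50, 40, 0.02))  # vanishingly small; strong evidence of coordination
```

In practice the baseline p must be estimated from each account's history, and a small tail probability is a confidence signal to weigh alongside other evidence, not a verdict.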
Conclusion
CIB is not a technical curiosity for platform trust-and-safety teams. It is an active threat to organizational decision-making, public discourse, and democratic information integrity. The practitioners — communications professionals, security analysts, researchers, and journalists — who understand precisely what it is, how it differs from adjacent phenomena, and what its behavioral signatures look like are better equipped to detect it, respond to it appropriately, and explain it accurately to the stakeholders who need to act on that understanding. The vocabulary matters because precision in diagnosis is the prerequisite for precision in response.