ARDA · Discovery Mode IV

Causal Dynamics Engine

Discovers directed causal structures from observational and interventional data. Separates genuine causation from correlation. Designs experiments to resolve ambiguity. Reports what the evidence can and cannot support.

Patent pending in the United States and other countries.

The Problem

Correlation is not causation.
CDE makes the distinction.

Standard models reveal associations. They cannot tell you what would happen if you intervened, what you should measure next, or where the evidence runs out. CDE can.

Cause vs. correlation

When gene A and protein B co-vary, a standard model cannot tell whether A causes B, B causes A, or both are driven by an unmeasured confounder. CDE recovers directed causal edges — not undirected associations.

Experiment design under ambiguity

When the observational data leaves multiple causal graphs equally plausible, CDE designs targeted experiments that maximally reduce structural uncertainty — so you spend resources on the measurements that matter most.

Intervention consequences

Predicting what happens when a mechanism is perturbed requires causal structure, not correlation. CDE identifies what would change — and what would not — if an upstream variable were intervened upon.
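
The distinction can be made concrete with a toy model. In the sketch below (a hand-written linear system, purely illustrative — CDE's dynamics are learned, and the `simulate` function and `do` argument are not CDE's API), a hard intervention clamps one variable and the effects propagate only along directed edges:

```python
import numpy as np

def simulate(A, x0, steps, do=None):
    """Toy linear causal dynamics x_{t+1} = A @ x_t with an optional hard
    intervention do=(index, value) that holds one variable fixed."""
    x = np.array(x0, float)
    for _ in range(steps):
        x = A @ x                  # contributions flow along directed edges
        if do is not None:
            x[do[0]] = do[1]       # do-intervention: clamp the variable
    return x
```

Intervening on an upstream variable changes its downstream targets; intervening downstream leaves the upstream variable untouched — exactly the asymmetry a correlation cannot express.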

Identifiability and honest limits

Not every causal question is answerable from the available data. CDE explicitly reports what the current data can and cannot distinguish, so you know exactly where the evidence is strong and where ambiguity remains.

Architecture

How CDE works

CDE learns system dynamics and causal structure simultaneously through a dual-representation architecture. It does not fit a causal graph to a frozen model — the causal structure and the dynamics co-evolve.

01

Dual dynamics

CDE separates system dynamics into two components: a universal background that captures smooth, domain-independent behavior, and a directed causal component that flows along discovered connections. If a connection is absent, no causal contribution is transmitted.
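
The decomposition above can be sketched as follows. All names (`f_base`, `edge_weights`, `edge_fns`) are illustrative stand-ins, not CDE's internals; the point is only that each directed contribution is gated by its edge:

```python
import numpy as np

def dual_dynamics(x, f_base, edge_weights, edge_fns):
    """Sketch of the dual decomposition: dx/dt = background + gated causal terms."""
    dxdt = f_base(x)                        # universal background dynamics
    n = len(x)
    for i in range(n):
        for j in range(n):
            w = edge_weights[i, j]          # gate for directed edge j -> i
            if w != 0.0:                    # an absent edge transmits nothing
                dxdt[i] += w * edge_fns[i][j](x[j])
    return dxdt
```

Zeroing an edge weight removes its contribution exactly, which is what makes each claimed edge testable against the dynamics.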

02

Bayesian graph belief

CDE maintains a posterior distribution over possible causal structures. Rather than committing to a single graph, the system carries uncertainty across all plausible wirings and reports calibrated edge probabilities.
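
For a very small system, the idea can be shown by brute-force enumeration (a hypothetical stand-in for CDE's belief system, which does not enumerate graphs at scale): average each adjacency over the posterior to get per-edge marginal probabilities.

```python
import numpy as np

def edge_marginals(log_posterior, n_nodes):
    """Edge marginals from an unnormalized log posterior over adjacency
    patterns. log_posterior: dict mapping flattened adjacency tuples to
    unnormalized log probability. Enumeration only works for tiny n."""
    graphs = list(log_posterior.keys())
    logs = np.array([log_posterior[g] for g in graphs])
    probs = np.exp(logs - logs.max())
    probs /= probs.sum()                    # normalize the posterior
    marg = np.zeros((n_nodes, n_nodes))
    for g, p in zip(graphs, probs):
        marg += p * np.array(g).reshape(n_nodes, n_nodes)
    return marg                             # per-edge probabilities
```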

03

Active experiment design

An information-theoretic probe ranker scores candidate experiments by how much they would reduce structural uncertainty. The result is a ranked list of experiments ordered by expected information gain.
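
The ranking criterion is standard expected information gain: the mutual information between an experiment's outcome and the graph. A minimal sketch, assuming a discrete posterior and per-experiment outcome likelihoods (`likelihood[e][g][o]` is a hypothetical interface, not CDE's):

```python
import numpy as np

def rank_probes(posterior, likelihood):
    """Rank experiments by expected reduction in graph entropy (in bits)."""
    def entropy(p):
        p = np.asarray(p, float)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    h_prior = entropy(posterior)
    scores = {}
    for e, lik in likelihood.items():
        p_o = np.einsum('g,go->o', posterior, lik)   # P(o | e)
        h_post = 0.0
        for o, po in enumerate(p_o):
            if po > 0:
                post = posterior * lik[:, o] / po    # Bayes update for outcome o
                h_post += po * entropy(post)
        scores[e] = h_prior - h_post                 # expected information gain
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

An experiment whose outcome fully discriminates two equally plausible graphs scores one bit; an uninformative one scores zero, so resources go to the former.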

04

Conservative claim emission

Causal edges are reported only when evidence is strong. Every claim carries provenance, survives negative controls, and is gated by identifiability and path-law diagnostics. When evidence is insufficient, CDE emits an IndeterminacyClaim instead of a weak CausalClaim.
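
The gating logic can be sketched as a simple decision rule. The claim fields, control names, and thresholds below are assumptions drawn from the descriptions on this page, not CDE's documented schema:

```python
from dataclasses import dataclass

@dataclass
class CausalClaim:
    edges: dict              # (src, dst) -> confidence
    provenance: str
    controls_passed: list

@dataclass
class IndeterminacyClaim:
    entropy: float
    recommended_probes: list

def emit_claim(edge_conf, entropy, controls_passed, provenance="run",
               conf_floor=0.9, entropy_cap=0.5):
    """Emit a CausalClaim only for strong, control-surviving evidence;
    otherwise fall back to an IndeterminacyClaim."""
    strong = {e: c for e, c in edge_conf.items() if c >= conf_floor}
    required = {"null_graph", "edge_permutation", "placebo_intervention"}
    if strong and entropy <= entropy_cap and required <= set(controls_passed):
        return CausalClaim(strong, provenance, list(controls_passed))
    return IndeterminacyClaim(entropy=entropy,
                              recommended_probes=list(edge_conf))
```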

05

Trust and governance

Identifiability analysis, path-law validation, and out-of-distribution monitoring run continuously. Claims carry confidence caps that tighten automatically when distribution shift is detected or when path fidelity falls below thresholds.

Dual Representation

Dynamics and structure, learned together

Most causal discovery methods work on static snapshots or require a fixed model before graph inference begins. CDE takes a different approach: it learns continuous dynamics and causal structure as a single, coupled system.

The base component captures smooth, universal dynamics — the behavior that would exist with no causal interactions. The causal component adds directed contributions that flow through discovered connections. Remove an edge, and the corresponding contribution vanishes. This separation means every causal claim is grounded in a specific, testable dynamic contribution.

Base Component

  • Smooth universal dynamics
  • Time-invariant background behavior
  • Stability analysis and diagnostics
  • Domain-agnostic background physics

Causal Component

  • Directed causal contributions along discovered edges
  • Gating per directed connection
  • Causal influence analysis
  • Contributions vanish when edges are removed

Belief System

  • Posterior over graph structures
  • Calibrated edge probabilities
  • Uncertainty quantification per edge
  • Importance scoring with built-in validation

Typed Outputs

Every claim is structured, traceable, and falsifiable

CDE does not produce a single score or an opaque prediction. It emits typed claims — each with provenance, confidence, and the negative controls it survived.

CausalClaim

Directed causal graph with confidence per edge, entropy, node count, and a list of falsifiers that the claim survived.

ExperimentRecommendation

Ranked probe actions with information gain scores, target edge, and human-readable rationale for each proposed experiment.

IndeterminacyClaim

Emitted when evidence is insufficient. Reports candidate graphs, entropy, recommended probes, and a recipe for the next run to resolve ambiguity.

IdentifiabilityClaim

Assesses whether the causal structure is identifiable from the current data. Reports excitation score, intervention coverage, ambiguity score, and weak edges.

PathLawClaim

Validates the learned dynamics against causal structure. Reports path fidelity score, transition mismatch, intervention-response mismatch, and entropy calibration status.

OODResponseClaim

Monitors distribution shift at serving time. Reports severity, trigger type, recommended action, and a confidence cap applied to downstream claims.

Negative Controls

Every causal claim must survive deliberate sabotage

CDE does not simply assert that a causal edge exists. It systematically breaks the causal structure and checks that the model degrades in the ways it should. If a claim survives these controls, the evidence is credible. If it does not, CDE does not report it.

These are not optional post-hoc checks. They run as part of the discovery pipeline and gate claim emission. A CausalClaim that has not passed negative controls is never surfaced.

Null graph

Zeros out the entire causal adjacency. If the causal field contributed real structure, prediction quality must degrade measurably. If it does not, the causal claims are not credible.

Edge permutation

Shuffles edge weights while preserving marginal statistics. Tests whether the specific wiring — not just the presence of edges — carries information. A genuine causal graph is not exchangeable.

Placebo intervention

Randomizes the order of interventions across episodes and re-runs the pipeline on the shuffled data. A valid causal model should produce substantially fewer confident claims from placebo data.

API

Programmatic access to every capability

Every CDE operation is available through ARDA's REST API. Investigate, recommend, intervene, decompose, predict — all as structured API calls that return typed responses.

Method  Endpoint
POST    /v1/cde/investigate
POST    /v1/cde/recommend_experiment
POST    /v1/cde/apply_intervention
POST    /v1/cde/decompose
POST    /v1/cde/causal_influence
POST    /v1/cde/predict
GET     /v1/cde/belief_history
GET     /v1/cde/theory_revision
POST    /v1/cde/evaluate_identifiability
POST    /v1/cde/evaluate_path_law
POST    /v1/cde/evaluate_ood
GET     /v1/cde/trust_state
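
A call against the investigate endpoint might look like the sketch below. The base URL is a placeholder, and the request body's field names and auth scheme are assumptions — consult the ARDA API reference for the actual schema:

```python
import requests

BASE = "https://api.example.com"   # placeholder host; substitute your deployment

def build_investigate_request(series, interventions, token):
    """Assemble the POST /v1/cde/investigate call. JSON field names here
    are illustrative, not a documented contract."""
    return {
        "url": f"{BASE}/v1/cde/investigate",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"observations": series, "interventions": interventions},
    }

def investigate(series, interventions, token):
    req = build_investigate_request(series, interventions, token)
    resp = requests.post(req["url"], headers=req["headers"],
                         json=req["json"], timeout=60)
    resp.raise_for_status()
    return resp.json()   # typed claims: CausalClaim, IndeterminacyClaim, ...
```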

Part of ARDA

Four modes. One engine. Composable results.

CDE is ARDA's fourth discovery mode. It shares the same typed-claim infrastructure, evidence ledger, and governance framework as Symbolic, Neural, and Neuro-Symbolic modes. Results compose across modes because they share a common scientific contract.

Symbolic

Closed-form equations and conservation laws

Neural

Neural differential equations and latent dynamics

Neuro-Symbolic

Neural + symbolic distillation for interpretable laws

CDE

Directed causal structures from observational and interventional data

A research program may use Symbolic mode to discover a conservation law, Neural mode to represent a complex boundary condition, and CDE to establish the causal pathway connecting an upstream variable to a downstream outcome. These results compose because they share the same typed-claim infrastructure and evidence ledger.

Causation, not correlation.

CDE is available as part of ARDA. Contact us to discuss your causal discovery requirements.

Causal Dynamics Engine (CDE) is patent pending in the United States and other countries. Vareon, Inc.