The AI engine turns raw signals into explanations and risk classifications. It is designed to be conservative and transparent.

Model families

  • Classification models for phishing, malware, and token approval abuse.
  • Sequence models for transaction intent and value movement.
  • Graph models for address reputation and interaction risk.
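
To make the division of labor concrete, here is a minimal TypeScript sketch of plausible output shapes for the three families and a conservative way to combine them. All type and field names are illustrative assumptions, not ChainGuard's actual API.

```typescript
type ThreatClass = "phishing" | "malware" | "approval_abuse";

interface ClassificationResult {
  threat: ThreatClass;
  probability: number; // 0..1
}

interface SequenceResult {
  intent: string;            // e.g. "swap", "approve", "transfer"
  valueMovementRisk: number; // 0..1
}

interface GraphResult {
  addressReputation: number; // -1 (malicious) .. 1 (trusted)
  interactionRisk: number;   // 0..1
}

// Conservative combination: the overall score tracks the single
// strongest signal, so one confident detection is never averaged
// away by benign signals elsewhere.
function overallRisk(
  cls: ClassificationResult[],
  seq: SequenceResult,
  graph: GraphResult,
): number {
  const classRisk = Math.max(0, ...cls.map((c) => c.probability));
  return Math.max(classRisk, seq.valueMovementRisk, graph.interactionRisk);
}
```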

Inputs

  • URL and DOM signals from the active page
  • Transaction intent and decoded call data
  • Public contract metadata and bytecode features
  • Reputation signals for addresses and domains
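
For illustration, the combined signal bundle might look like the following; every field name here is an assumption made for this sketch, not the engine's real schema.

```typescript
// Hypothetical shape of the signal bundle the engine consumes.
interface EngineInputs {
  page: {
    url: string;
    domSignals: string[];       // e.g. hidden iframes, suspicious form targets
  };
  transaction: {
    to: string;
    selector: string;           // 4-byte function selector from the call data
    decodedArgs: unknown[];     // decoded arguments, when an ABI is available
  };
  contract: {
    verifiedSource: boolean;    // from public contract metadata
    bytecodeFeatures: number[]; // numeric features extracted from bytecode
  };
  reputation: {
    addressScore: number;       // -1 (malicious) .. 1 (trusted)
    domainScore: number;        // same scale, for the active page's domain
  };
}

// Example bundle: an unverified contract reached from a low-reputation domain.
const inputs: EngineInputs = {
  page: { url: "https://example-dapp.xyz", domSignals: ["hidden-iframe"] },
  transaction: { to: "0xabc…", selector: "0x095ea7b3", decodedArgs: [] },
  contract: { verifiedSource: false, bytecodeFeatures: [0.2, 0.7] },
  reputation: { addressScore: -0.4, domainScore: -0.8 },
};
```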

Processing stages

  1. Feature extraction normalizes inputs for models.
  2. Classification models estimate the likelihood of threat types.
  3. A rule layer applies guardrails for blocking and warnings.
  4. An explanation layer generates human-readable summaries.
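
A minimal end-to-end sketch of the four stages, with the models stubbed out; the function names, thresholds, and toy scoring below are all assumptions made for illustration, not the engine's actual logic.

```typescript
type Verdict = "allow" | "warn" | "block";

interface Decision {
  verdict: Verdict;
  summary: string;
  scores: Record<string, number>;
}

// 1. Feature extraction: normalize raw signals into a common 0..1 range.
function extractFeatures(raw: { domainScore: number; addressScore: number }): number[] {
  return [raw.domainScore, raw.addressScore].map((v) => (v + 1) / 2);
}

// 2. Classification: estimate per-threat likelihoods (stubbed for the sketch).
function classify(features: number[]): Record<string, number> {
  const risk = 1 - Math.min(...features); // low reputation -> high risk
  return { phishing: risk, approval_abuse: risk * 0.5 };
}

// 3. Rule layer: fixed guardrail thresholds decide the verdict.
function applyGuardrails(scores: Record<string, number>): Verdict {
  const top = Math.max(...Object.values(scores));
  if (top >= 0.9) return "block";
  if (top >= 0.5) return "warn";
  return "allow";
}

// 4. Explanation layer: summarize the strongest signal behind the verdict.
function explain(scores: Record<string, number>, verdict: Verdict): string {
  const [threat, score] = Object.entries(scores).sort((a, b) => b[1] - a[1])[0];
  return `${verdict}: strongest signal is ${threat} (${score.toFixed(2)})`;
}

function evaluate(domainScore: number, addressScore: number): Decision {
  const features = extractFeatures({ domainScore, addressScore });
  const scores = classify(features);
  const verdict = applyGuardrails(scores);
  return { verdict, scores, summary: explain(scores, verdict) };
}

console.log(evaluate(-0.8, 0.2).summary); // "block: strongest signal is phishing (0.90)"
```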

Model governance

  • Models are versioned and can be rolled back
  • Automated checks verify stability before release
  • High-impact decisions, such as blocking, require agreement from multiple independent signals
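
As a sketch of how these guarantees might be encoded, assuming a manifest-per-release scheme; the fields and the check below are illustrative, not the actual release process.

```typescript
// Hypothetical versioned model manifest.
interface ModelManifest {
  name: string;
  version: string;    // pinned per release, enabling rollback to a prior version
  checksum: string;   // integrity check before the model is loaded
  minSignals: number; // high-impact verdicts require at least this many signals
}

// Pre-release stability check: a candidate must not regress against the
// current baseline on a pinned evaluation set beyond a small tolerance.
function passesStabilityCheck(
  candidatePrecision: number,
  baselinePrecision: number,
  tolerance = 0.01,
): boolean {
  return candidatePrecision >= baselinePrecision - tolerance;
}
```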

Explainability

You always see a short summary of the strongest signals that drove a decision. When available, you can expand a detailed view to see which rules fired and how each model scored.
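
The payload backing that view might look like the sketch below, with a mandatory summary and optional detail; the field names are assumptions for illustration.

```typescript
// Illustrative explanation payload: the summary is always present,
// the detailed view only when the underlying signals are available.
interface Explanation {
  summary: string;
  details?: {
    triggeredRules: string[];            // rules that fired in the rule layer
    modelScores: Record<string, number>; // per-threat model likelihoods
  };
}

const example: Explanation = {
  summary: "Blocked: this domain closely imitates a known exchange.",
  details: {
    triggeredRules: ["lookalike-domain", "unverified-contract"],
    modelScores: { phishing: 0.97, approval_abuse: 0.12 },
  },
};
```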

Evaluation and monitoring

The engine logs model performance metrics such as precision and false positive rate. You can expect retraining and model upgrades to roll out gradually and remain reversible.
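
For reference, both metrics reduce to simple ratios over labeled outcomes; the sketch below assumes a labeled sample of past decisions is available.

```typescript
// Confusion-matrix counts for a labeled sample of past decisions.
interface Counts {
  tp: number; // real threats that were flagged
  fp: number; // safe actions flagged in error
  fn: number; // real threats that were missed
  tn: number; // safe actions correctly allowed
}

const precision = (c: Counts) => c.tp / (c.tp + c.fp);
const falsePositiveRate = (c: Counts) => c.fp / (c.fp + c.tn);

const sample: Counts = { tp: 90, fp: 5, fn: 10, tn: 895 };
console.log(precision(sample).toFixed(3));         // "0.947"
console.log(falsePositiveRate(sample).toFixed(3)); // "0.006"
```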

Fallback behavior

When model confidence is low, ChainGuard biases toward warning rather than blocking. This keeps you in control while still surfacing the potential risk.
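
In code, the fallback is a simple downgrade rule; the thresholds below are illustrative assumptions, not published values.

```typescript
type Verdict = "allow" | "warn" | "block";

// Below the confidence threshold, a would-be block is downgraded to a
// warning so the user keeps the final say.
function withFallback(riskScore: number, confidence: number): Verdict {
  const provisional: Verdict =
    riskScore >= 0.9 ? "block" : riskScore >= 0.5 ? "warn" : "allow";
  if (provisional === "block" && confidence < 0.7) return "warn";
  return provisional;
}

console.log(withFallback(0.95, 0.4)); // "warn" — high risk, but low confidence
```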

Next steps