Agents can spend money. xBPP lets them prove they should. Define spending policies as data - every transaction is evaluated before execution.
```typescript
import { evaluate, balanced } from '@vanar/xbpp'

// On-chain transfer
const verdict = evaluate(
  { amount: 500, currency: 'USDC', recipient: '0x1a2b...3c4d' },
  balanced
)

// verdict.decision → 'ESCALATE'
// verdict.reasons  → ['ABOVE_ESCALATION_THRESHOLD']
// verdict.message  → 'Amount exceeds escalation threshold'
```

```shell
npm install @vanar/xbpp
```

xBPP (Execution Boundary Permission Protocol) provides a verifiable way for autonomous AI agents to prove they're operating within defined spending boundaries. It answers one question: "Should this agent be allowed to spend this money?"
The protocol separates what to check (the spec) from how to enforce (the SDK). Policies are declarative JSON - portable across any implementation, any chain, any runtime.
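As an illustration of the declarative style, a policy might look like the object below. The field names (`maxAmount`, `escalateAbove`, `allowedCurrencies`) are hypothetical stand-ins, not the spec's actual wire format:

```typescript
// A hypothetical policy in the declarative style the protocol describes:
// plain JSON data, no code, portable across implementations and runtimes.
// Field names are illustrative assumptions, not taken from the xBPP spec.
type Policy = {
  maxAmount: number           // hard ceiling: BLOCK above this
  escalateAbove: number       // grey zone: ESCALATE above this
  allowedCurrencies: string[] // anything else is out of bounds
}

const policy: Policy = {
  maxAmount: 1000,
  escalateAbove: 250,
  allowedCurrencies: ['USDC'],
}

// Because the policy is pure data, it serializes losslessly and can be
// shared between implementations, chains, and runtimes unchanged.
const wire = JSON.stringify(policy)
```

Keeping the policy as data rather than code is what makes it auditable: any implementation can parse it, and no runtime has a privileged interpretation.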
Every evaluation returns one of three verdicts: ALLOW, BLOCK, or ESCALATE. The third option is what makes autonomous agents safe - they can ask for help instead of guessing.
Declarative JSON policies that any implementation can evaluate. No vendor lock-in, no proprietary runtimes.
The reference SDK is 600 lines with zero runtime dependencies. Embed it anywhere - browsers, servers, edge functions.
A 1,760-line RFC-style spec covering evaluation phases, threat model, and wire format.
Read the Spec

TypeScript reference implementation with 12 policy checks, 3 presets, and x402 integration.

View SDK Docs

Interactive spec simulator - build policies, run scenarios, and see verdicts in real time.

Try the Playground

ALLOW - Transaction is within all policy bounds. Proceed automatically - no human needed.

BLOCK - Transaction violates a hard limit. Stop immediately and return reason codes.

ESCALATE - Transaction is in a grey zone. Pause and ask a human before proceeding.
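In an agent loop, the three verdicts map naturally onto a three-way branch. A minimal sketch, assuming a verdict shape like the one shown earlier - the `dispatch` function and its return strings are hypothetical, not part of the SDK:

```typescript
type Decision = 'ALLOW' | 'BLOCK' | 'ESCALATE'

interface Verdict {
  decision: Decision
  reasons: string[]
  message: string
}

// Hypothetical dispatch: execute, abort, or hand off to a human.
// Because Decision is a closed union, the switch is exhaustive and the
// compiler will flag any verdict left unhandled.
function dispatch(verdict: Verdict): string {
  switch (verdict.decision) {
    case 'ALLOW':
      return 'executed' // within bounds: proceed automatically
    case 'BLOCK':
      return `aborted: ${verdict.reasons.join(', ')}` // hard limit hit
    case 'ESCALATE':
      return `awaiting approval: ${verdict.message}` // pause for a human
  }
}

dispatch({
  decision: 'ESCALATE',
  reasons: ['ABOVE_ESCALATION_THRESHOLD'],
  message: 'Amount exceeds escalation threshold',
})
// → 'awaiting approval: Amount exceeds escalation threshold'
```

The point of the structure is that ESCALATE is a first-class outcome, not an error path: the agent's default under uncertainty is to pause, not to guess.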