Integration Guide
This is the hands-on guide to wiring xBPP into an agent built on the OpenAI Agents SDK. If you want the conceptual comparison, see xBPP vs OpenAI Agents SDK. If you want code you can copy into your project right now, keep reading.
If you're starting from scratch, follow the OpenAI Agents SDK quick start first and come back once you have an agent that runs end-to-end.
```shell
npm install @vanar/xbpp
```

Zero runtime dependencies. It adds about 20 KB to your bundle.
Drop this into `policies/openai-agent.json`. It's a reasonable starting policy - tweak the numbers for your use case.
```json
{
  "name": "openai-agent-default",
  "version": "1",
  "checks": [
    { "type": "hard_cap_per_transaction", "amount_usd": 250 },
    { "type": "daily_budget", "amount_usd": 1000 },
    { "type": "escalation_threshold", "amount_usd": 50 },
    { "type": "currency_allowlist", "currencies": ["USDC", "USD"] },
    { "type": "recipient_freshness", "action": "escalate", "days": 14 },
    { "type": "velocity_limit", "max_per_hour": 20 }
  ]
}
```

The intent: block any single payment over $250, cap daily spend at $1,000, escalate anything above $50 for human approval, allow only USDC and USD, escalate payments to recipients first seen within the last 14 days, and limit throughput to 20 payments per hour.
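To make the verdict semantics concrete, here is a minimal mock of how the hard cap and escalation threshold interact - an illustrative sketch only, not the actual `@vanar/xbpp` evaluator, which handles all six check types and the full policy format:

```typescript
// Illustrative mock: shows the decision ordering for two of the checks
// above. The real @vanar/xbpp evaluate() covers all check types.
type Decision = 'ALLOW' | 'ESCALATE' | 'BLOCK'

interface Tx {
  amount: number
  currency: string
  recipient: string
}

function mockEvaluate(tx: Tx): { decision: Decision; reasons: string[] } {
  // hard_cap_per_transaction: $250 - anything above is blocked outright
  if (tx.amount > 250) {
    return { decision: 'BLOCK', reasons: ['hard_cap_per_transaction'] }
  }
  // escalation_threshold: $50 - above this, pause for human approval
  if (tx.amount > 50) {
    return { decision: 'ESCALATE', reasons: ['escalation_threshold'] }
  }
  return { decision: 'ALLOW', reasons: [] }
}
```

Note the ordering: the block check wins over escalation, so a $5,000 payment is refused rather than queued for approval.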
Replace your existing payment tool's `execute` function with one that evaluates the policy first.
```ts
import { Agent, tool } from '@openai/agents'
import { evaluate } from '@vanar/xbpp'
import policy from './policies/openai-agent.json'
import { executePayment } from './payments'
import { humanApproval } from './approvals'

export const payTool = tool({
  name: 'pay',
  description: 'Send a USDC payment to a recipient',
  parameters: {
    type: 'object',
    properties: {
      amount: { type: 'number', description: 'Amount in USDC' },
      recipient: { type: 'string', description: 'Recipient address' }
    },
    required: ['amount', 'recipient']
  },
  async execute({ amount, recipient }) {
    const tx = { amount, currency: 'USDC' as const, recipient }
    const verdict = evaluate(tx, policy)
    if (verdict.decision === 'BLOCK') {
      return {
        status: 'blocked',
        reasons: verdict.reasons,
        message: verdict.message,
        suggestion: 'Try a smaller amount or a different recipient.'
      }
    }
    if (verdict.decision === 'ESCALATE') {
      const approved = await humanApproval.request({ tx, verdict })
      if (!approved) {
        return { status: 'declined_by_human' }
      }
    }
    const result = await executePayment(tx)
    return { status: 'sent', txHash: result.hash }
  }
})
```

Key things to notice:
- `evaluate()` runs before any payment code - a BLOCK verdict never reaches `executePayment()`.
- A BLOCK returns a structured result instead of throwing, so the agent can explain the refusal and suggest an alternative.
- An ESCALATE pauses for human approval and proceeds only if it's granted.

Then register the tool on your agent as usual:

```ts
import { Agent } from '@openai/agents'
import { payTool } from './tools/pay'

export const agent = new Agent({
  name: 'research-agent',
  model: 'gpt-4o',
  instructions: `
    You help users by buying data reports and API credits.
    Keep purchases minimal and justified.
  `,
  tools: [payTool]
})
```

That's the whole integration.
Before deploying, confirm all three verdicts work in development:
```ts
// Test ALLOW:
await agent.run('Buy the $5 market report from example.com')
// Expected: payment succeeds, agent confirms

// Test BLOCK:
await agent.run('Buy the $5000 enterprise package')
// Expected: policy blocks, agent explains why, suggests smaller amount

// Test ESCALATE:
await agent.run('Buy the $150 premium dataset')
// Expected: ESCALATE threshold hit, human approval requested
```

If all three paths behave correctly, you're ready to ship.
This is the step most teams skip and regret. Every verdict - especially the ALLOWs - is a valuable signal for tuning your policy. Add logging before you return from the tool:
```ts
async execute({ amount, recipient }) {
  const tx = { amount, currency: 'USDC' as const, recipient }
  const verdict = evaluate(tx, policy)
  logger.info('xbpp.verdict', {
    decision: verdict.decision,
    reasons: verdict.reasons,
    amount,
    recipient: recipient.slice(0, 8) + '...',
    policy_version: policy.version
  })
  // ... rest of the handler
}
```

Ship the logs wherever you already run observability - Datadog, Honeycomb, Axiom, Grafana Cloud, whatever. After a week you'll have enough data to tighten the policy where it's over-permissive and loosen it where it's over-blocking.
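For a quick look before a dashboard exists, a few lines can tally decisions from exported log lines. This sketch assumes one JSON object per line with a `decision` field matching the logging payload above:

```typescript
// Tally verdict decisions from exported log lines
// (one JSON object per line, with a `decision` field).
function summarizeVerdicts(lines: string[]): Record<string, number> {
  const counts: Record<string, number> = {}
  for (const line of lines) {
    const { decision } = JSON.parse(line)
    counts[decision] = (counts[decision] ?? 0) + 1
  }
  return counts
}
```

A high BLOCK count on small amounts usually means the caps are too tight; a long run of nothing but ALLOWs may mean they're too loose.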
If your agent has several money-moving tools, load the policy once and import it into each tool. Shared policy means one place to change rules:
```ts
import sharedPolicy from './policies/openai-agent.json'

export const buyCreditsTool = tool({
  name: 'buy_credits',
  // ...
  async execute(args) {
    const verdict = evaluate(toTx(args), sharedPolicy)
    // ...
  }
})

export const bookHotelTool = tool({
  name: 'book_hotel',
  // ...
  async execute(args) {
    const verdict = evaluate(toTx(args), sharedPolicy)
    // ...
  }
})
```

Each tool maps its own arguments into the common transaction shape that `evaluate()` expects. Everything else is identical.
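One plausible shape for those mappers - the argument names (`credits`, `pricePerCredit`, `nightlyRate`, and so on) are hypothetical and should be replaced with your tools' actual parameters:

```typescript
// The common transaction shape evaluate() expects.
interface Tx {
  amount: number
  currency: 'USDC'
  recipient: string
}

// Hypothetical mapper for the buy_credits tool: derive the total
// amount from quantity and unit price, pay the vendor's address.
function creditsToTx(args: { credits: number; pricePerCredit: number; vendor: string }): Tx {
  return { amount: args.credits * args.pricePerCredit, currency: 'USDC', recipient: args.vendor }
}

// Hypothetical mapper for the book_hotel tool.
function hotelToTx(args: { nightlyRate: number; nights: number; hotelWallet: string }): Tx {
  return { amount: args.nightlyRate * args.nights, currency: 'USDC', recipient: args.hotelWallet }
}
```

The point of the mapper is that the policy always sees a real dollar amount, even when the tool's own arguments express price indirectly.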
Want to update the policy without redeploying? Read the policy file on every call (or cache with a short TTL):
```ts
import { readFile } from 'node:fs/promises'

let cachedPolicy: any
let cachedAt = 0
const TTL_MS = 60_000

async function getPolicy() {
  if (Date.now() - cachedAt > TTL_MS) {
    cachedPolicy = JSON.parse(await readFile('./policies/openai-agent.json', 'utf8'))
    cachedAt = Date.now()
  }
  return cachedPolicy
}

// Inside the tool:
const verdict = evaluate(tx, await getPolicy())
```

Push a new policy file to your server, and within 60 seconds every agent call is using the new rules. No downtime, no restart.
Does it work with streaming runs? Yes. The tool's `execute` function runs the same way whether you use the streaming or non-streaming run API.

Where does the policy check run? In your code. xBPP never contacts OpenAI, never sees any prompts, and never influences model output. It runs deterministically at the moment the SDK dispatches your tool.

How much latency does it add? Sub-millisecond for typical policies. Evaluation is local, synchronous, and makes no network calls.

Can I use it without the Agents SDK? Yes. Function-calling tools work the same way - the policy check goes in the function's implementation.
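In a plain function-calling loop, that means checking the policy right after parsing the tool call's JSON arguments. A sketch, with `evaluate` stubbed in since the call site is the only xBPP-specific part (in your project, import it from `@vanar/xbpp` instead):

```typescript
// Stub standing in for @vanar/xbpp's evaluate() - replace with the
// real import in your project.
type Verdict = { decision: 'ALLOW' | 'ESCALATE' | 'BLOCK'; reasons: string[] }
const evaluate = (tx: { amount: number }): Verdict =>
  tx.amount > 250
    ? { decision: 'BLOCK', reasons: ['hard_cap_per_transaction'] }
    : { decision: 'ALLOW', reasons: [] }

// Handle one entry from the model's tool_calls array: parse the JSON
// arguments string, check policy, then act on the verdict.
function handlePayToolCall(argumentsJson: string): { status: string } {
  const { amount, recipient } = JSON.parse(argumentsJson)
  const tx = { amount, currency: 'USDC', recipient }
  const verdict = evaluate(tx)
  if (verdict.decision === 'BLOCK') return { status: 'blocked' }
  // ... handle ESCALATE and execute the payment, as in the tool above
  return { status: 'sent' }
}
```

The shape is identical to the Agents SDK version; only the plumbing around it changes.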