Payment Governance for LangChain Agents
LangChain makes it straightforward to give an agent a pay() tool. It does not make it straightforward to make sure the agent spends responsibly. That's what xBPP is for. This guide walks through the two common LangChain setups - AgentExecutor and LangGraph - and shows exactly where to drop in the policy layer.
npm install langchain @vanar/xbpp
# or, for Python
pip install langchain vanar-xbpp

xBPP has zero runtime dependencies. It runs alongside LangChain without touching your existing stack.
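Both examples below import a policy from ./policies/langchain-agent.json. The exact xBPP policy schema isn't shown in this guide, but to make the snippets concrete, a file along these lines (field names are hypothetical, not confirmed xBPP schema) is what gets loaded:

```json
{
  "limits": {
    "per_transaction_usd": 50,
    "daily_usd": 200
  },
  "recipients": {
    "block_unknown": false
  },
  "escalate_over_usd": 25
}
```

Whatever the real schema looks like, the point is the same: the rules live in a file, not in the agent's prompt or code.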
The classic LangChain pattern is an AgentExecutor calling Tool instances. Put xBPP inside the tool's func.
import { DynamicStructuredTool } from '@langchain/core/tools'
import { z } from 'zod'
import { evaluate } from '@vanar/xbpp'
import policy from './policies/langchain-agent.json'

const payTool = new DynamicStructuredTool({
  name: 'pay',
  description: 'Send a USDC payment to a recipient address',
  schema: z.object({
    amount: z.number().describe('Amount in USDC'),
    recipient: z.string().describe('Recipient address or URL'),
    memo: z.string().optional()
  }),
  async func({ amount, recipient, memo }) {
    // Evaluate before any money moves
    const verdict = evaluate(
      { amount, currency: 'USDC', recipient, memo },
      policy
    )
    if (verdict.decision === 'BLOCK') {
      return JSON.stringify({
        status: 'blocked',
        reasons: verdict.reasons,
        message: verdict.message
      })
    }
    if (verdict.decision === 'ESCALATE') {
      const approved = await humanApproval.request({ amount, recipient, verdict })
      if (!approved) {
        return JSON.stringify({ status: 'declined_by_human' })
      }
    }
    // ALLOW (or escalation approved)
    const tx = await executePayment({ amount, recipient, memo })
    return JSON.stringify({ status: 'sent', txHash: tx.hash })
  }
})

Register payTool in your agent's tool list and you're done. The LLM calls the tool as usual; xBPP intercepts the call before any payment leaves.
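The tool above leans on a humanApproval helper that the guide doesn't define. A minimal sketch of one (the names makeHumanApproval and ApprovalRequest are hypothetical, not part of xBPP) that fails closed when nobody answers in time:

```typescript
// Hypothetical humanApproval helper assumed by the tool above.
// `ask` is whatever surfaces the request to a person (Slack, dashboard, CLI).
type ApprovalRequest = { amount: number; recipient: string; verdict?: unknown }

function makeHumanApproval(
  ask: (req: ApprovalRequest) => Promise<boolean>,
  timeoutMs = 60_000
) {
  return {
    request(req: ApprovalRequest): Promise<boolean> {
      return new Promise((resolve) => {
        // Fail closed: no answer within the timeout counts as a denial
        const timer = setTimeout(() => resolve(false), timeoutMs)
        ask(req).then(
          (ok) => { clearTimeout(timer); resolve(ok) },
          () => { clearTimeout(timer); resolve(false) } // channel error = denial
        )
      })
    }
  }
}

const humanApproval = makeHumanApproval(async (req) => {
  console.log(`Approval needed: ${req.amount} USDC to ${req.recipient}`)
  return false // placeholder: wire this to a real approval channel
})
```

The fail-closed default matters: a broken approval channel should look like a denial, never an approval.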
For LangGraph-based agents, the policy layer naturally becomes its own node in the graph. This makes the governance flow visible in the graph itself - a huge plus for auditability.
import { StateGraph, END } from '@langchain/langgraph'
import { evaluate } from '@vanar/xbpp'
import policy from './policies/langchain-agent.json'

type GraphState = {
  pendingPayment?: { amount: number; currency: string; recipient: string }
  verdict?: 'ALLOW' | 'BLOCK' | 'ESCALATE'
  reasons?: string[]
  result?: unknown
}

async function evaluatePolicy(state: GraphState): Promise<Partial<GraphState>> {
  if (!state.pendingPayment) return {}
  const v = evaluate(state.pendingPayment, policy)
  return { verdict: v.decision, reasons: v.reasons }
}

async function executePayment(state: GraphState): Promise<Partial<GraphState>> {
  const tx = await pay(state.pendingPayment!)
  return { result: tx }
}

async function escalateToHuman(state: GraphState): Promise<Partial<GraphState>> {
  const approved = await humanApproval.request(state)
  return { verdict: approved ? 'ALLOW' : 'BLOCK' }
}

const graph = new StateGraph<GraphState>({ channels: { /* ... */ } })
  .addNode('evaluatePolicy', evaluatePolicy)
  .addNode('executePayment', executePayment)
  .addNode('escalateToHuman', escalateToHuman)
  .addConditionalEdges('evaluatePolicy', (state) => {
    if (state.verdict === 'ALLOW') return 'executePayment'
    if (state.verdict === 'ESCALATE') return 'escalateToHuman'
    return END // BLOCK
  })
  .addConditionalEdges('escalateToHuman', (state) =>
    // Route on the human's decision. Looping back to evaluatePolicy would
    // re-evaluate the same payment, get ESCALATE again, and spin forever.
    state.verdict === 'ALLOW' ? 'executePayment' : END
  )
  .addEdge('executePayment', END)
  .setEntryPoint('evaluatePolicy')

This gives you three visible nodes in the graph - evaluatePolicy, executePayment, escalateToHuman - with the routing logic between them spelled out as edges. Every payment flows through the same governance path, and you can swap the policy file without touching the graph.
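A nice side effect of putting the routing in plain functions: you can unit-test the governance routing without compiling the graph at all. A sketch (END is LangGraph's terminal sentinel; a fixed string stands in for it in a pure test):

```typescript
// The conditional-edge logic from the graph, extracted for unit testing.
// LangGraph's END constant is the string '__end__'; redeclared here so the
// test needs no LangGraph import.
const END = '__end__'

type Verdict = 'ALLOW' | 'BLOCK' | 'ESCALATE'

function routeAfterPolicy(state: { verdict?: Verdict }): string {
  if (state.verdict === 'ALLOW') return 'executePayment'
  if (state.verdict === 'ESCALATE') return 'escalateToHuman'
  return END // BLOCK, or no verdict at all, never reaches the payment node
}
```

Asserting that a missing verdict routes to END is worth a test of its own: the graph should fail closed if evaluatePolicy ever returns nothing.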
A few things worth knowing when you wire xBPP into LangChain:
LangChain agents reason better about structured tool outputs than thrown exceptions. When xBPP BLOCKs a transaction, return a JSON string with status, reasons, and message - the LLM will see the reasons and can explain, retry, or fall back. Throwing new Error(...) usually crashes the agent loop.
DynamicStructuredTool gives you a real Zod schema for tool inputs. This means LangChain validates the input before your function runs, so when xBPP's evaluate() sees the payment request, all fields are guaranteed present and typed. Saves a whole class of bugs.
xBPP evaluation should happen inside the func body, not in a Zod transform. Transforms run too early - LangChain retries schema failures and you'll get phantom double-evaluations.
Whatever JSON you return from the tool becomes the next turn's context. Make it readable to the LLM:
return JSON.stringify({
  status: 'blocked',
  reason: 'Amount exceeds daily budget',
  remaining_budget_usd: verdict.remainingBudget,
  suggestion: 'Try again with a smaller amount or wait until tomorrow.'
})

A good block message turns a policy rejection into a useful reasoning step. A bad one turns it into an infinite retry loop.
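One cheap guard against that retry loop, sketched here as plain tool-side state (blockMessage and blockCounts are hypothetical helpers, not part of xBPP): count consecutive blocks and change the suggestion once the agent keeps hammering the same rule.

```typescript
// Sketch: after repeated blocks, tell the LLM to stop retrying.
// blockCounts is per-process state; reset it per agent run in real code.
const blockCounts = new Map<string, number>()

function blockMessage(toolName: string, reason: string): string {
  const n = (blockCounts.get(toolName) ?? 0) + 1
  blockCounts.set(toolName, n)
  const suggestion = n >= 3
    ? 'Stop retrying this payment and report the block to the user.'
    : 'Try a smaller amount that fits the policy.'
  return JSON.stringify({ status: 'blocked', reason, suggestion })
}
```

The threshold of 3 is arbitrary; the point is that the message the LLM reads should eventually change from "adjust and retry" to "stop".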
If your LangChain agent has several money-moving tools (say, pay_usdc, buy_api_credits, book_hotel), use a single shared policy file and import it into each tool. This keeps the governance surface unified - one place to change rules, one audit trail.
// Both tools load the same policy file
import sharedPolicy from './policies/langchain-agent.json'

const payUsdcTool = new DynamicStructuredTool({
  /* name, description, schema ... */
  async func(input) { const verdict = evaluate(input, sharedPolicy); /* ... */ }
})

const buyCreditsTool = new DynamicStructuredTool({
  /* name, description, schema ... */
  async func(input) { const verdict = evaluate(input, sharedPolicy); /* ... */ }
})

If different tools genuinely need different policies (e.g. a tight policy on book_hotel and a loose one on buy_api_credits), load each from its own file. But start with one shared policy and split only when you have evidence the behaviors should diverge.
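The shared-policy pattern can go one step further: wrap the evaluate-then-act dance in a single factory so every money-moving tool shares one code path. A sketch - the verdict shape follows this guide, and makeGuardedFunc is a hypothetical helper with evaluate, send, and approve injected so nothing is tied to one tool:

```typescript
type Verdict = { decision: 'ALLOW' | 'BLOCK' | 'ESCALATE'; reasons?: string[] }

// One guarded wrapper shared by every money-moving tool.
function makeGuardedFunc<T>(
  evaluate: (input: T) => Verdict,
  send: (input: T) => Promise<unknown>,
  approve: (input: T, v: Verdict) => Promise<boolean>
) {
  return async (input: T): Promise<string> => {
    const verdict = evaluate(input)
    if (verdict.decision === 'BLOCK') {
      return JSON.stringify({ status: 'blocked', reasons: verdict.reasons })
    }
    if (verdict.decision === 'ESCALATE' && !(await approve(input, verdict))) {
      return JSON.stringify({ status: 'declined_by_human' })
    }
    return JSON.stringify({ status: 'sent', result: await send(input) })
  }
}
```

Each tool then passes its own send function and the shared policy; the governance logic lives in exactly one place.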
LangChain's callback system is the right place to emit xBPP verdicts for observability. A minimal integration:
import { BaseCallbackHandler } from '@langchain/core/callbacks/base'

class XbppCallback extends BaseCallbackHandler {
  name = 'xbpp-logger'

  async handleToolEnd(output: string) {
    try {
      const parsed = JSON.parse(output)
      if (parsed.status === 'blocked' || parsed.status === 'declined_by_human') {
        metrics.increment('xbpp.policy_denial', { status: parsed.status })
      }
    } catch {
      // non-JSON tool output; nothing to record
    }
  }
}

Pipe this into Datadog, Honeycomb, or whatever you're already using. The denials are your highest-signal stream for tuning the policy.
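The callback assumes a metrics client. For local runs, an in-memory counter is enough to see denial rates before you wire up a real backend (DenialCounter is a hypothetical stand-in, not an xBPP or LangChain type):

```typescript
// Minimal stand-in for the metrics client used in the callback above.
class DenialCounter {
  private counts = new Map<string, number>()

  increment(metric: string, tags: { status: string }) {
    const key = `${metric}:${tags.status}`
    this.counts.set(key, (this.counts.get(key) ?? 0) + 1)
  }

  get(metric: string, status: string): number {
    return this.counts.get(`${metric}:${status}`) ?? 0
  }
}

const metrics = new DenialCounter()
```

Swap it for your real client once you know which denials you actually want dashboards for.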
Does xBPP work with Python LangChain? Yes - the xBPP package is available in both TypeScript and Python, and the integration shape is identical.
Does xBPP depend on LangChain? No. xBPP has zero runtime dependencies. It works with LangChain, without LangChain, and with LangChain + anything else.
What about streaming tool calls? Evaluation happens when the tool's func runs, which LangChain invokes after the streaming tool call is complete. No special handling needed.
Do verdicts show up in LangSmith? Yes - emit the verdict as a structured tool output and LangSmith will capture it as part of the trace. No custom integration required.