"Verification" is one of those words that sounds clear until a dispute lands on your desk.
When an AI agent books a SaaS subscription your team didn't request, or triggers a retry loop that charges the same merchant six times, the post-mortem always comes back to the same questions: who authorized this, what did the agent say it was doing, and can we prove that what happened matches what was intended?
Those three questions define what agent payment verification actually needs to cover. Engineers tend to mean cryptographic identity. Finance teams mean audit trail. Security teams mean authorization chain. A definition that survives contact with all three is: verification means you can prove who acted, what was authorized, and why the transaction happened.
Each layer in that definition does different work. A gap in any one of them leaves you exposed.
Identity: which agent acted?
The first question a payment audit asks is who did this. For agent payments, that means stable, persistent identifiers for the agent itself, the developer or system that deployed it, and the policy it was running under.
Cryptographic identity is the strongest form. Visa's Trusted Agent Protocol uses this model: agents register public keys before initiating transactions, and merchants verify signatures at checkout. But even a simple, stable agent ID beats an anonymous runtime session. The goal is that any transaction can be traced to a specific agent configuration - not just a cloud function that could have been anything.
Without stable identity, authorization means nothing. You cannot enforce a policy against an agent you cannot identify.
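As a minimal sketch of how signature-based agent identity works: the registry, key names, and payload fields below are illustrative, and HMAC with a shared secret stands in for the asymmetric signatures a real protocol like Visa's Trusted Agent Protocol would use, so the example runs with only the standard library.

```python
import hashlib
import hmac
import json

# Hypothetical key registry populated when an agent registers.
# Real protocols use public keys; a shared secret keeps this runnable.
AGENT_KEYS = {"agent-7f3a": b"registered-secret"}

def sign_request(agent_id: str, payload: dict, key: bytes) -> str:
    """Sign a canonicalized transaction request on the agent side."""
    message = json.dumps({"agent_id": agent_id, **payload}, sort_keys=True)
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_request(agent_id: str, payload: dict, signature: str) -> bool:
    """Verify at checkout that the request came from a registered agent."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unregistered agent: no identity, no authorization
    expected = sign_request(agent_id, payload, key)
    return hmac.compare_digest(expected, signature)

payload = {"merchant": "acme-saas", "amount_cents": 4900}
sig = sign_request("agent-7f3a", payload, AGENT_KEYS["agent-7f3a"])
assert verify_request("agent-7f3a", payload, sig)
assert not verify_request("agent-unknown", payload, sig)
```

The point is not the crypto primitive but the traceability: every verified request resolves to one registered agent identity.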
Attestation: is the agent in an allowed state?
Identity proves who the agent is. Attestation proves it's in a condition where spending is appropriate.
Attestation-before-access means the payment system checks prerequisites before releasing any credentials: KYC completed where required, funding present, policy attached, no active incident flags. This is the layer that prevents a newly deployed agent from silently spending in a new environment before anyone has reviewed its permissions. It forces an explicit decision - "yes, this agent is authorized to spend right now" - rather than defaulting to access.
The distinction matters because compromised agents and credential leaks are real threats. An agent that fails attestation gets no credentials to leak.
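An attestation gate can be sketched as a single fail-closed check run before any credential is released; the state fields below mirror the prerequisites listed above and are illustrative, not a specific product's schema.

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """Hypothetical snapshot of an agent's prerequisites."""
    kyc_complete: bool
    funded: bool
    policy_attached: bool
    incident_flagged: bool

def attest(state: AgentState) -> tuple[bool, list[str]]:
    """Check every prerequisite; credentials are released only on (True, [])."""
    failures = []
    if not state.kyc_complete:
        failures.append("kyc_incomplete")
    if not state.funded:
        failures.append("no_funding")
    if not state.policy_attached:
        failures.append("no_policy")
    if state.incident_flagged:
        failures.append("active_incident")
    return (len(failures) == 0, failures)

# A newly deployed agent with no policy attached fails closed:
ok, why = attest(AgentState(True, True, False, False))
assert not ok and why == ["no_policy"]
```

Because the gate fails closed, a compromised or misconfigured agent never receives credentials in the first place, which is exactly the property the prose above describes.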
Intent: what was the agent trying to do?
Intent is the core verification primitive for spend, and it's the layer most often skipped.
Without a declared intent, you cannot enforce meaningful policy at decision time - your rules engine is reacting to transaction requests rather than evaluating whether the agent's goal is appropriate. You cannot explain the spend later, because all you have is a transaction descriptor and an amount. And you cannot distinguish between fraud and a well-intentioned agent that misinterpreted its objective.
The intent object should be structured and logged before credentials are released. It captures the merchant, expected amount, and purpose. That record becomes the ground truth for everything downstream - reconciliation, disputes, audits.
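A minimal sketch of declaring and logging an intent before credential release; the function name and field names are illustrative, not a fixed schema.

```python
import json
import time
import uuid

def declare_intent(agent_id: str, merchant: str,
                   amount_cents: int, purpose: str) -> dict:
    """Build and persist a structured intent before any credential is released."""
    intent = {
        "intent_id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "merchant": merchant,
        "expected_amount_cents": amount_cents,
        "purpose": purpose,
        "declared_at": time.time(),
    }
    # Stand-in for an append-only log write; this record becomes the
    # ground truth for reconciliation, disputes, and audits.
    print(json.dumps(intent, sort_keys=True))
    return intent

intent = declare_intent("agent-7f3a", "acme-saas", 4900, "monthly plan renewal")
```

Everything downstream keys off `intent_id`: the policy decision references it, the card issuance references it, and the settled transaction is verified against it.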
Hard controls: what is impossible?
Verification cannot be purely after-the-fact. After-the-fact detection is useful, but it means money has already moved. Hard controls define what cannot happen regardless of what the agent requests.
Funding isolation is the most important. A dedicated balance per agent or workflow means the blast radius is the card's balance, not your credit line. Merchant locks prevent transactions at merchants outside a declared allowlist. Velocity caps stop retry loops from turning a single error into dozens of charges.
These constraints work at the network level - enforced at authorization time, not in a monitoring dashboard. Proxy, for example, enforces merchant locks and velocity caps at the card level, so a misbehaving agent hits a wall before any charge processes. Soft monitoring tells you something went wrong. Hard controls prevent it.
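The three hard controls can be sketched as one authorization-time check; the policy fields are illustrative, not Proxy's actual API, and a real implementation would evaluate this inside the card network's authorization flow rather than in application code.

```python
# Hypothetical card-level policy for one agent's dedicated card.
POLICY = {
    "balance_cents": 10_000,            # funding isolation: dedicated balance
    "merchant_allowlist": {"acme-saas"},  # merchant lock
    "max_auths_per_hour": 3,            # velocity cap
}

def authorize(merchant: str, amount_cents: int,
              auths_this_hour: int) -> tuple[bool, str]:
    """Hard controls evaluated at authorization time, before money moves."""
    if merchant not in POLICY["merchant_allowlist"]:
        return False, "merchant_locked"
    if amount_cents > POLICY["balance_cents"]:
        return False, "insufficient_dedicated_balance"
    if auths_this_hour >= POLICY["max_auths_per_hour"]:
        return False, "velocity_cap"
    return True, "approved"

assert authorize("acme-saas", 4900, 0) == (True, "approved")
assert authorize("evil-shop", 4900, 0) == (False, "merchant_locked")
# A retry loop hits the velocity cap instead of producing a sixth charge.
assert authorize("acme-saas", 4900, 3) == (False, "velocity_cap")
```

Note that every branch denies by default: a request that matches no rule never reaches "approved", which is what distinguishes a hard control from a monitoring alert.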
Evidence logs: can you explain the charge?
The final layer is the paper trail that survives disputes, audits, and post-mortems.
A complete evidence record links:
- agentId
- intentId and declared purpose
- policy decision
- cardId (not PAN)
- transactionId
- merchant and descriptor
- amount and currency
- timestamps for each step
If you can reconstruct the sequence of events in minutes, you have verification. If all you have is a transaction ID with no context for why it happened, you have trust. The difference matters most when disputes arrive: chargeback processes were designed for human cardholders who made conscious purchasing decisions. An agent transaction instead requires an evidence chain - a human set the policy, the agent declared intent within it, the system approved it, and the transaction matched the declaration.
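The linked fields in the list above can be sketched as a single record; every key and value here is illustrative, not a fixed schema, and `card_id` is deliberately a reference token rather than the PAN.

```python
# Hypothetical evidence record tying every verification layer together.
evidence = {
    "agent_id": "agent-7f3a",
    "intent_id": "b2c4f1e0-0000-4000-8000-000000000000",
    "purpose": "monthly plan renewal",
    "policy_decision": "approved",
    "card_id": "card_91x",              # reference only, never the PAN
    "transaction_id": "txn_550e",
    "merchant": "ACME SAAS",
    "descriptor": "ACME*SAAS",
    "amount_cents": 4900,
    "currency": "USD",
    "timestamps": {
        "intent_declared": "2025-01-07T12:00:01Z",
        "credentials_released": "2025-01-07T12:00:02Z",
        "authorized": "2025-01-07T12:00:04Z",
        "settled": "2025-01-08T09:15:00Z",
    },
}
```

Reconstructing a dispute is then a single lookup by `transaction_id`, walked backward through `intent_id` to the policy and the human who set it.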
What "verified" looks like in practice
A minimal definition of done for verified agent purchases:
- No spend without a logged intent
- No credentials released without an intent ID and declared purpose
- Card locked outside the active transaction window
- Transaction verified against intent on settlement
- Evidence record persisted and queryable
That is the verification bar for production. Not a single feature - a system where each layer catches the failures the previous one cannot.
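The "verified against intent on settlement" check above reduces to a small comparison; the field names and the optional tolerance parameter are illustrative assumptions, not a specific product's behavior.

```python
def verify_settlement(intent: dict, txn: dict, tolerance_cents: int = 0) -> bool:
    """Check a settled transaction against its declared intent.
    A nonzero tolerance allows for taxes or FX rounding if policy permits."""
    return (
        txn["merchant"] == intent["merchant"]
        and abs(txn["amount_cents"] - intent["expected_amount_cents"])
        <= tolerance_cents
    )

intent = {"merchant": "acme-saas", "expected_amount_cents": 4900}
assert verify_settlement(intent, {"merchant": "acme-saas", "amount_cents": 4900})
# A doubled charge, or the wrong merchant, fails verification and gets flagged.
assert not verify_settlement(intent, {"merchant": "acme-saas", "amount_cents": 9800})
```

Any mismatch becomes a flagged exception for review rather than a silently reconciled line item.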
Related guides
- Attestation-before-access: a pattern for safe agent spending
- AI agent chargebacks: who's liable?
- Why AI agents should never share your payment credentials
Looking for agent spending controls? Start with virtual cards, then choose a plan that fits your workload.