Now, working with initial contributions from Google and Mastercard, the FIDO Alliance, the authentication-focused industry association, said Tuesday that it will launch a pair of working groups to develop industry standards for validating and protecting the authenticity of payments and other transactions carried out by AI agents.
The goal is to produce a security baseline that can be adopted across industries, so that users can authorize agent actions using mechanisms that can't easily be phished or hijacked by a bad actor to feed rogue instructions to an agent. The standards will also include cryptographic tools that digital services can use to ensure that agents carry out an authorized person's instructions accurately and legitimately, as well as privacy-preserving frameworks that give users, merchants, and other service providers the ability to verify the authenticity of agent-initiated transactions. In other words, the effort aims to create both protections against agent hijacking and other rogue behavior, and a transparency and accountability mechanism for redress in the event of a dispute.
“Agents are becoming more popular and they’re moving into mainstream use, but preexisting models aren’t necessarily designed for this type of interaction – they’re not designed to think about actions being performed on behalf of the user,” Andrew Shikiar, CEO of the FIDO Alliance, tells WIRED.
He adds: “If we look back, our work in recent years has been on the huge problem space around passwords, which arose decades ago. The security foundation of what became our connected economy was not fit for purpose. We are now at a similar juncture with agents acting as proxies, agent interactions, and agent commerce, where we have an opportunity to not go down the same path and to establish some basic principles that will allow for more trustworthy interactions.”
Developing technical standards that are broadly applicable across industries and facilitate interoperability is a tedious process that often takes years. But given the rapid progress and adoption of agentic AI, FIDO, Google, and Mastercard representatives all stressed that this process must move more quickly. To this end, Google and Mastercard are each contributing open source tools to the initiative. Google’s Agent Payments Protocol, or AP2, provides a mechanism to cryptographically verify that a user actually intends to perform a particular agent-initiated transaction. Mastercard’s Verifiable Intent framework, designed to work with AP2, is a secure mechanism for users to authorize and control agent actions.
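The article doesn't publish AP2's actual wire format, but the core idea it describes, a user cryptographically signing an "intent mandate" that downstream parties can verify and that a hijacked agent can't alter, can be sketched roughly as follows. Everything here is illustrative: the function names, the mandate fields, and the use of HMAC (the real protocol uses public-key verifiable credentials, not a shared secret) are assumptions, not AP2's design.

```python
import hashlib
import hmac
import json

def sign_mandate(mandate: dict, user_key: bytes) -> str:
    """Sign a canonical encoding of the user's intent mandate.

    In a real deployment the key would be a hardware-bound credential
    (e.g. a passkey), not an application-level secret.
    """
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(user_key, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str, user_key: bytes) -> bool:
    """Let a merchant or payment network confirm the user authorized this exact intent."""
    expected = sign_mandate(mandate, user_key)
    return hmac.compare_digest(expected, signature)

# Hypothetical mandate: buy sneakers, but only at or under $100.
key = b"user-device-secret"
mandate = {"item": "sneakers", "max_price_usd": 100, "agent": "shopping-agent-1"}
sig = sign_mandate(mandate, key)
assert verify_mandate(mandate, sig, key)

# If a compromised agent rewrites the terms, the signature no longer verifies.
tampered = {**mandate, "max_price_usd": 10000}
assert not verify_mandate(tampered, sig, key)
```

The point of the sketch is tamper evidence: any party holding the signed mandate can detect that the terms the agent presents differ from the terms the user approved.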
“We want to provide cryptographic proof that a transaction was approved by the user themselves, but keep it private, so there is selective disclosure built in,” says Staffan Baric, Google’s vice president and general manager of payments. “Different players in the ecosystem – platforms, merchants, payment providers, and networks – see only the information that is relevant to them, while the right action is still performed at the right time. Payments are a complex, multi-party ecosystem.”
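The selective-disclosure idea described in the quote, where each party sees only the transaction fields relevant to it yet can still verify them, can be sketched with a simple hash-commitment scheme. This is a toy illustration of the concept, not Google's actual mechanism; real systems typically use verifiable credentials or zero-knowledge techniques rather than bare hashes.

```python
import hashlib
import json

def commit(fields: dict) -> dict:
    """Commit to every transaction field up front; parties later get only their slice."""
    return {k: hashlib.sha256(json.dumps([k, v]).encode()).hexdigest()
            for k, v in fields.items()}

def disclose(fields: dict, names: list) -> dict:
    """Reveal a subset of fields to one party (e.g. the merchant sees the item, not the card)."""
    return {k: fields[k] for k in names}

def check(disclosed: dict, commitments: dict) -> bool:
    """A party verifies that its disclosed slice matches the original commitments."""
    return all(commitments[k] == hashlib.sha256(json.dumps([k, v]).encode()).hexdigest()
               for k, v in disclosed.items())

# Hypothetical transaction record.
txn = {"item": "sneakers", "price_usd": 95, "card_last4": "4242", "shipping": "home"}
commitments = commit(txn)

# The merchant receives only what it needs, but can still verify it.
merchant_view = disclose(txn, ["item", "price_usd"])
assert check(merchant_view, commitments)
```

A real scheme would also salt each commitment so hidden fields can't be guessed by brute force; the sketch omits that for brevity.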
Baric gives the example of a person who goes to buy a pair of sneakers but finds that they are sold out. The buyer asks the AI agent to purchase the sneakers on its own if they come back in stock at $100 or less. The goal is to provide authentication and transparency around the deal, so that when the sneakers do drop, the consumer ends up with the right shoe at the price they wanted.
Baric points out that establishing these basic protections is key to strengthening trust in agentic AI and encouraging adoption of AI-powered tools. Whether or not individual users want to adopt AI capabilities, the technology is becoming so widespread that baseline guardrails are necessary either way.