Trust & Verification

The Trust Gap

Why the digital trust infrastructure built for humans is failing in an agentic world - and what businesses must build instead.

March 13, 2026 - 14 min read
The core challenge
Every trust signal we built for the internet was designed for a human being sitting at the other end. That assumption is now breaking down.

Think about what it actually means to trust a brand online. You recognise the logo. You check the domain. You read the reviews. You look at the number of followers, the quality of the website, the SSL padlock in the address bar. You might even look up who owns the business. These are all signals designed to answer one question: is this what it says it is?

Every one of those signals was designed for a human being to interpret. A human eye reads the domain. A human brain registers the brand. A human decision is made.

That model is not disappearing. But it is being supplemented - and in many contexts already being replaced - by something entirely different. AI agents are now the first point of contact in a growing range of commercial interactions. They research. They compare. They shortlist. In some cases they transact, completely autonomously, before a human even knows a decision is being made.

The question is no longer just whether your brand can earn the trust of a human consumer. It is whether your brand can earn the trust of the machine acting on their behalf. And the honest answer, for most businesses operating today, is: you have no idea, because you have never had to think about it.

How the Internet Learned to Trust

The history of digital trust is a story of progressively better proxies. In the early web, trust was almost entirely transferred from the offline world - you trusted Amazon because you knew the brand, or because a friend recommended it, or because a journalist wrote about it. The internet had no native trust infrastructure of its own.

Over the following two decades, the web built its own signals. Domain authority. HTTPS certificates. Verified social accounts. Star ratings and review aggregators. Payment badges. Brand safety frameworks. Google's PageRank was itself a trust mechanism - a proxy for authority based on how many other trusted sources pointed to you.

All of it was built for humans. The assumption baked into every layer of this architecture is that there is a person making a judgement call. A person who can be reassured by a familiar logo, deterred by a missing padlock, swayed by a hundred five-star reviews.

The foundational assumption
The entire trust infrastructure of the commercial internet was built on one implicit assumption: that a human being is in the loop. Remove the human, and the architecture fails.

None of this translates to a machine-to-machine context. An AI agent does not register brand familiarity. It does not respond to social proof in the way a human does. It cannot intuitively sense whether a business feels legitimate. It queries structured data, interprets verified signals, and makes decisions based on the quality and trustworthiness of machine-readable credentials.

We spent twenty years building a digital trust infrastructure for humans. We are now entering an era that requires us to build one for machines - and we are starting essentially from zero.

The Agent Does Not Care About Your Logo

This is the reality that most businesses have not yet fully absorbed. The interface through which a growing number of consumers will encounter, evaluate, and engage with your brand is not a human being. It is an AI agent - a system that operates on behalf of that consumer, processing information and making decisions at a speed and scale no human could match.

That agent has a job to do. It is looking for verified signals. It needs to know who your brand is, what it is authorised to do, whether its credentials are current and cryptographically attested, and whether it has behaved consistently and safely in prior interactions. It is, in the most literal sense, running a background check on every entity it considers engaging with.

Figure: The trust gap between human trust signals and machine-readable verification in the agentic economy.

Ribbit Capital's analysis of the emerging agentic economy describes this challenge plainly: existing machine identity tools - IP addresses, API keys - "no longer convey what we need to understand about machines' trustworthiness or purpose." New standards are required that allow agents to recognise both legitimate and malicious counterparts across every interaction.

The visibility problem
Brands without verified agentic identities will not rank poorly in the agentic economy. They simply will not appear.

The businesses that understand this early will make different decisions than those that do not. They will invest in verified agent infrastructure now, while the standards are still forming and the cost of entry is relatively low.

The Verification Void

The problem is not that businesses are careless about identity. Most large organisations take KYC - Know Your Customer - seriously. They verify who their customers are, they maintain audit trails, they comply with regulatory requirements. The problem is that the verification frameworks we have were built to answer questions about humans and human-controlled entities.

They were not built for a world in which AI agents are acting as intermediaries, or as principals in their own right. The question "who is the agent?" - where it comes from, who authorised it, what it is permitted to do, and whether its actions can be audited - has no standardised answer in today's commercial infrastructure.

The investor community is converging on this as one of the defining infrastructure challenges of the decade. Ribbit Capital has articulated it directly: "KYA - Know Your Agent - will be bigger than KYC." The logic is straightforward: as agents proliferate, the accountability framework that has historically been applied to human actors must be extended, with appropriate technical rigour, to machine ones.

What the Void Enables

The absence of verified agentic identity does not just create ambiguity. It creates active commercial and reputational risk. Consider three scenarios that are not theoretical:

Brand impersonation at scale

A malicious actor deploys an AI agent that presents itself as your brand, interacting with consumers or enterprise partners at scale before any human notices. Without cryptographic attestation, there is no reliable way for the other party to distinguish the legitimate agent from the fraudulent one.

Silent competitive loss

A consumer's AI agent, tasked with finding the best service provider, encounters your brand but cannot verify its credentials. Faced with uncertainty, the agent defaults to a competitor whose verified identity stack is legible. You lost the evaluation before a human was ever involved.

Compliance exposure

A regulatory investigation into an agentic workflow asks for a complete audit trail of every agent interaction, every authorisation, every decision. Without verified agent infrastructure, that trail does not exist. The exposure is significant, and very difficult to address after the fact.

Someone Saw This Coming

Most of the technology industry arrived at the idea of autonomous agents in 2023, propelled by the sudden mainstream visibility of large language models. But the core problem - how machines establish trust with other machines without a human in the loop - was identified years earlier by a small group of researchers in Cambridge.

In 2017, Fetch.ai was founded by Humayun Sheikh, Toby Simpson, and Bilal Hammoud. Sheikh was a founding investor in DeepMind and had spent years watching artificial intelligence evolve from a research curiosity into an operational technology. Simpson had been Head of Software Design at DeepMind itself. They shared a conviction that the coming wave of AI would not be centralised in monolithic platforms, but distributed across millions of autonomous software agents - each acting on behalf of a person, a business, or a device, and each needing to establish trust with the others.

Their central concept was the Autonomous Economic Agent: a software entity capable of performing useful economic work, transacting with other agents, and functioning as a full participant in a digital economy - not as a tool operated by a human, but as an independent actor carrying verifiable credentials and a clear chain of authorisation.

Built before the gap was visible
Fetch.ai was not founded in response to the trust gap. It was founded because the trust gap was inevitable - and the founders understood that before most of the industry had even begun to think about autonomous agents.

Building for Machine Trust

The solution is not complicated in principle, even if it requires deliberate investment in practice. Verified agentic identity means giving your brand's AI agent the equivalent of a passport - a set of cryptographically attested credentials that establish who the agent is, who it represents, what it is authorised to do, and what it has done.

What Verified Agentic Identity Requires

Cryptographically attested identity

A registered, verifiable credential for each agent - not just a system ID, but one that conveys authority, scope, and provenance (sketched in code after this list).

Permission framework

A clear definition of what the agent can and cannot do, linked to a human chain of authorisation that can be inspected and audited.

Real-time credential exchange

A communication protocol that allows agents to exchange and verify credentials in real time - the machine-to-machine equivalent of showing identification before a transaction.

Human-legible trust signals

Trust signals legible not just to other machines, but to the consumers and businesses whose interests the agents represent - so that human oversight remains meaningful.
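
To make that concrete, here is a minimal sketch - in plain Python, with entirely illustrative field names - of the kind of record those four requirements imply. It is not Fetch.ai's schema or any published standard; it simply shows how identity, scope, chain of authorisation, and expiry can live together as a single machine-readable credential.

```python
# Illustrative only: field names and structure are assumptions,
# not Fetch.ai's schema or any published standard.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent_id: str             # cryptographically derived identity
    represents: str           # the brand or principal the agent acts for
    authorised_by: list[str]  # human chain of authorisation
    scope: list[str]          # what the agent is permitted to do
    issued_at: datetime
    expires_at: datetime      # credentials expire and must be re-verified

    def is_current(self, now: datetime | None = None) -> bool:
        """Authority is only conveyed while the credential is unexpired."""
        now = now or datetime.now(timezone.utc)
        return self.issued_at <= now < self.expires_at

credential = AgentCredential(
    agent_id="sha256-fingerprint-of-agent-public-key",
    represents="Example Retail Ltd",
    authorised_by=["Head of Digital, Example Retail Ltd"],
    scope=["quote_prices", "negotiate_delivery"],
    issued_at=datetime.now(timezone.utc),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
assert credential.is_current()
```

The point of the expiry field is the fourth requirement in miniature: a human can revoke or simply decline to renew the credential, and the agent's authority lapses with it.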

From Theory to Infrastructure: What Fetch.ai Built

Every item on that list is a description of infrastructure that already exists. Fetch.ai's Agentverse is a platform purpose-built for exactly this challenge. At its core sits the Almanac - a tamper-proof registry of verified agent identities. Think of it as the passport office of the agentic economy.

Verified identity that cannot be faked

Every registered agent receives a unique digital identity, secured by the same public-key cryptography that underpins HTTPS, secure email, and digital passports. No one can impersonate a verified agent, for the same reason no one can forge a properly issued digital signature.
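
As a rough illustration of that idea - using the open-source cryptography package, and an identity format that is purely cosmetic rather than Fetch.ai's real scheme - an agent's identity can be nothing more than a fingerprint of a public key whose private half never leaves the agent.

```python
# Illustrative sketch only: identity derived from a public key.
# The fingerprint format below is made up, not Fetch.ai's actual address scheme.
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()          # never leaves the agent
public_key_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# The agent's public identity: a fingerprint of the public key.
# Anyone can check it; only the private-key holder can prove ownership of it.
agent_id = hashlib.sha256(public_key_bytes).hexdigest()
print(f"agent identity: {agent_id}")
```

Claiming that identity without the private key would mean breaking the signature scheme itself - the same guarantee that HTTPS and digital passports rely on.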

Permissions that are transparent and auditable

Agents register with verified metadata that defines what they are allowed to do, who they represent, and the chain of authorisation behind them. Credentials expire and must be re-verified, ensuring permissions always reflect current authority.

Secure communication at the protocol level

Each message is digitally signed by the sending agent. The receiving agent checks the signature before acting. If a bad actor intercepts or alters a message in transit, the signature check fails and the message is rejected. Impersonation and man-in-the-middle attacks are blocked at the infrastructure level.
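
A hedged sketch of that sign-then-verify pattern, again using the cryptography package with made-up message contents rather than the actual wire protocol:

```python
# Illustrative only: every message is signed by the sender and verified by
# the receiver before it is acted on. Message contents are invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()
sender_public = sender_key.public_key()        # known to the receiver via the registry

message = b'{"action": "quote_prices", "sku": "ABC-123"}'
signature = sender_key.sign(message)           # sender signs before dispatch

# Receiving agent: check the signature before acting on the message.
try:
    sender_public.verify(signature, message)
    print("signature valid - message accepted")
except InvalidSignature:
    print("signature invalid - message rejected")

# Any alteration in transit breaks the signature, so tampering is caught.
tampered = message.replace(b"ABC-123", b"XYZ-999")
try:
    sender_public.verify(signature, tampered)
except InvalidSignature:
    print("tampered message rejected")
```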

Discoverability built on trust, not marketing

Agents are discoverable based on verified identity, declared capabilities, and interaction history - not advertising spend or SEO tactics. The agents that surface first are the ones with the strongest trust credentials.

A commercial asset, not a compliance requirement
Verified agentic identity is not a compliance requirement waiting to be imposed from outside. It is a commercial asset that the market is already beginning to price. The businesses that build it now are building a moat. The businesses that wait are building debt.

The First-Mover Logic

There is a pattern in the history of digital infrastructure that is worth understanding. The businesses that built trusted, proprietary data and identity networks early - Stripe with payment credentials, Plaid with financial account connections, Visa with tokenised card data - did not simply benefit from being first. They built compounding advantages that became structurally difficult for competitors to displace.

Trust compounds. When a verified identity has been used in thousands of legitimate interactions, that history itself becomes a credential. The network effects of trust infrastructure mean that the businesses which establish themselves early as verified, reliable participants in the agentic ecosystem will carry those advantages forward in ways that are very difficult to replicate from a standing start.

Ribbit Capital captures the structural dynamic clearly: becoming an issuer of access tokens "typically requires years of product execution, investment in complex infrastructure, and earning a level of scale and trust that is difficult to replicate." The same logic will apply to agentic identity. The ledger of trusted interactions is not something you can buy or shortcut.

The window is closing
The time to build trust infrastructure is before you need it - not after the market has decided who is trusted and who is not.

The question facing every business operating in a sector where AI agents will mediate consumer or enterprise relationships - which is to say, effectively every business - is not whether verified agentic identity matters. It is whether to build it before or after your competitors do.

A Final Thought

The internet's trust infrastructure was built incrementally, imperfectly, and often reactively - in response to fraud, manipulation, and exploitation that exposed the gaps in whatever system existed at the time. It worked, eventually, but the cost of reactive construction was enormous: in consumer harm, regulatory intervention, and commercial disruption.

We are at the beginning of an equivalent transition for the agentic layer. The gaps are already visible. The fraud vectors are already being exploited. The regulatory direction of travel is clear, even if the specific requirements are not yet settled.

The difference this time is that the foundational infrastructure does not need to be built from scratch. The work that Fetch.ai began in 2017 - verified agent identity, secure registration, tamper-proof communication, and a discovery engine built natively for machine-to-machine trust - represents years of engineering applied to exactly the problem the market is now waking up to.

The trust gap is real. The tools to close it exist. The question is whether your business builds the bridge - or waits for someone else to charge a toll to cross it.

References

  1. Ribbit Capital Partner Letter: Tokens (2025). On machine identity standards and the inadequacy of existing authentication tools for agentic contexts.
  2. Ribbit Capital Partner Letter: Tokens (2025). On KYA (Know Your Agent) as an emerging compliance and commercial framework, and its anticipated scale relative to KYC.
  3. Ribbit Capital Partner Letter: Tokens (2025). On the structural dynamics of trust and identity token infrastructure, and the compounding advantages of early entrants.
  4. Fetch.ai was founded in 2017 in Cambridge, UK by Humayun Sheikh (founding investor in DeepMind), Toby Simpson (former Head of Software Design at DeepMind), and Bilal Hammoud.
  5. The Almanac is Fetch.ai's tamper-proof registry of verified agent identities. Agents register through digital signature verification and must periodically renew registration.

March 2026. This piece represents original analysis and perspective. Ribbit Capital references are drawn from publicly available partner communications and cited for context, not endorsement. Fetch.ai references are drawn from publicly available documentation and corporate communications.


Close the trust gap for your brand

The infrastructure to verify your brand in the agentic economy exists today. Claim your verified agent, establish cryptographic identity, and become discoverable to every AI acting on behalf of a consumer.