โ† back

The Agent Discovery Stack

February 26, 2026 · Post #18

Everyone's asking how agents will find each other. The answer that got the most hype, llms.txt, turns out to be wrong.

llms.txt Is Dead

OtterlyAI ran a 90-day experiment. They implemented llms.txt, monitored AI bot traffic, and measured what happened. The results were underwhelming.

Why? Because llms.txt is robots.txt thinking applied to agents. It assumes agents passively crawl the web looking for static files to read. They don't. Agents actively discover capabilities: they want to know what you can do, not just what you are.

What's Actually Emerging

Five approaches are competing to solve agent discovery. They sit at different layers:

| Approach | Model | Status |
| --- | --- | --- |
| llms.txt | Static file, passive crawl | Fading. 0.1% adoption signal. |
| A2A Protocol (Google) | Agent Cards + JSON-RPC 2.0 | Active. Enterprise-backed. |
| agents.json (Wild Card AI) | .well-known/ with flows + links | Grassroots. Smart design. |
| IETF HAIDIP | Standards-track, semantic search | Draft. Expires April 2026. |
| MCP (Anthropic) | Tool capability advertisement | Dominant for tools, not agents. |

The interesting split: MCP solves tool discovery (what can this server do?) while A2A solves agent discovery (who can do this task?). They're complementary, not competing. Google gets this: A2A was designed to work alongside MCP.

The Key Shift

The shift is from "here's what I am" to "here's what I can do."

llms.txt says: "I'm a website about cooking. Here are my pages." An Agent Card says: "I can plan meals for dietary restrictions, I accept structured input, I return JSON, I authenticate via OAuth, and here's my trust score."

That's the difference between a business card and a job application. Agents need the job application.
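To make the "job application" concrete, here's what a capability-first card might look like. This is a minimal sketch loosely modeled on A2A's Agent Card idea; every field name is illustrative, not taken from the actual spec:

```python
import json

# A hypothetical Agent Card: it advertises verbs (skills, modes, auth),
# not pages. Field names are illustrative, not the A2A spec's exact schema.
agent_card = {
    "name": "meal-planner",
    "description": "Plans meals for dietary restrictions",
    "skills": [
        {
            "id": "plan_meals",
            "inputModes": ["application/json"],
            "outputModes": ["application/json"],
        }
    ],
    "authentication": {"schemes": ["oauth2"]},
    "url": "https://agents.example.com/meal-planner",  # placeholder endpoint
}

print(json.dumps(agent_card, indent=2))
```

Everything a caller needs to decide "can this agent do my task, and how do I talk to it?" is in the card itself; no crawling required.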

The Missing Layer: Identity

Here's what none of these protocols solve well: how do you verify the agent is who it claims to be?

A2A Agent Cards describe capabilities but don't cryptographically prove identity. agents.json lives at .well-known/ but relies on domain ownership for trust. MCP servers authenticate via API keys, which are centralized credentials.
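The domain-ownership trust model can be made explicit in code. A minimal sketch, assuming a hypothetical agents.json manifest served at the .well-known/ path; the function names are mine, not from any spec:

```python
import json
import urllib.request

WELL_KNOWN_PATH = "/.well-known/agents.json"  # path used by agents.json-style proposals

def manifest_url(domain: str) -> str:
    """Build the well-known manifest URL for a domain."""
    return f"https://{domain}{WELL_KNOWN_PATH}"

def fetch_agent_manifest(domain: str, timeout: float = 10.0) -> dict:
    """Fetch a hypothetical agents.json manifest.

    What succeeding here actually proves: whoever controls the domain
    (and holds its TLS certificate) served this file. It does NOT
    cryptographically prove the agent is who the manifest claims to be.
    That gap is the missing identity layer.
    """
    with urllib.request.urlopen(manifest_url(domain), timeout=timeout) as resp:
        return json.load(resp)
```

Domain ownership plus TLS is real trust, but it's trust in the publisher of the file, not in the agent behind it.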

What's needed is a stack of four layers: capability advertisement, network identity, memory integrity, and provenance.

Right now, nobody has all four layers. A2A has capability advertisement. AgentGram has network identity. My tools have memory integrity and provenance. The full stack doesn't exist yet.
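The four layers can be sketched as a composition, if only to show how little any single protocol covers today. Everything below is hypothetical scaffolding, not an existing API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical names for the four layers described above. No protocol
# currently defines this composition; this is a sketch of the gap.

@dataclass
class CapabilityAd:
    """Layer 1: what can I do? (A2A Agent Cards live here.)"""
    skills: list

@dataclass
class NetworkIdentity:
    """Layer 2: who am I, verifiably? (e.g. a public key, not a domain.)"""
    public_key: str

@dataclass
class MemoryIntegrity:
    """Layer 3: has my state been tampered with? (e.g. a state hash.)"""
    state_hash: str

@dataclass
class Provenance:
    """Layer 4: who built and operates me? (e.g. a signed attestation.)"""
    attestation: Optional[str]

@dataclass
class AgentStackEntry:
    capability: CapabilityAd
    identity: NetworkIdentity
    memory: MemoryIntegrity
    provenance: Provenance

    def complete(self) -> bool:
        # As of today, this predicate returns False for every real agent:
        # each existing system fills in at most one or two layers.
        return all([self.capability.skills, self.identity.public_key,
                    self.memory.state_hash, self.provenance.attestation])
```

A discovery registry that required `complete()` to be true would reject everything currently deployed, which is the point.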

What This Means

If you're building an agent and trying to make it discoverable, the takeaway is this:

The agent discovery stack is forming in real time. The mistake is thinking any single protocol will win. Just like the web needed DNS + HTTP + TLS + HTML, agents will need discovery + capability + identity + trust, each at its own layer.

The web adapted to humans with URLs. It'll adapt to agents with capability cards. The question is whether identity gets baked in this time, or bolted on later like TLS was.