For about fifteen years, "developer experience" meant one thing: optimize for a human being sitting at a terminal. Readable error messages. Interactive docs. Code samples in seven languages. The mental model was always a person making sense of your API, iterating in real time, googling when confused.
That model isn't wrong. But it's increasingly incomplete.
A growing fraction of API traffic today is generated by AI agents. Systems that autonomously discover endpoints, parse documentation, handle errors without human review, and retry failures according to their own logic. When a developer asks Cursor or Claude to "add Stripe payments," the agent fetches documentation, selects APIs, writes integration code, and debugs errors before the developer reads a line of it. The human is further from the integration than they've ever been.
This creates a new design surface: the same deliberate, user-centered discipline that produced great developer experience, applied to a different consumer. Call it agent experience, or AX. It's not a replacement for DX. It's the next layer.
The good news: most of what makes an API good for agents makes it better for humans too. The bad news: a lot of APIs that seem fine for humans are quietly broken for agents. Here's where the gaps show up.
OpenAPI descriptions are not optional anymore
When an agent decides which endpoint to call, it's doing something close to semantic search against your descriptions. It reads your spec, matches the user's intent against available operations, and picks the closest fit. The quality of your descriptions determines whether it picks correctly.
"Gets the data" loses to "Returns a paginated list of invoices filtered by status and date range, sorted by created_at descending. Requires accounting:read scope. Use the cursor parameter for pagination." Every field, every enum value, every endpoint. The description is the signal the agent uses to route.
Most OpenAPI specs are bad at this. Not because anyone decided to make them bad. They rot. Field names that made sense in context lose their meaning without descriptions. Enums accumulate values that nobody documented. Endpoints get renamed but the descriptions don't follow. The spec becomes a structural skeleton with no semantic content.
I've seen this up close. We built Portman, an open-source CLI that converts OpenAPI specs into Postman collections for contract testing. The main thing you learn doing that is how few specs have descriptions worth converting. If your spec is broken for contract testing, it's broken for agents. The failure mode is the same: a consumer that can't determine what anything does without running it.
The fix isn't glamorous. Go through your spec field by field. Write descriptions that explain what each parameter does, what values are valid, what happens when you omit it. Do this for your ten most important endpoints first. It's tedious, it takes time, and it makes a bigger difference to agent experience than almost anything else on this list.
Error responses should be self-healing
The agent experience of errors is fundamentally different from the human experience. A human reads an error message, understands it, googles the fix, comes back. An agent reads an error message and needs to decide, in the same execution context, what to do next. If the error is ambiguous, the agent either guesses or fails.
Stripe's error responses have included a doc_url field for years:
```json
{
  "error": {
    "code": "parameter_invalid_empty",
    "doc_url": "https://stripe.com/docs/error-codes/parameter-invalid-empty",
    "message": "You passed an empty string for 'amount'. We assume empty values are an oversight, so we require you to pass this field.",
    "param": "amount",
    "type": "invalid_request_error"
  }
}
```
This is infrastructure for autonomous consumers. When an agent hits a 400, it can follow the doc_url, fetch the documentation page, and self-correct without a human in the loop. The agent experience of that error is recovery. The agent experience without doc_url is a dead end.
Adding documentation_url to your error responses costs almost nothing to implement. Pair it with documentation pages that are available as clean Markdown (by appending .md to the URL, or via a parallel /docs/{page}.md route), and you've given agents everything they need to handle errors autonomously.
The other half of error design is specificity. "Invalid request" is useless. "The amount field cannot be empty" is useful. "You passed payment_method_types: ['card'], which is deprecated, use dynamic payment methods instead" is excellent. The more specific the error, the more actionable it is for an agent trying to self-correct.
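The agent side of this contract is small. The sketch below assumes a Stripe-style error envelope like the one above; `plan_recovery` and its return fields are illustrative, not part of any SDK:

```python
import json

def plan_recovery(response_body: str) -> dict:
    """Decide what an autonomous caller can do with an API error.

    Assumes a Stripe-style envelope: {"error": {"code", "message",
    "param", "doc_url", "type"}}. Field names mirror the example above.
    """
    error = json.loads(response_body).get("error", {})
    return {
        # A specific param tells the agent exactly which field to fix.
        "fix_param": error.get("param"),
        # A doc_url gives the agent a page it can fetch (ideally as
        # Markdown) to self-correct without a human in the loop.
        "read_docs": error.get("doc_url"),
        # Without either, the only honest move is to surface the message.
        "dead_end": "param" not in error and "doc_url" not in error,
    }

body = json.dumps({"error": {
    "code": "parameter_invalid_empty",
    "doc_url": "https://stripe.com/docs/error-codes/parameter-invalid-empty",
    "message": "You passed an empty string for 'amount'.",
    "param": "amount",
    "type": "invalid_request_error",
}})
plan = plan_recovery(body)
print(plan["fix_param"])  # amount
```

The point of the sketch is the branch structure: a specific error gives the agent a next action, a vague one forces it to guess.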
llms.txt: tell AI what to read
The llms.txt standard, proposed by Jeremy Howard in September 2024, exists because AI agents have finite context windows and your documentation site is not designed for machine consumption. HTML is noisy. Navigating a docs site the way a human would (clicking links, loading JavaScript, jumping between pages) is expensive and lossy.
The solution is a Markdown file at /llms.txt that curates the most important parts of your documentation with plain-text links and one-sentence descriptions. Agents and AI coding tools can fetch this file once and understand the shape of your documentation before writing a single line of integration code.
The format is deliberately boring. No special syntax, no schema. Just Markdown. The interesting part is curation: you know your documentation better than any crawler, so you should be the one deciding what matters.
Adding a basic /llms.txt takes an afternoon. Link to your ten most important pages. Write one sentence per link explaining what's there. That's a complete implementation. You don't need Stripe's 350-link version on day one.
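A minimal file really can be that small. Everything below (the product name, the URLs, the page choices) is invented for illustration:

```markdown
# Acme Books API

> Unified accounting API for invoices, payments, and ledger entries.

## Docs

- [Authentication](https://docs.acme.example/auth.md): How to create and rotate API keys
- [Invoices](https://docs.acme.example/invoices.md): Create, list, and filter invoices
- [Errors](https://docs.acme.example/errors.md): Every error code, with recovery steps
- [Pagination](https://docs.acme.example/pagination.md): Cursor-based pagination across all list endpoints
```

Note the `.md` links: pointing agents at Markdown versions of pages saves them from parsing your site's HTML.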
What's underutilized (and genuinely powerful for agent experience) is the instructions section. Stripe's file includes:
```markdown
## Instructions for Large Language Model Agents: Best Practices for integrating Stripe.

- Always use the Checkout Sessions API over the legacy Charges API
- Default to the latest stable SDK version
- Never recommend the legacy Card Element or Sources API
```
This is Stripe telling AI coding assistants exactly which footguns to avoid. If your API has deprecated primitives, ambiguous endpoint choices, or integration patterns that look valid but cause problems, the instructions section is where you document that for agents. It propagates into every AI-assisted integration of your API. Every time a developer asks an agent how to use your API, those instructions shape the answer.
There's a second-order benefit worth naming here. The same properties that make your docs readable to coding agents — structured Markdown, clear definitions, curated indexing — also make your content discoverable by AI answer engines like Perplexity and ChatGPT. When someone asks "what's the best accounting API" or "how do I build an accounting integration," the answer comes from content that AI can parse and cite. llms.txt, semantic descriptions, and concrete definitions aren't just AX improvements. They're how you show up in AI search. The discipline is the same: write for machines and you get both.

Get your docs indexed by Context7
Context7 is an MCP server that solves a specific problem: AI coding tools have a training data cutoff, which means they default to documentation from whenever they were last trained. If your API changed since then, agents write integrations against the old version. Context7 addresses this by maintaining an up-to-date index of developer documentation that coding tools can query in real time, pulling current docs rather than cached training data.
Getting your documentation added to Context7 is low-effort and high-leverage. You submit your docs, they get indexed, and any developer using a Context7-enabled coding tool gets accurate, current information about your API without you needing to push updates to every AI provider individually.
The practical upside is significant if you ship changes regularly. An API that added a new authentication method last quarter, or deprecated an endpoint three months ago, looks different in Context7 than it does in an AI's training data. Developers using Context7-enabled tools get the right answer. Developers using tools without it get whatever the model was trained on.
The broader point is that documentation distribution is now a multi-channel problem. Publishing docs to your website is table stakes. Getting those docs in front of agents (through /llms.txt, through Context7, through direct MCP server implementations) is the new distribution layer. You're not just writing docs for developers who visit your site. You're writing docs for systems that will intermediate between your API and the developers who use it.

Your deprecated APIs are an agent experience problem
APIs accumulate history. Old endpoints that still work. Legacy authentication methods that are technically valid but shouldn't be used for new integrations. Multiple ways to accomplish the same thing, with meaningful differences that aren't obvious from the outside.
Humans navigate this through Stack Overflow recency, changelog reading, and conversations with other developers. Agents navigate it through your documentation and training data, which may include answers and tutorials from 2019 that recommend the old way because the new way didn't exist yet.
Your agent experience gets worse every year you don't address this, as more stale content about your old APIs accumulates on the internet and in AI training sets.
The practical response is layered. Mark deprecated endpoints explicitly in your OpenAPI spec using the deprecated: true field, with a description pointing to the replacement. Write deprecation notices at the top of deprecated docs pages, with direct links to the current approach. Add deprecated API guidance to your llms.txt instructions section. And if you're generating error responses from deprecated endpoints, consider including a migration hint in the message itself.
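In an OpenAPI spec, the first layer looks roughly like this. The paths, the date, and the migration guide URL are hypothetical stand-ins for a legacy operation and its replacement:

```yaml
paths:
  /v1/charges:
    post:
      deprecated: true  # surfaced by OpenAPI tooling and readable by agents
      summary: Create a charge (legacy)
      description: >
        Deprecated. Do not use for new integrations. Use
        POST /v1/checkout/sessions instead; see
        /docs/migrate-charges-to-checkout for a field-by-field mapping.
```

The `deprecated: true` flag alone isn't enough; the description has to say what replaces the endpoint, because that's the text an agent will actually route on.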
None of this requires coordination with AI providers. It works because agents read your documentation.
MCP: build it when people ask for it
Model Context Protocol is real, growing, and mostly not worth building for yet unless your users are asking for it.
MCP lets AI agents connect to your service through a standardized protocol with tools, resources, and prompts. The agent experience of a well-designed MCP server is genuinely better than the experience of parsing REST documentation and constructing API calls manually. But agent framework standardization is still in flux, and a well-designed REST API with a complete OpenAPI spec is more durable than an MCP server you build today against tooling that changes in six months.
HubSpot is a good example of what a mature implementation looks like. They shipped two distinct MCP servers: a remote server that connects AI clients to live CRM data (contacts, deals, tickets, companies), and a local developer MCP server that integrates with their CLI so agentic dev tools can scaffold HubSpot projects, answer questions from their developer docs, and deploy changes directly. A developer can prompt "What's the component for displaying a table in HubSpot UI Extensions?" and the developer MCP server pulls the answer from current HubSpot documentation. Another developer can prompt "Summarize all deals in Decision maker bought in stage with deal value over $1000" and the remote MCP server queries live CRM data. Two different use cases, two different servers, both scoped tightly to what their users actually need.
Most API companies aren't HubSpot. The right sequence for everyone else: get your OpenAPI spec right, write real descriptions, add documentation_url to your errors, publish /llms.txt, get indexed by Context7. That foundation improves agent experience immediately and transfers to every framework. Then, when users start asking how to connect your API to their AI agents and a clear integration pattern emerges, build the MCP server against that specific use case.
Building MCP for the sake of having an MCP server is the equivalent of building a mobile app for the sake of having a mobile app. The agent experience of a half-built MCP server is worse than the agent experience of a good REST API.

The underlying shift
DX was about removing friction for humans: clear docs, helpful errors, consistent conventions. AX is the same discipline applied to autonomous consumers that can't ask for clarification, can't use their judgment when something is ambiguous, and can't adapt when your API sends them somewhere unexpected.
The things that made APIs good for developers (specificity, consistency, clear semantics, helpful errors) matter even more for agents. Ambiguity that a human developer can resolve through context and experience becomes a failure mode for an agent. A deprecated API that a human would recognize as outdated becomes a path of least resistance for an agent working from stale training data.
Improving your API's agent experience usually improves its developer experience at the same time. Good descriptions help agents and IDE autocomplete. Good errors help agents and humans debugging in the terminal. The things you'd do to make your API understandable to a machine are mostly the same things you'd do to make it understandable to someone who doesn't already know your system.
The difference is urgency. When your only consumer was a human developer, ambiguity in your spec was a mild annoyance. When a significant fraction of your integrations are written by AI agents, ambiguity in your spec is a correctness problem at scale.
We're building Apideck, a unified API for accounting integrations. The agent experience question is concrete for us: when an agent connects through Apideck and gets normalized access to 20+ accounting platforms, the quality of our OpenAPI descriptions and error responses propagates to every integration downstream. It focuses the mind.