The numbers on the comparison matrix come from a structured audit. Here is the rubric, the scoring scale, and how to read the results.
Every unified API platform is scored on the same 17 dimensions across five groups: agent / AI ergonomics, developer experience, documentation quality, buyer access, and coverage. Each dimension gets a 1-to-5 score with a one-line justification linked to the source page in the vendor's docs.
**llms.txt or AI-friendly index.** Does `<docs>/llms.txt` resolve? Is there an llms-full.txt? Is it complete?
1 = absent. 3 = present but partial. 5 = complete llms.txt + llms-full.txt covering every endpoint.
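The resolvable half of this check is automatable; a minimal sketch, assuming a hypothetical docs base URL (judging completeness against the endpoint list still takes a human pass):

```python
import requests

def check_llms_txt(docs_base: str) -> dict[str, bool]:
    """Probe a docs site for llms.txt and llms-full.txt."""
    results = {}
    for name in ("llms.txt", "llms-full.txt"):
        url = f"{docs_base.rstrip('/')}/{name}"
        try:
            resp = requests.get(url, timeout=10)
            # A 200 with a non-empty body counts as "resolves".
            results[name] = resp.ok and bool(resp.text.strip())
        except requests.RequestException:
            results[name] = False
    return results

# docs.example.com is a placeholder, not a real vendor's docs host.
print(check_llms_txt("https://docs.example.com"))
```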
**MCP server availability.** Hosted, self-hosted, or none?
1 = none. 2 = repo without releases. 3 = self-hosted Docker. 5 = hosted endpoint.
**Tool / action discovery.** Can an agent enumerate every action from one URL?
1 = scrape required. 5 = single endpoint enumerates all tools.
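Where a vendor exposes a hosted MCP endpoint, this check reduces to connecting and calling tools/list. A sketch using the official MCP Python SDK; the streamable-HTTP transport and the endpoint URL are assumptions that vary by vendor:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def enumerate_tools(url: str) -> list[str]:
    """Connect to an MCP server and list every advertised tool by name."""
    async with streamablehttp_client(url) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            return [tool.name for tool in result.tools]

# mcp.example.com is a placeholder endpoint.
print(asyncio.run(enumerate_tools("https://mcp.example.com/mcp")))
```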
**Time-to-first-call.** How many clicks from the docs landing page to a working curl?
1 = sales call required. 3 = signup + multi-step setup. 5 = curl visible on landing.
**Sandbox accessibility.** Is there a self-serve sandbox, without sales contact?
1 = none. 2 = sales-gated. 4 = free trial. 5 = self-serve sandbox.
**OpenAPI spec quality.** Is there a public, direct URL to a complete OpenAPI spec?
1 = none. 3 = via Postman only. 5 = public URL, GitHub-published, drives the SDKs.
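A crude but automatable proxy for completeness is counting paths and operations in the published spec. A sketch, assuming a JSON-format spec at a placeholder URL:

```python
import requests

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def audit_openapi(spec_url: str) -> None:
    """Print a rough size check for a published OpenAPI spec."""
    spec = requests.get(spec_url, timeout=10).json()
    paths = spec.get("paths", {})
    # Count only real operations, skipping keys like "parameters" or "$ref".
    ops = sum(1 for item in paths.values() for method in item if method in HTTP_METHODS)
    version = spec.get("openapi") or spec.get("swagger") or "?"
    print(f"{spec_url}: {len(paths)} paths, {ops} operations, spec version {version}")

audit_openapi("https://example.com/openapi.json")  # placeholder URL
```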
**SDK breadth and quality.** Official SDKs, language coverage, codegen vs. hand-written.
1 = no SDK. 3 = 1–2 languages. 5 = 5+ languages, official, codegen-validated.
**Interactive try-it.** Is there an in-browser API explorer?
1 = none. 3 = Postman link only. 5 = embedded interactive explorer.
**Webhook setup.** HMAC docs, retry and replay behavior, and virtual webhooks for upstream APIs that lack native support.
1 = polling only. 3 = native webhooks. 5 = native + virtual + replay + dead-letter.
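For context on the HMAC criterion: a score of 3+ implies docs that let a consumer write roughly the following verification. A sketch; the header name and hex digest are assumptions, since both vary by vendor:

```python
import hashlib
import hmac

def verify_webhook(secret: str, payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare in constant time."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker.
    return hmac.compare_digest(expected, signature_header)
```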
**Error response quality.** Structured JSON, request IDs, custom HTTP codes for upstream errors.
1 = generic 4xx. 3 = structured JSON. 5 = structured + custom codes + metadata.
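For illustration, the shape that earns a 5 on this dimension; field names here are invented for the example, not any vendor's actual schema:

```python
# A top-scoring error body: structured JSON, a request ID, and metadata
# that surfaces what the upstream API actually said.
error_body = {
    "error": "connector_error",
    "message": "Upstream connector rejected the request",
    "request_id": "req_1f3c9a",          # quotable in a support ticket
    "detail": {
        "upstream_status": 401,          # what the underlying API returned
        "upstream_message": "Invalid token",
    },
    "doc_url": "https://docs.example.com/errors#connector_error",
}
```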
**Search quality.** Does docs search return useful results for "create an invoice"?
1 = broken. 3 = adequate. 5 = excellent semantic search.
**Code examples per endpoint.** Multi-language, idiomatic, and present everywhere?
1 = none. 3 = curl only. 5 = multi-language, idiomatic, per endpoint.
**Changelog visibility.** Public, dated, breaking-change tagged?
1 = absent. 3 = present. 5 = dated + categorized + breaking-change tagged.
**Migration / version handling.** Explicit API versions and migration guides for breaking changes.
1 = none. 3 = versions exist. 5 = URL/header versioning + migration guides.
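The two pinning styles the top score rewards, sketched with placeholder paths and header names:

```python
import requests

# URL versioning: the major version is pinned in the path.
r1 = requests.get("https://api.example.com/v2/invoices")

# Header versioning: the same route, pinned to a dated version per request.
r2 = requests.get(
    "https://api.example.com/invoices",
    headers={"X-API-Version": "2024-06-01"},  # header name is illustrative
)
```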
**Free production access.** Can a developer hit production endpoints without a sales call? A free production tier shortens time-to-first-call from weeks to minutes.
1 = no free option (quote-only). 2 = limited sandbox or short trial. 3 = trial with production access. 5 = perpetually-free production tier with real connections.
**Connector breadth.** How many integrations are pre-built and supported? Counted from the vendor’s own integrations directory, not marketing claims.
1 = under 30. 2 = 30–75. 3 = 75–150. 4 = 150–250. 5 = 250+ pre-built, normalized connectors.
**Connector depth.** For each connector, how complete is the unified data model? CRUD parity, resource breadth, and write coverage matter more than raw count.
1 = passthrough only, no normalized models. 2 = thin normalization with frequent gaps or read-mostly. 3 = moderate CRUD across most connectors. 4 = full CRUD on most connectors with deep resource coverage. 5 = full CRUD with write parity and deep resources across every connector.
Every cell below is one of the 17 dimensions scored 1-5 against the linked vendor's public docs. Row labels are written as group: dimension.
| Dimension | Apideck | StackOne | Unified.to | Merge | Kombo | Paragon | Maesn | Codat | Chift | Rutter | Membrane |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Agent: llms.txt / AI index | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 1 | 3 | 1 | 5 |
| Agent: MCP server | 5 | 5 | 5 | 4 | 1 | 5 | 4 | 1 | 3 | 1 | 5 |
| Agent: Tool discovery | 5 | 5 | 4 | 4 | 3 | 5 | 4 | 2 | 4 | 2 | 3 |
| Developer experience: Time-to-first-call | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 4 |
| Developer experience: Sandbox | 4 | 4 | 4 | 5 | 4 | 4 | 4 | 4 | 3 | 4 | 4 |
| Developer experience: OpenAPI quality | 5 | 5 | 5 | 4 | 4 | 3 | 4 | 5 | 4 | 4 | 2 |
| Developer experience: SDK breadth | 5 | 4 | 5 | 5 | 3 | 3 | 2 | 4 | 2 | 2 | 3 |
| Developer experience: Interactive try-it | 4 | 3 | 2 | 2 | 2 | 3 | 2 | 3 | 3 | 3 | 1 |
| Developer experience: Webhook setup | 3 | 4 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 |
| Developer experience: Error responses | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 5 | 1 |
| Documentation quality: Docs search | 4 | 3 | 3 | 3 | 3 | 3 | 2 | 4 | 3 | 3 | 2 |
| Documentation quality: Code examples | 4 | 2 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Documentation quality: Changelog | 4 | 4 | 5 | 4 | 5 | 2 | 2 | 3 | 3 | 3 | 1 |
| Documentation quality: Versioning | 3 | 2 | 3 | 3 | 4 | 2 | 3 | 3 | 2 | 4 | 1 |
| Buyer access: Free production access | 3 | 5 | 2 | 5 | 3 | 1 | 2 | 1 | 1 | 2 | 5 |
| Coverage: Connector breadth | 4 | 5 | 5 | 4 | 5 | 3 | 2 | 2 | 3 | 2 | 3 |
| Coverage: Connector depth | 4 | 4 | 2 | 4 | 4 | 3 | 5 | 5 | 4 | 4 | 2 |
| Average | 4.0 | 3.9 | 3.8 | 3.7 | 3.4 | 3.2 | 3.1 | 2.9 | 2.9 | 2.9 | 2.8 |
Each platform receives an unweighted average of its 17 dimension scores, rounded to one decimal. The average is unweighted because every dimension matters to a different buyer profile: weights would imply a single ideal buyer, and the matrix is meant to support multiple.
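To make the arithmetic concrete, here is one column recomputed, using Apideck's 17 scores read top to bottom from the table:

```python
apideck = [5, 5, 5, 3, 4, 5, 5, 4, 3, 3, 4, 4, 4, 3, 3, 4, 4]
average = round(sum(apideck) / len(apideck), 1)  # 68 / 17
print(average)  # 4.0, matching the Average row
```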
Vendors ship new docs portals constantly. The published date on the matrix is when we ran the most recent audit. Anything older than 90 days should be re-audited before making a procurement decision.
Found something we missed or got wrong? Email hello@apideck.com with the source URL and the dimension. We will rerun the audit on that platform and update the matrix.