When a customer reports that their invoices stopped syncing with QuickBooks three days ago, the next hour goes one of two ways. If you have good logs, you look up the failed request at 2:47 AM on Tuesday and have a root cause in three minutes. Without them, you're adding console.log statements to a production codebase and asking the customer to re-authenticate while hoping the issue reappears.
That gap is what API logging actually solves.
API logs matter whether you're building an API or consuming one. If you're building your own service, logs tell you which endpoints are slow and what the traffic looked like before an outage. If your product integrates with third-party APIs, like accounting platforms and CRMs, the picture is more complex: you're not just logging your own traffic, you're capturing every request your code sends to downstream services and every response that comes back. Both cases matter. The second one is harder, and most guides don't cover it.
What is an API log?
An API log is a structured record of every request sent to and response received from an API. When your application calls an external service, the request travels over HTTP: it carries a method (GET, POST, etc.) and a target URL, along with request headers and usually a body. The API sends back a status code and a response body. An API log captures all of this for a single transaction, timestamped, so you can reconstruct exactly what happened and when.
Here's what a minimal API log entry looks like in practice:
```json
{
  "timestamp": "2025-04-15T09:42:17Z",
  "method": "POST",
  "url": "https://api.xero.com/api.xro/2.0/Invoices",
  "request": {
    "headers": {
      "Authorization": "Bearer [redacted]",
      "Content-Type": "application/json"
    },
    "body": { "Type": "ACCREC", "Contact": { "ContactID": "abc123" } }
  },
  "response": {
    "status": 422,
    "body": {
      "Elements": [{
        "ValidationErrors": [{ "Message": "Account code is required" }]
      }]
    }
  },
  "latency_ms": 312,
  "consumer_id": "tenant_88f3k",
  "service": "xero"
}
```
The meaning of that 422 from Xero is invisible if you're only storing status codes. With the response body logged, you know exactly which field is missing and which customer hit the problem.
How API logs are generated
API logs are created at the point where an HTTP request is made or received. For APIs you own, this typically happens through middleware: a function that wraps your request handler, capturing the request on the way in and the response on the way out, then writing a structured record to your log store. Express middleware and Django's request logging both work this way, as do serverless environments that intercept the invocation context.
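As a sketch of that pattern, here's a minimal Express-style logging middleware in TypeScript. The structural types and the in-memory logStore are stand-ins for Express's real objects and a real log sink:

```typescript
// Minimal structural stand-ins for Express's Request/Response types,
// so the sketch stays self-contained.
type Req = { method: string; url: string };
type Res = { statusCode: number; on(event: "finish", cb: () => void): void };

type LogEntry = {
  timestamp: string;
  method: string;
  url: string;
  status: number;
  latency_ms: number;
};

// Stand-in for a real log sink (Datadog, ELK, etc.).
const logStore: LogEntry[] = [];

// Logging middleware: hook the response's "finish" event so the entry
// is written only once the final status code and latency are known.
function apiLogger(req: Req, res: Res, next: () => void): void {
  const start = Date.now();
  res.on("finish", () => {
    logStore.push({
      timestamp: new Date().toISOString(),
      method: req.method,
      url: req.url,
      status: res.statusCode,
      latency_ms: Date.now() - start,
    });
  });
  next();
}
```

In real Express code this would be registered with app.use(apiLogger) ahead of your route handlers, so every request passes through it.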
For APIs you consume, log generation is your responsibility. When your code makes an outbound request to QuickBooks or Xero, nothing automatically records it. You either wrap the HTTP call in your own logging layer, or route traffic through a proxy that captures it automatically. The log store is a separate concern: it might be a cloud service like Datadog or a self-hosted solution like the ELK stack.
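The wrapping approach can be sketched like this, with the HTTP client injected so the pattern is visible without a real network call. The type names and the outboundLogs sink are illustrative, not any particular library's API:

```typescript
// Minimal structural stand-ins for fetch's request/response shapes.
type HttpInit = { method?: string; headers?: Record<string, string>; body?: string };
type HttpResponse = { status: number; text(): Promise<string> };
type Fetcher = (url: string, init?: HttpInit) => Promise<HttpResponse>;

type OutboundLog = {
  timestamp: string;
  method: string;
  url: string;
  status: number;
  body: string;
  latency_ms: number;
  service: string;
  consumer_id: string;
};

// Stand-in for a real log store.
const outboundLogs: OutboundLog[] = [];

// Wrap every downstream call so the raw response body, latency, and
// tenant context are captured before the result reaches business logic.
async function loggedFetch(
  fetcher: Fetcher,
  url: string,
  init: HttpInit,
  ctx: { service: string; consumer_id: string },
): Promise<{ status: number; body: string }> {
  const start = Date.now();
  const res = await fetcher(url, init);
  const body = await res.text(); // with the real fetch, clone() first so callers can re-read
  outboundLogs.push({
    timestamp: new Date().toISOString(),
    method: init.method ?? "GET",
    url,
    status: res.status,
    body,
    latency_ms: Date.now() - start,
    service: ctx.service,
    consumer_id: ctx.consumer_id,
  });
  return { status: res.status, body };
}
```

Injecting the fetcher also makes the wrapper easy to test with a fake client, which matters for integration code that's otherwise hard to exercise locally.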
One common approach for integration teams is a centralized proxy layer that sits between your application and all downstream API calls. Every outbound request passes through it and is logged before being forwarded to the target API. This gives you a single place to capture logs across every integration without instrumenting each one individually.
What a useful API log entry contains
An API log entry is only as useful as the fields it carries. What matters is which fields make an entry actionable rather than merely archival.
At minimum, a log entry should capture:
- Timestamp with timezone (UTC throughout)
- HTTP method and full endpoint URL
- Request headers, minus sensitive credentials
- Request body, or a reference to it if you're not storing payloads in logs
- Response status code
- Response body
- Latency (time from request sent to response received)
- A correlation ID that ties the log to a specific customer and job
That last field is where most teams under-invest. A 429 from Xero means something very different if you're reading it alongside the other 48 requests from the same customer sync job. Without a correlation ID, you're reading isolated sentences from a story you can't follow.
For integration teams specifically, two additional fields often go missing: the downstream service name and the tenant or customer identifier. When you're syncing data for 200 customers across five accounting platforms, a log entry that says GET /invoices with a 200 response tells you almost nothing. A log entry that says GET /invoices on xero for customer tenant_88f3k with a 200 response in 340ms is one you can act on.
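Put together, these fields make a small schema. A TypeScript sketch, with field names chosen for illustration rather than taken from any particular logging product:

```typescript
// One entry per API transaction; field names are illustrative.
interface ApiLogEntry {
  timestamp: string;                        // ISO 8601, UTC throughout
  method: string;                           // HTTP method
  url: string;                              // full endpoint URL
  request_headers: Record<string, string>;  // credentials already redacted
  request_body?: unknown;                   // or a reference, if payloads live elsewhere
  status: number;                           // response status code
  response_body?: unknown;                  // raw downstream response
  latency_ms: number;                       // request sent to response received
  correlation_id: string;                   // ties the entry to a customer and job
  service: string;                          // downstream service name, e.g. "xero"
  consumer_id: string;                      // tenant / customer identifier
}
```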
Types of API logs
Not every log entry serves the same purpose. Teams dealing with complex integration infrastructure typically work with four categories:
Access logs record every request and response regardless of outcome. They're your audit trail: proof that specific requests were made and what the API returned. When compliance frameworks like SOC 2 and GDPR require you to demonstrate what data moved where, access logs are the answer.
Error logs filter for requests that resulted in failures: 4xx and 5xx status codes, along with timeouts and dropped connections. They're higher-signal than access logs for debugging, but you need access logs alongside them to understand the volume of failed requests relative to total traffic.
Performance logs focus on latency and throughput. They capture response times and sync durations. For integration teams running background sync jobs, a performance log that shows a customer's sync taking 45 seconds when it used to take 4 seconds is early warning of a rate limit or downstream API degradation.
Security logs track authentication events. When a token is issued or later revoked, that event should appear in a log with a timestamp and customer identifier. In an integration context, this is where you capture OAuth token revocations and failed signature validations.
Most modern logging setups capture all four in the same log stream and tag entries by type, rather than routing them to separate systems. That approach makes it easier to correlate events across categories when debugging a complex issue.
Why integration logging is different
If you're building your own REST API, logging is primarily an inward-looking activity. You own the endpoints and the response formats. Debugging means finding the right log line and reading the stack trace.
Third-party API integrations flip this model. The downstream API belongs to someone else. Its error formats and rate limit behavior are outside your control, and deprecation schedules are decided by the platform vendor. Your logs have to capture enough context about the downstream system's behavior to diagnose problems you didn't anticipate and can't reproduce locally.
Three categories of integration-specific logging are worth calling out:
Error body capture. QuickBooks returns validation errors as a JSON object with a Fault property. Xero returns them as an array of Elements with nested ValidationErrors. Salesforce wraps them in an errorCode field. If your logs store the raw response body, you can debug any of these. If they only store HTTP status codes, a 422 from QuickBooks and a 422 from Xero look identical even though they mean completely different things. Log the raw downstream response.
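When raw bodies are logged, a thin normalization layer can read all three formats. The shapes below follow the patterns described above, simplified for illustration; real responses carry more fields:

```typescript
// Pull human-readable messages out of each platform's error envelope.
// Shapes are simplified sketches of the documented formats.
function extractErrorMessages(service: string, body: any): string[] {
  switch (service) {
    case "quickbooks":
      // QuickBooks: { Fault: { Error: [{ Message }] } }
      return (body?.Fault?.Error ?? []).map((e: any) => e.Message);
    case "xero":
      // Xero: { Elements: [{ ValidationErrors: [{ Message }] }] }
      return (body?.Elements ?? []).flatMap((el: any) =>
        (el.ValidationErrors ?? []).map((v: any) => v.Message),
      );
    case "salesforce":
      // Salesforce: [{ errorCode, message }]
      return (Array.isArray(body) ? body : []).map((e: any) => e.message);
    default:
      return [];
  }
}
```

The point is that this function is only writable because the raw body was stored; no amount of parsing recovers detail from a bare 422.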
Rate limit events. According to a 2024 Postman survey, 66% of developers rely on API gateway logs for debugging production issues, and rate limits are one of the most common production issues in integration work. QuickBooks allows 500 requests per minute per tenant, and Xero allows 60. NetSuite takes a different approach, limiting concurrent requests per integration rather than applying a per-minute cap. When you hit these limits, you get a 429. When you log the 429 alongside the tenant identifier and the request volume over the preceding window, you can see whether the problem is your retry logic or a burst from a large customer. Without that context, you're guessing.
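Capturing "the request volume over the preceding window" takes only a small sliding-window counter per tenant. A sketch, with the 60-second window as an assumption:

```typescript
// Recent request timestamps per tenant, pruned to a sliding window.
const recentRequests = new Map<string, number[]>();

function recordRequest(tenant: string, now: number = Date.now()): void {
  const cutoff = now - 60_000; // preceding 60-second window
  const pruned = (recentRequests.get(tenant) ?? []).filter((t) => t > cutoff);
  pruned.push(now);
  recentRequests.set(tenant, pruned);
}

// On a 429, the entry carries tenant and recent volume, so you can tell
// a retry-loop bug from a genuine burst by a large customer.
function rateLimitLogEntry(tenant: string, service: string, now: number = Date.now()) {
  const cutoff = now - 60_000;
  return {
    timestamp: new Date(now).toISOString(),
    status: 429,
    service,
    consumer_id: tenant,
    requests_last_minute: (recentRequests.get(tenant) ?? []).filter((t) => t > cutoff).length,
  };
}
```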
Token refresh failures. OAuth tokens expire. Refresh tokens get revoked when a customer changes their accounting platform password. Connections break silently when the user who authorized the integration leaves the company. These are not errors your application generates; they're events that happen upstream. Logging every token refresh attempt alongside the outcome and the customer identifier is what allows your support team to tell a customer "your Xero connection was revoked on March 15th" rather than "we're not sure why your data stopped syncing."
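Logging those upstream events can be as simple as one dedicated entry per refresh attempt; the event name and outcome values here are illustrative:

```typescript
// Possible outcomes of a refresh attempt; names are illustrative.
type RefreshOutcome = "success" | "expired" | "revoked" | "network_error";

// One entry per refresh attempt, so "when did this connection break?"
// is answerable from the logs alone.
function tokenRefreshLogEntry(consumerId: string, service: string, outcome: RefreshOutcome) {
  return {
    timestamp: new Date().toISOString(),
    event: "token_refresh",
    service,
    consumer_id: consumerId,
    outcome,
  };
}
```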
Per-tenant visibility
Accounting integrations are multi-tenant by definition. Your product serves dozens or hundreds of customers, each with their own platform connection. The logs that help you debug one customer's broken sync will be useless if they're mixed with logs from every other customer without clean separation.
The practical requirement is per-tenant filtering. You should be able to enter a customer ID in your logging interface and see every API call that customer's integration has made, in chronological order, with status codes and latency. This sounds obvious. In practice, teams skip the tenant identifier in early versions and spend weeks retrofitting it after the first serious support incident.
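With a tenant identifier on every entry, the per-tenant view reduces to a filter and a sort. A sketch over an illustrative entry shape:

```typescript
type Entry = { consumer_id: string; timestamp: string; url: string; status: number; latency_ms: number };

// Everything one customer's integration did, in chronological order.
// ISO 8601 UTC timestamps sort correctly as plain strings.
function tenantTimeline(entries: Entry[], consumerId: string): Entry[] {
  return entries
    .filter((e) => e.consumer_id === consumerId)
    .sort((a, b) => a.timestamp.localeCompare(b.timestamp));
}
```

In a real log store this is a query rather than an in-memory scan, but the query is only possible if the consumer_id field exists on every entry.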
Apideck's Vault exposes a paginated list of request logs scoped to both application and consumer. That scoping lets you isolate a specific customer's traffic in seconds rather than filtering through application-wide logs, which matters when a customer calls support at 6 PM and needs an answer before morning.
What to actually watch for
Raw logs are data. The patterns in logs are information. Once you have reasonable coverage, the most actionable signals are error rate by connector and latency trends.
If 3% of requests to QuickBooks are returning 500 errors this morning and your baseline is under 0.1%, that's a platform incident you should know about before customers do. Tracking error rates per downstream service catches this faster than tracking them across your entire integration layer. Similarly, an endpoint that usually responds in 200ms and is now responding in 4 seconds is not broken, but it's about to cause timeouts for customers with large sync jobs. Latency rising before errors appear is a signal you can act on before anyone calls support.
The other pattern worth monitoring is repeated identical failures. A single 404 on a customer invoice endpoint might mean a deleted record. A hundred identical 404s on the same endpoint for the same customer within an hour is almost certainly a configuration problem or a stale resource reference. That pattern is only visible if you're looking at sequences, not individual lines.
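Spotting that sequence pattern is a grouping problem: count failures by tenant, status, and endpoint, then flag keys that cross a threshold. A sketch:

```typescript
type Failure = { consumer_id: string; url: string; status: number };

// Group failures by (tenant, status, endpoint); a high count on one key
// within a window is the "hundred identical 404s" pattern.
function repeatedFailures(entries: Failure[], threshold: number): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of entries) {
    if (e.status < 400) continue; // only failed requests
    const key = `${e.consumer_id} ${e.status} ${e.url}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  const flagged = new Map<string, number>();
  for (const [key, n] of counts) if (n >= threshold) flagged.set(key, n);
  return flagged;
}
```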
Auth error clustering is worth watching separately. When multiple customers lose their connections within the same hour, that often signals a platform change: a Xero OAuth scope update or a new QuickBooks credential rotation requirement. Catching this in your logs before customers report it gives you time to communicate first.
API logging best practices
A few principles separate useful logs from a log file nobody reads:
Log at the boundary. Capture the raw request and response at the point where your code touches the external API, not after you've processed or transformed the data. Transformation loses information. Raw logs preserve the full context you need when something unexpected happens.
Redact credentials, not content. Strip Authorization headers and any field that carries secrets. Keep everything else, including request bodies. The instinct to scrub sensitive-looking fields often goes too far and removes exactly the data you'll need to debug a validation error two weeks later.
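A minimal redaction pass over headers, assuming a small denylist; everything not on the list is preserved verbatim:

```typescript
// Header names that carry secrets; matched case-insensitively.
// The denylist here is a starting point, not an exhaustive set.
const SENSITIVE_HEADERS = new Set(["authorization", "x-api-key", "cookie", "set-cookie"]);

// Replace credential values, keep everything else untouched.
function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    out[name] = SENSITIVE_HEADERS.has(name.toLowerCase()) ? "[redacted]" : value;
  }
  return out;
}
```

A denylist of known credential headers is deliberately narrower than a pattern-based scrubber, which is exactly the point of this practice: it removes secrets without eating the payload fields you'll need later.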
Use structured formats. JSON logs are queryable. Plain text logs require parsing. If you're shipping logs to a centralized system like Datadog or the ELK stack, structured JSON lets you filter by status_code:422 or consumer_id:tenant_88f3k without writing regex.
Set retention that matches your support cycle. If your average support ticket is resolved within 30 days, 90-day log retention gives you headroom. If you're in a regulated industry and customers can file complaints months later, your compliance team sets this number.
Don't sample in development or staging. Sampling is a valid cost-reduction strategy for high-volume production APIs. For integration work, where issues often appear once and vanish, sampled logs miss the one request you needed to see.
Tooling
For teams building their own integration infrastructure, Datadog handles log aggregation well at scale. Its strength is correlating log data with APM traces, which is useful if your integration runs across complex infrastructure. The ELK stack gives you more query flexibility if you're willing to manage the operational overhead. Moesif is purpose-built for API logging and adds product analytics on top of raw log data.
For teams using a unified API platform, the logging layer is typically included. Apideck logs every request and response across all connectors, including webhooks, and surfaces per-customer, per-connector visibility through the Vault interface. Your support team can diagnose most integration issues without escalating to engineering, which means faster resolutions and customers who don't wait 48 hours for a root cause.
The underlying decision
Good API logging is an investment that returns value in proportion to how many integrations you support and how many customers are running them. For a product with two integrations and ten customers, a structured log pipeline and Datadog are probably enough. For a product with fifteen integrations and hundreds of customers, building and maintaining your own logging infrastructure starts to compete with building the product itself.
The accounting integration guide covers the broader reliability requirements for financial integrations, including sync strategies and rate limit management. If you're building on top of multiple accounting platforms, the logging decisions you make early will determine whether your support team can work independently or whether every customer escalation needs an engineer at a keyboard.
If integration observability is becoming a bottleneck, Apideck API Logs gives you per-tenant logging and error alerting across 200+ connectors, with no separate logging infrastructure to manage. Start a free 30-day trial and see what's actually happening inside your integrations.