The worst API integration challenges don't show up during development. A team building a cash flow dashboard for SMBs connected to Xero, FreshBooks, and Sage Intacct in their first year. By year two, two engineers were spending most of each sprint on maintenance. Not new features. Maintenance. The culprits were a FreshBooks field rename that turned amount_outstanding into outstanding_balance without a version bump, a Sage Intacct session expiry that only surfaced in multi-tenant setups, and a Xero cursor that expired mid-sync on accounts with more than 1,000 invoices. None of those failures were in the original build.
That pattern is common. The initial implementation is manageable: authenticate, map the data model, handle pagination, ship. What accumulates is maintenance driven by changes the provider controls. Four categories account for most of it.
Authentication breaks in ways the docs don't warn you about
The OAuth 2.0 flow itself isn't the problem. The edge cases are.
Xero access tokens expire after 30 minutes. QuickBooks Online access tokens expire after 60 minutes, and per Intuit's developer changelog, refresh tokens moved to daily rotation in late 2025. Sage Intacct uses a session-based model that doesn't follow standard REST token conventions at all. Each additional accounting platform means another auth lifecycle to manage and another set of failure modes to handle.
Token revocation creates a different class of failure. The trigger can be as simple as a user disconnecting their accounting software, or as broad as an auth infrastructure change from the provider. Your system sees a 401 and needs to decide: retry automatically or surface a re-auth prompt to the user. Most teams build this flow after the first production incident.
The fix for the most common case — access token expiry — is proactive refresh with a buffer window:
import time

def get_valid_token(connection):
    # Refresh proactively if the token expires within the next 5 minutes
    if connection.expires_at - time.time() < 300:
        connection = refresh_oauth_token(connection.refresh_token)
        save_connection(connection)  # persist both new tokens immediately
    return connection.access_token
This eliminates the failure mode where you make a request, receive a 401 mid-flight, and have to handle refresh and retry in the error path. Refresh in the happy path, not the error path.
Scope additions create a third scenario. Xero added the payroll.read scope to cover payroll data that was previously bundled under accounting.read. Teams with existing authorized connections couldn't read payroll data until users completed a new auth flow. Handling the transition — some connections with the new scope, some without — requires logic that was never part of the original spec.
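One way to handle the split is to persist the granted scopes with each connection and gate the payroll sync on them. A minimal sketch, assuming you store the scope string at auth time (mark_reauth_required and fetch_payroll_data are illustrative helpers, not a specific library's API):
def can_read_payroll(connection) -> bool:
    # Scopes granted when the user last authorized, persisted with the tokens
    granted = set(connection.granted_scopes.split())
    return "payroll.read" in granted

def sync_payroll(connection):
    if not can_read_payroll(connection):
        # Connection authorized before the scope existed: skip the sync
        # and flag the tenant so the UI can prompt for re-authorization
        mark_reauth_required(connection, missing_scope="payroll.read")
        return
    fetch_payroll_data(connection)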
Schema changes break sync without throwing errors
In early 2024, Harvest renamed client.name to client.display_name in their API response. There was no breaking change by their definition because both fields coexisted temporarily. Teams relying on client.name got empty strings. No error, no 400, just null data flowing downstream into reporting tables.
FreshBooks did something similar with invoice line items. The tax1_name and tax2_name fields were deprecated in favor of a taxes array. Old responses still included the flat fields for existing integrations, but new invoices created after the migration only populated taxes. Teams that hadn't updated their mapping got tax data on historical invoices but not new ones.
Defensive parsing handles both shapes without breaking during the transition period:
def parse_invoice_tax(invoice: dict) -> tuple:
    # New shape: taxes array (post-migration FreshBooks invoices)
    if invoice.get("taxes"):
        return (
            invoice["taxes"][0].get("name"),
            invoice["taxes"][0].get("amount", 0),
        )
    # Old shape: flat fields (pre-migration or legacy responses)
    return invoice.get("tax1_name"), invoice.get("tax1_amount", 0)
For accounting data, that kind of silent failure is worse than an error. An invoice that syncs with tax_amount: 0 because the field moved doesn't fail reconciliation — it passes with wrong numbers. That error gets caught at month-end, by a human, after the data has already been used. The API integration best practices post covers defensive mapping patterns that catch field-level changes before they reach your data layer.
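A lightweight check at the mapping boundary can turn the silent failure into a loud one. A minimal sketch, assuming a normalized invoice dict (field names here are illustrative, not the shape of any particular provider):
def validate_mapped_invoice(invoice: dict) -> list:
    # Collect warnings instead of silently writing suspect rows downstream
    warnings = []
    if not invoice.get("client_name"):
        warnings.append("client_name is empty; possible upstream field rename")
    if invoice.get("tax_name") and not invoice.get("tax_amount"):
        warnings.append("tax_name present but tax_amount is 0; possible schema migration")
    return warnings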
Pagination fails in edge cases you won't hit in development
Most staging environments have tens of records. Production accounts have thousands. The difference exposes pagination bugs that never appear in testing.
Xero's pagination uses page numbers with a 100-record limit. On an account with exactly 100 invoices, the first page returns 100 results with no hasNextPage flag, and some implementations treat that as complete. At 101 invoices, the second page exists — but if your logic stops when it gets a full page back rather than checking the flag, you miss the last batch. This is a real edge case that has caused missed-record bugs on Xero integrations at exactly the page-size boundary.
The correct termination condition uses the pagination flag, not the result count:
def fetch_all_invoices(xero_client):
    invoices, page = [], 1
    while True:
        result = xero_client.get_invoices(page=page, pageSize=100)
        invoices.extend(result.get("Invoices", []))
        # Don't use "got a full page of 100" as the continuation condition
        if not result.get("pagination", {}).get("hasNextPage"):
            break
        page += 1
    return invoices
NetSuite's SuiteQL pagination uses hasMore and offset, and the total count returned in the first response is an estimate. If records are deleted during traversal, the estimated count becomes incorrect. An integration syncing 5,000 transaction records that sees 200 deletions mid-sync can stop pagination early, leaving a gap that shows up in reconciliation but not in error logs.
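A sketch of that termination logic, treating the count as advisory and stopping on hasMore (the suiteql_query callable stands in for whatever NetSuite client you use):
def fetch_all_suiteql_rows(suiteql_query, sql: str, page_size: int = 1000):
    rows, offset = [], 0
    while True:
        result = suiteql_query(sql, limit=page_size, offset=offset)
        rows.extend(result.get("items", []))
        # totalResults is an estimate; never use it as the stop condition
        if not result.get("hasMore"):
            break
        offset += page_size
    return rows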
Cursor expiry is its own problem. Some providers issue cursors that expire after a fixed window. If a large QuickBooks Online account sync takes longer than expected due to rate limiting or network conditions, a mid-traversal cursor expiry forces a restart from page one. Without deduplication logic, that produces duplicate records. The third-party API integration post covers rate limit strategies across multi-provider setups.
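If a restart from page one is unavoidable, upserting on the provider's record ID keeps re-fetched pages from turning into duplicates. A minimal sketch using SQLite-style upsert, with field names borrowed from QuickBooks Online's invoice shape (the local table layout is illustrative):
def upsert_invoice(db, remote_invoice: dict):
    # Key on the provider's immutable ID so a restarted sync overwrites
    # rows it has already seen instead of inserting them again
    db.execute(
        """
        INSERT INTO invoices (remote_id, total, remote_updated_at)
        VALUES (?, ?, ?)
        ON CONFLICT (remote_id) DO UPDATE
        SET total = excluded.total, remote_updated_at = excluded.remote_updated_at
        """,
        (
            remote_invoice["Id"],
            remote_invoice["TotalAmt"],
            remote_invoice["MetaData"]["LastUpdatedTime"],
        ),
    )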
Webhooks are reliable enough to trust and unreliable enough to verify
Stripe's webhook delivery is explicitly at-least-once per their documentation. That means a charge.succeeded event may arrive twice. If your payment confirmation handler isn't idempotent — checking whether you've already processed the event ID before writing — you can record the same payment twice. Stripe provides an event ID on every webhook payload specifically to enable this check:
from django.db import transaction

def handle_charge_succeeded(event_id: str, payload: dict):
    if ProcessedEvent.objects.filter(event_id=event_id).exists():
        return  # duplicate delivery — already handled
    with transaction.atomic():
        record_payment(payload["data"]["object"])
        ProcessedEvent.objects.create(event_id=event_id)
Xero sends full payloads on its webhooks, which makes handler logic straightforward. QuickBooks Online sends only a notification: entity ID and operation type. Your handler then makes a follow-up API call to fetch the current record state. Under high event volume — a batch payroll run, a bulk invoice import — those follow-up calls can hit rate limits, creating a queue of pending fetches that lags minutes behind real-time.
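One way to keep the handler itself responsive is to acknowledge the notification immediately and queue the follow-up reads, so rate-limited fetches back up in a worker queue rather than in the webhook response path. A sketch against QuickBooks Online's notification shape (enqueue_fetch is an assumed task-queue helper):
def handle_qbo_webhook(payload: dict) -> int:
    # QuickBooks Online sends entity IDs and operations, not full records
    for notification in payload.get("eventNotifications", []):
        realm_id = notification["realmId"]
        for entity in notification["dataChangeEvent"]["entities"]:
            # Defer the follow-up read so a burst of events doesn't
            # exhaust the API rate limit inside the handler
            enqueue_fetch(realm_id, entity["name"], entity["id"], entity["operation"])
    return 200  # acknowledge fast; the worker does the slow part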
Xero signs webhook payloads with HMAC-SHA256 using a webhook signing key. Stripe uses its own HMAC-SHA256 mechanism via Stripe-Signature. Verifying the signature before processing is a one-time implementation that eliminates the attack surface:
import hmac, hashlib, base64

def verify_xero_signature(payload_bytes: bytes, header_sig: str, signing_key: str) -> bool:
    expected = hmac.new(signing_key.encode(), payload_bytes, hashlib.sha256).digest()
    # compare_digest avoids leaking timing information on mismatches
    return hmac.compare_digest(base64.b64encode(expected).decode(), header_sig)
If you're not verifying signatures on inbound events, any system that can reach your webhook endpoint can trigger your handler. On a multi-tenant accounting integration where a single event can write to multiple company records, that's a meaningful exposure.
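For Stripe, the official Python library wraps the equivalent check. One way to wire it in (endpoint_secret is the signing secret from your webhook settings):
import stripe

def verify_stripe_event(payload_bytes: bytes, sig_header: str, endpoint_secret: str):
    try:
        return stripe.Webhook.construct_event(payload_bytes, sig_header, endpoint_secret)
    except stripe.error.SignatureVerificationError:
        return None  # reject the request; the signature did not match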
Solving API integration challenges at scale
Each of these challenges is solvable individually. You build a token refresh manager. You write a schema validation layer that catches unexpected nulls before they hit your database. You implement cursor checkpointing. You add idempotency keys to webhook handlers.
The question is whether building and maintaining that infrastructure is the actual product you're trying to ship. Most teams supporting multiple accounting platforms find that it isn't — it's a prerequisite for the product.
A unified API handles that layer. A single auth model and normalized data schema for all providers, with one webhook format regardless of the source. When a provider renames a field, the normalized schema doesn't change on your side. When a token rotation policy shifts, that's the unified API vendor's problem to absorb.
The custom vs. unified tradeoff changes once you're past three or four accounting platforms. If you're evaluating where your stack sits, the 30-day free trial is the fastest way to compare directly.
Ready to get started?
Scale your integration strategy and deliver the integrations your customers need in record time.