Most integration teams get the happy path working in the first sprint. The rest of the work shows up later: token refresh cycles, pagination edge cases, idempotent writes, rate limit handling under sustained load. Often that work surfaces only when a customer reports a problem in production.
The sequencing makes this predictable. The behaviors that matter most for integration reliability are invisible in development and testing. Your test environment has 50 records and a stable network. Production has 50,000 records, a QuickBooks rate limit that varies by subscription tier, and a webhook endpoint that was briefly down during a deployment last Tuesday. These API integration best practices are written for developers building integrations with accounting platforms and payment processors, not generic horizontal SaaS.
Authentication best practices: build for the token lifecycle
Most integrations implement OAuth correctly for the initial authorization flow. The failure point is token management afterward.
Access tokens expire. QuickBooks Online tokens expire after 60 minutes; Xero tokens expire in 30 minutes. If your integration stores the access token without implementing refresh logic, requests start returning 401s when the token expires. The user doesn't see a clear error. They see data that stopped syncing.
The correct pattern: store both the access token and refresh token alongside the expiry timestamp. Before every API call, check whether the access token is within a few minutes of expiry. If it is, refresh proactively before making the request. This eliminates the failure mode where you make a request, get a 401, and have to handle refresh mid-flight.
import time

def get_valid_token(connection):
    # Refresh proactively if the token is within 5 minutes of expiry
    if connection.expires_at - time.time() < 300:
        connection = refresh_token(connection.refresh_token)
    return connection.access_token
For API key-based auth (common across many accounting integrations), the practice most teams skip is rotation planning. API keys don't expire on their own, which makes them feel lower-maintenance than OAuth. But when a key is compromised or a team member with access leaves, you want a rotation procedure that doesn't require an hour of coordination. Store API keys in environment variables or a secrets manager, never in application code or config files committed to version control.
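As a minimal sketch of that practice, the snippet below loads a key from the environment and fails loudly when it is missing. The variable name is illustrative, not tied to any particular provider:

```python
import os

def get_api_key():
    # "ACCOUNTING_API_KEY" is a placeholder name; use whatever your deployment defines
    key = os.environ.get("ACCOUNTING_API_KEY")
    if key is None:
        raise RuntimeError("ACCOUNTING_API_KEY is not set; check your secrets configuration")
    return key
```

Failing at startup with a clear message beats a cryptic 401 from the provider halfway through a sync.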
For customer-facing integrations where you're managing credentials on behalf of users, Apideck Vault handles token storage and lifecycle across connectors without your application touching raw credentials directly. For a deeper look at the authentication methods used across different accounting APIs (OAuth, API keys, HMAC, OIDC), see our dedicated guide, which covers each one in detail.
Pagination: handle every pattern
APIs implement pagination differently. Offset-based pagination (?page=2&limit=100) is intuitive but breaks at scale: if records are added or deleted between requests, you can miss records or process duplicates. Cursor-based pagination (where the API returns a next_cursor value you pass on the next request) is more reliable for large datasets.
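A cursor-based loop, sketched with illustrative field names (`next_cursor`, `results`) since the exact keys vary by provider:

```python
def fetch_all(client):
    """Follow next_cursor until the API stops returning one."""
    records = []
    cursor = None
    while True:
        page = client.list_records(cursor=cursor, limit=100)
        records.extend(page["results"])
        cursor = page.get("next_cursor")
        if cursor is None:  # no cursor means this was the last page
            break
    return records
```

Because the cursor encodes a stable position in the dataset, records created or deleted mid-sync don't shift the window the way an offset does.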
QuickBooks Online and Xero both use offset-based pagination. When syncing invoices from a customer's account, you need to handle results that come in under the page size limit, APIs that don't return total record counts (you only learn there's no next page when you get an empty result), and termination conditions based on has_more rather than result count alone.
A reliable pagination loop:
def fetch_all_invoices(client):
    invoices = []
    offset = 0
    page_size = 100
    while True:
        page = client.get_invoices(offset=offset, limit=page_size)
        invoices.extend(page.results)
        if len(page.results) < page_size or not page.has_more:
            break
        offset += page_size
    return invoices
One pattern that breaks in production: assuming that if len(results) == page_size, there's always another page. APIs sometimes return exactly page_size results on the final page. Build your termination condition around has_more or a next_cursor field when available.
Idempotency: required for financial writes
When your integration creates an invoice, syncs a payment, or posts a journal entry, network failures can leave you uncertain whether the operation succeeded. A naive retry creates duplicate records in your customer's accounting system. That's a support issue that takes real time to unwind.
Some financial APIs support idempotency keys directly. Stripe accepts an Idempotency-Key header; if you send the same key on a retry, Stripe returns the original response rather than processing the request again. QuickBooks Online doesn't have a native idempotency header, so you implement it at the application layer:
- Generate a UUID for each operation before sending the request.
- Store that UUID with the pending state in your own database.
- On success, update to completed with the remote ID returned.
- On failure, check the remote system for the record before retrying, using your internal reference ID, amount, date, and description as the lookup key.
This is more work than a simple API call, but accounting integrations have low tolerance for duplicate records. A billing platform that double-posts revenue to a customer's general ledger will hear about it.
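The four steps above can be sketched as follows. The `db` and `api` objects are placeholders for your own persistence layer and API client, and `ConnectionError` stands in for whatever transport exception your HTTP library raises:

```python
import uuid

def create_invoice_idempotent(db, api, invoice_data):
    # 1. Generate an operation ID and record it as pending before calling out
    op_id = str(uuid.uuid4())
    db.save_operation(op_id, status="pending", payload=invoice_data)
    try:
        remote = api.create_invoice(invoice_data, reference=op_id)
    except ConnectionError:
        # 4. Uncertain outcome: check the remote system before retrying
        remote = api.find_invoice(reference=op_id)
        if remote is None:
            raise  # the write genuinely failed; safe to retry later
    # 3. Mark completed with the remote ID so future retries become no-ops
    db.save_operation(op_id, status="completed", remote_id=remote["id"])
    return remote["id"]
```

The pending record is the key piece: it survives a process crash, so a recovery job can find operations stuck in `pending` and run the same lookup-before-retry check.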
Error handling: separate 4xx from 5xx
4xx errors are your fault or your user's fault. 5xx errors are the upstream API's fault. Retry logic should reflect this distinction.
For 4xx responses:
- 400 Bad Request: your payload is malformed. Retrying won't help. Log the full request payload and fix the code.
- 401 Unauthorized: your token is expired or invalid. Refresh the token, then retry once.
- 403 Forbidden: the token is valid but lacks the required scope. Surface this to the user as a configuration issue, not a transient error to retry.
- 404 Not Found: the record doesn't exist. If it's a lookup, handle the empty case. If it's an update, the remote record was deleted.
- 429 Too Many Requests: you've hit the rate limit. Back off using the Retry-After header value.
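The decision list above can be collapsed into one dispatch function. This is a hedged sketch: the `refresh` and `retry_once` callables and the response shape are placeholders for your own client:

```python
import time

def handle_client_error(status, response, refresh, retry_once):
    # Map each 4xx class to the action described above
    if status == 400:
        raise ValueError(f"Malformed request: {response}")  # fix the code, don't retry
    if status == 401:
        refresh()             # refresh the token...
        return retry_once()   # ...then retry exactly once
    if status == 403:
        raise PermissionError("Token lacks required scope; surface as a config issue")
    if status == 404:
        return None           # let the caller handle the missing record
    if status == 429:
        wait = int(response.headers.get("Retry-After", "1"))
        time.sleep(wait)      # honor the server's requested backoff
        return retry_once()
    raise RuntimeError(f"Unhandled client error {status}")
```

The point of centralizing this is that a 403 never silently loops through retry logic, and a 401 retries at most once instead of hammering the API with an expired token.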
For 5xx errors, retry with exponential backoff plus jitter:
import random
import time

def retry_with_backoff(fn, max_retries=5):
    for attempt in range(max_retries):
        try:
            return fn()
        except ServerError:
            if attempt == max_retries - 1:
                raise
            wait = (2 ** attempt) + random.uniform(0, 1)
            time.sleep(wait)
The random.uniform component (jitter) prevents the thundering herd problem: without it, multiple failed requests all retry at the same interval and hit the API simultaneously.
API rate limiting: read the headers on every response
Every accounting API publishes rate limit information in response headers. QuickBooks Online returns rate limit headers when requests approach the limit; Xero returns them on every response (enforcing 60 calls per minute and 5,000 per day per organization). If your integration ignores these headers, you'll hit the limit hard and get blocked rather than pacing requests naturally.
A rate-limit-aware client reads the remaining count on each response. When remaining drops below a threshold, it slows down proactively rather than waiting for a 429. When it receives a 429, it reads Retry-After and waits exactly that long before retrying.
QuickBooks Online's per-minute limits vary by subscription tier. A Simple Start account has a lower limit than a Plus account. If you assume a fixed limit and run the same request cadence against all customers, you'll hit limits on lower-tier accounts without understanding why. Build handling that adapts to the headers returned, not to a hardcoded assumption.
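A sketch of that adaptive pacing, using a generic `X-RateLimit-Remaining` header as an illustration; the actual header names differ per provider, so check the docs for each connector:

```python
import time

def paced_request(send, min_remaining=5, pause=1.0):
    """Send a request, then slow down if the remaining-call budget is low.
    `send` is a zero-argument callable returning a response object."""
    response = send()
    remaining = response.headers.get("X-RateLimit-Remaining")
    if remaining is not None and int(remaining) < min_remaining:
        time.sleep(pause)  # pace proactively instead of waiting for a 429
    return response
```

Because the threshold reads what the server reports rather than a hardcoded limit, the same client slows down earlier on a lower-tier account without any per-customer configuration.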
For large initial syncs, queue write operations rather than firing them in parallel. Pulling three months of transactions for a new connection involves a lot of requests across invoices and payments. A queue with a controlled dispatch rate is more reliable than parallel batches competing for the same rate limit budget.
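A minimal version of that controlled dispatch, assuming each queued operation is a zero-argument callable; real implementations usually sit behind a job queue, but the pacing logic is the same:

```python
import time
from collections import deque

def drain_queue(operations, per_minute=60):
    """Dispatch queued operations at a fixed rate instead of in parallel."""
    interval = 60.0 / per_minute
    queue = deque(operations)
    results = []
    while queue:
        op = queue.popleft()
        results.append(op())
        if queue:
            time.sleep(interval)  # stay inside the per-minute budget
    return results
```

A single dispatcher with a known rate is easy to reason about; parallel workers sharing one rate limit budget are not.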
Schema drift: subscribe to changelogs and parse defensively
APIs change. Accounting APIs change often. Regulatory requirements and feature launches create constant pressure to modify data models.
Scope requirements for QuickBooks Online have evolved as Intuit has added features and deprecated legacy access patterns. Xero introduced granular scopes for apps created from March 2026 onward, changing how permissions are requested and authorized. Integrations that don't account for these changes break for new user authorizations while continuing to work for existing ones, which makes them particularly hard to diagnose.
Two practices protect against most drift. Subscribe to the API changelog: QuickBooks Online Developer and Xero Developer both publish release notes and notification feeds. Parse defensively: don't fail if a response contains unexpected fields. A JSON parser configured to error on unknown properties will break the first time a provider adds a new field to their response.
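Defensive parsing in its simplest form is extracting only the fields you use and ignoring the rest. The field names below loosely mirror QuickBooks-style invoice responses but are illustrative:

```python
def parse_invoice(payload):
    """Pull only the fields we need; silently ignore anything the provider adds later."""
    return {
        "id": payload.get("Id"),
        "total": payload.get("TotalAmt"),
        "currency": payload.get("CurrencyRef", {}).get("value"),
    }
```

When the provider ships a new field next quarter, this parser keeps working; a strict schema validator configured to reject unknown properties would not.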
A third practice rounds these out: test against staging with realistic data volumes on a schedule, not just at build time. Minimal test datasets miss edge cases that only show up at production scale.
Webhook reliability: verify signatures, handle replays
Webhooks require a different reliability model than polling. With polling, you control when you ask. With webhooks, the upstream API decides when to send data, and your job is to receive it reliably even during brief outages.
Verify every webhook payload signature before processing it. Most accounting and payment APIs include HMAC signatures in their headers. Stripe uses a Stripe-Signature header; QuickBooks Online sends an intuit-signature header computed from the payload with your app's verifier token. An unverified webhook endpoint accepts spoofed events. For integrations that post journal entries on payment receipt, a spoofed payment event could trigger incorrect financial records.
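The core of signature verification is the same across providers: recompute the HMAC over the raw body and compare in constant time. This is a generic sketch; the exact header format, encoding (hex vs. base64), and any timestamp component vary by provider:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature_header: str, secret: str) -> bool:
    """Generic HMAC-SHA256 check against the raw request body."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(expected, signature_header)
```

Two details matter in practice: verify against the raw bytes of the body (re-serializing parsed JSON changes the signature), and always use a constant-time comparison rather than `==`.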
Respond with 200 immediately, then process asynchronously. Stripe requires a response within 30 seconds; most accounting APIs enforce similar windows. If your processing involves slow database writes or downstream API calls, you'll miss the timeout and trigger retries. Acknowledge receipt first, then handle the payload in a background worker.
Expect replays. Providers retry on non-200 responses and network timeouts. Your handler needs to be idempotent: processing the same event twice should produce the same result as processing it once. Store the event ID and check it before processing to deduplicate.
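A sketch of that dedup check. Here `seen_ids` is an in-memory set standing in for a persistent store (a database table or Redis set), and `process` is your real business logic:

```python
def handle_event(event, seen_ids, process):
    """Idempotent webhook handler: skip events whose ID was already processed."""
    if event["id"] in seen_ids:
        return "duplicate"       # replay: already handled, do nothing
    seen_ids.add(event["id"])
    process(event)               # run the actual side effects once
    return "processed"
```

In production you'd record the event ID and process the event in the same transaction, so a crash between the two can't leave an ID marked as handled without its side effects.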
When to stop building and start abstracting
These practices get one integration to production reliability. The maintenance math changes when you need five, then ten. Each one needs its own OAuth implementation, its own rate limit handler, its own pagination logic, its own drift monitoring.
Unified APIs normalize authentication, pagination, and data models across accounting platforms. You implement these patterns once against a single endpoint, and the connector layer handles the provider-specific implementations. For vertical SaaS teams supporting whatever accounting system their customers use, this is where the build vs. buy calculation changes. The API integration cost analysis covers that math in detail.
If you want to see how it works in practice, Apideck's 30-day free trial gives you access to the full connector catalog and sandbox environments for QuickBooks and Xero, along with 20+ other platforms.
Ready to get started?
Scale your integration strategy and deliver the integrations your customers need in record time.