
What is API Pagination?

Struggling with slow APIs and massive JSON payloads? API pagination is the key to scaling performance, cutting response times, and keeping clients fast and reliable. Learn how to implement pagination strategies (offset, cursor, page-based) that handle millions of records without crashing your server or browser.

Saurabh Rai

11 min read

You've just built an API endpoint that returns user data. Works great in development with 50 test records. Then production hits, and suddenly you're trying to return 100,000 user records in a single response. Your server chokes, the client browser freezes, and your monitoring dashboard lights up like a Christmas tree. Sound familiar?

This is where API pagination saves your bacon. Let's dig into what it actually is, why you need it, and how to implement it properly.

Understanding API Pagination

API pagination splits large datasets into smaller, sequential chunks. Instead of dumping 100,000 records in one massive JSON response, you return manageable pages of data. Think of it like breaking a 1000-page book into chapters instead of forcing readers to consume everything at once.

At its core, pagination involves three key components:

  1. Page size: How many items to return per request
  2. Position marker: Where you are in the dataset
  3. Navigation method: How to move between pages

The position marker varies with your pagination strategy: it could be a simple page number, an offset value, or an opaque cursor token. The navigation method determines how clients request the next chunk of data.

Here's what pagination looks like in practice. Without pagination:

GET /api/users
Returns: 100,000 user objects (50MB response, 8-second load time)

With pagination:

GET /api/users?limit=100&page=1
Returns: 100 user objects (50KB response, 200ms load time)

The difference is night and day for both server performance and client experience.

Why Use API Pagination? The Real Benefits

Let's cut through the fluff and talk about why pagination actually matters for production systems.

Server Resource Management

Your database can handle a SELECT query for a million rows. But serializing those million rows into JSON, holding them in memory, and transmitting them over the network? That's where things fall apart.

Without pagination, a single heavy request can:

  • Spike memory usage to dangerous levels
  • Block other requests while processing
  • Trigger timeout errors under load
  • Create unpredictable response times

With pagination, you maintain consistent, predictable resource usage. Each request handles a fixed maximum amount of data. Your ops team will thank you when the server doesn't crash during Black Friday traffic.

Network Optimization

Consider mobile users on spotty internet connections. Downloading a 50MB JSON response isn't just slow; it might fail entirely. Network interruptions, proxy timeouts, and data caps all become real problems with large responses.

Pagination keeps response sizes reasonable. A 100KB paginated response downloads reliably even on poor connections. Users see data faster, retry less often, and consume less bandwidth.

Client Performance

JavaScript applications struggle with large datasets. Parsing massive JSON payloads blocks the main thread. Rendering thousands of DOM elements destroys scrolling performance. Memory usage balloons until the browser tab crashes.

Paginated data arrives in digestible chunks. The UI stays responsive. Virtual scrolling and infinite scroll patterns become possible. Users can actually interact with your application instead of watching it freeze.
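
To make that concrete, here's a minimal infinite-scroll sketch using IntersectionObserver. It assumes the PaginatedAPIClient defined later in this post, plus a hypothetical renderItems function and sentinel element for your UI:

let page = 1;
const sentinel = document.querySelector("#load-more-sentinel"); // hypothetical element at the end of the list

const observer = new IntersectionObserver(async ([entry]) => {
  if (!entry.isIntersecting) return;
  const result = await apiClient.fetchPage(page++);
  renderItems(result.items); // hypothetical: append the new items to the DOM
  if (!result.hasNext) observer.disconnect(); // nothing left to load
});

observer.observe(sentinel);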

Database Query Optimization

Database queries with LIMIT clauses run faster than unbounded queries. The query planner can optimize better. Indexes work more efficiently. You avoid expensive full table scans.

This becomes critical with complex queries involving joins, aggregations, or sorting. The difference between SELECT * FROM orders and SELECT * FROM orders LIMIT 100 can be seconds versus milliseconds.
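
As a minimal sketch with node-postgres (pg), a bounded, parameterized query looks like this; the orders table and its columns are illustrative:

import pg from "pg";

const pool = new pg.Pool(); // connection settings come from PG* environment variables

// With an index on created_at, the planner can stop after `limit` rows
// instead of scanning and sorting the whole table
const { rows } = await pool.query(
  "SELECT id, total, created_at FROM orders ORDER BY created_at DESC LIMIT $1 OFFSET $2",
  [100, 0] // limit, offset
);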

Caching Opportunities

Small, paginated responses cache well. You can cache individual pages at the CDN level, in Redis, or in browser storage. Cache invalidation becomes granular: update only affected pages instead of busting the entire dataset cache.
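
As a rough sketch with node-redis (v4+), you can cache each page under its own key and invalidate only the pages a write touches; the key scheme here is illustrative:

import { createClient } from "redis";

const redis = createClient(); // connection details come from your environment/config
await redis.connect();

const pageKey = (page, limit) => `products:page:${page}:limit:${limit}`;

async function getCachedPage(page, limit, fetchFn) {
  const cached = await redis.get(pageKey(page, limit));
  if (cached) return JSON.parse(cached);

  const fresh = await fetchFn(page, limit); // hits the database on a cache miss
  await redis.set(pageKey(page, limit), JSON.stringify(fresh), { EX: 60 }); // 60-second TTL
  return fresh;
}

// Granular invalidation: bust only the affected page, not the whole dataset
const invalidatePage = (page, limit) => redis.del(pageKey(page, limit));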

JavaScript Implementation: Code That Actually Works

Let's build pagination implementations for both the client and server sides. I'll show you patterns that hold up in production, not just in tutorials.

Client-Side: Fetching Paginated Data

Here's a robust pagination handler for the frontend:

class PaginatedAPIClient {
  constructor(baseURL, pageSize = 20) {
    this.baseURL = baseURL;
    this.pageSize = pageSize;
    this.cache = new Map();
  }

  async fetchPage(pageNumber, options = {}) {
    const cacheKey = `${pageNumber}-${JSON.stringify(options)}`;

    // Return cached page if available
    if (this.cache.has(cacheKey) && !options.forceRefresh) {
      return this.cache.get(cacheKey);
    }

    const params = new URLSearchParams({
      page: String(pageNumber),
      limit: String(this.pageSize),
      ...(options.filters || {}),
    });

    try {
      const response = await fetch(`${this.baseURL}?${params}`, {
        signal: options.signal, // Support request cancellation
      });

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      const data = await response.json();

      const result = {
        items: data.items,
        currentPage: pageNumber,
        totalPages: data.totalPages,
        totalItems: data.totalItems,
        hasNext: pageNumber < data.totalPages,
        hasPrevious: pageNumber > 1,
      };

      // Cache the normalized result so cache hits return the same shape
      this.cache.set(cacheKey, result);

      return result;
    } catch (error) {
      console.error("Pagination fetch failed:", error);
      throw error;
    }
  }

  // Fetch multiple pages concurrently
  async fetchPageRange(startPage, endPage) {
    const pagePromises = [];
    for (let page = startPage; page <= endPage; page++) {
      pagePromises.push(this.fetchPage(page));
    }
    return Promise.all(pagePromises);
  }

  clearCache() {
    this.cache.clear();
  }
}

// Usage example
const apiClient = new PaginatedAPIClient("/api/products");

// Simple page fetch
const page1 = await apiClient.fetchPage(1);
console.log(`Showing ${page1.items.length} of ${page1.totalItems} products`);

// Prefetch next page for smooth scrolling
if (page1.hasNext) {
  apiClient.fetchPage(2).catch(() => {}); // Prefetch silently; ignore failures
}

Server-Side: Node.js Pagination with Error Handling

Here's a production-ready Express endpoint with offset pagination:

// Pagination middleware
function paginationMiddleware(req, res, next) {
  const page = parseInt(req.query.page, 10) || 1;
  const limit = parseInt(req.query.limit, 10) || 20;

  // Enforce reasonable limits
  const maxLimit = 100;
  const validLimit = Math.min(Math.max(1, limit), maxLimit);
  const skip = (Math.max(1, page) - 1) * validLimit;

  req.pagination = {
    page: Math.max(1, page),
    limit: validLimit,
    skip,
    offset: skip, // Alias for skip
  };

  next();
}

// Pagination response builder
function buildPaginatedResponse(data, totalCount, pagination) {
  // At least 1 page, so the "last" link stays valid for empty result sets
  const totalPages = Math.max(1, Math.ceil(totalCount / pagination.limit));
  const currentPage = pagination.page;

  return {
    data,
    pagination: {
      currentPage,
      pageSize: pagination.limit,
      totalPages,
      totalItems: totalCount,
      hasNextPage: currentPage < totalPages,
      hasPreviousPage: currentPage > 1,
    },
    links: {
      first: `?page=1&limit=${pagination.limit}`,
      last: `?page=${totalPages}&limit=${pagination.limit}`,
      next:
        currentPage < totalPages
          ? `?page=${currentPage + 1}&limit=${pagination.limit}`
          : null,
      previous:
        currentPage > 1
          ? `?page=${currentPage - 1}&limit=${pagination.limit}`
          : null,
    },
  };
}

// Actual endpoint implementation
app.get("/api/products", paginationMiddleware, async (req, res) => {
  try {
    const { skip, limit } = req.pagination;
    const filters = buildFilters(req.query); // Your filter logic

    // Parallel execution for performance
    const [products, totalCount] = await Promise.all([
      Product.find(filters)
        .sort({ createdAt: -1 })
        .skip(skip)
        .limit(limit)
        .lean(), // Faster queries with lean()
      Product.countDocuments(filters),
    ]);

    const response = buildPaginatedResponse(products, totalCount, req.pagination);

    // Set cache headers for GET requests
    res.set("Cache-Control", "private, max-age=60");
    res.json(response);
  } catch (error) {
    console.error("Pagination error:", error);
    res.status(500).json({
      error: "Failed to fetch paginated data",
      message: process.env.NODE_ENV === "development" ? error.message : undefined,
    });
  }
});
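
A request against this endpoint would produce something like the following (the counts are illustrative):

GET /api/products?page=2&limit=20

{
  "data": [ ...20 products... ],
  "pagination": {
    "currentPage": 2,
    "pageSize": 20,
    "totalPages": 50,
    "totalItems": 1000,
    "hasNextPage": true,
    "hasPreviousPage": true
  },
  "links": {
    "first": "?page=1&limit=20",
    "last": "?page=50&limit=20",
    "next": "?page=3&limit=20",
    "previous": "?page=1&limit=20"
  }
}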

Cursor-Based Pagination for Real-Time Data

When dealing with frequently changing datasets, cursor-based pagination provides consistency:

// Cursor-based implementation
app.get("/api/feed", async (req, res) => {
  const limit = Math.min(parseInt(req.query.limit, 10) || 20, 100);
  const cursor = req.query.cursor;

  try {
    let query = {};

    // Decode and apply cursor if provided
    if (cursor) {
      let cursorData;
      try {
        cursorData = JSON.parse(Buffer.from(cursor, "base64").toString("utf-8"));
      } catch {
        // A malformed cursor is a client error, not a server failure
        return res.status(400).json({ error: "Invalid cursor" });
      }

      query = {
        _id: { $lt: cursorData.lastId },
      };
    }

    const posts = await Post.find(query)
      .sort({ _id: -1 })
      .limit(limit + 1) // Fetch one extra to check if more exist
      .lean();

    const hasMore = posts.length > limit;
    const items = hasMore ? posts.slice(0, -1) : posts;

    let nextCursor = null;
    if (hasMore && items.length > 0) {
      const lastItem = items[items.length - 1];
      const cursorData = { lastId: lastItem._id };
      nextCursor = Buffer.from(JSON.stringify(cursorData)).toString("base64");
    }

    res.json({
      items,
      nextCursor,
      hasMore,
    });
  } catch (error) {
    console.error("Cursor pagination error:", error);
    res.status(500).json({ error: "Failed to fetch feed" });
  }
});
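
Client usage follows a simple rhythm: the first request omits the cursor, and each response hands back the token for the next page (the cursor value below is a placeholder):

# First page
GET /api/feed?limit=20
Returns: { "items": [...], "nextCursor": "<opaque-base64-token>", "hasMore": true }

# Next page: pass the token back verbatim
GET /api/feed?limit=20&cursor=<opaque-base64-token>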

How Apideck Handles Pagination Across 200+ APIs

Apideck provides a unified API that connects to over 200 different third-party APIs, each with its own pagination method: HubSpot uses cursors, Pipedrive uses offsets, and Microsoft Dynamics uses link-based pagination. We smooth over these differences by abstracting every strategy behind cursor-based pagination with base64-encoded cursors, an approach we cover in depth in our detailed guide.

The Cursor Abstraction Layer

When you request data from Apideck's unified API, you always use the same pagination interface:

# First request
GET https://unify.apideck.com/crm/leads?limit=50

# Response includes an encoded cursor
{
  "data": [...],
  "meta": {
    "cursors": {
      "next": "cGlwZWRyaXZlOjpvZmZzZXQ6OjUw"
    }
  }
}

# Next page request
GET https://unify.apideck.com/crm/leads?limit=50&cursor=cGlwZWRyaXZlOjpvZmZzZXQ6OjUw

That cursor cGlwZWRyaXZlOjpvZmZzZXQ6OjUw decodes to pipedrive::offset::50. Apideck's backend recognizes this format and translates it to Pipedrive's native pagination: GET https://api.pipedrive.com/v1/leads?start=50&limit=50.

For HubSpot's cursor-based API, the cursor might decode to hubspot::cursor::7151. For page-based APIs like Copper, it becomes copper::page::5.
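
This isn't Apideck's actual implementation, but a sketch of that provider::strategy::position format could look like this:

// Illustrative only: encoding and decoding cursors in the format described above
function encodeCursor(provider, strategy, position) {
  return Buffer.from(`${provider}::${strategy}::${position}`).toString("base64");
}

function decodeCursor(cursor) {
  const [provider, strategy, position] = Buffer.from(cursor, "base64")
    .toString("utf-8")
    .split("::");
  return { provider, strategy, position };
}

encodeCursor("pipedrive", "offset", 50); // => "cGlwZWRyaXZlOjpvZmZzZXQ6OjUw"
decodeCursor("cGlwZWRyaXZlOjpvZmZzZXQ6OjUw"); // => { provider: "pipedrive", strategy: "offset", position: "50" }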

API Augmentation Beyond Native Limits

Here's where it gets clever. Some APIs cap responses at 100 items per request, yet Apideck lets you request up to 200. How? It makes multiple parallel requests behind the scenes and stitches the results together.

Your request for 200 items might trigger two backend calls:

  1. Fetch items 1-100 from the third-party API
  2. Fetch items 101-200 from the third-party API
  3. Combine results and return with a single cursor for position 200

This augmentation happens transparently. You get consistent behavior across all integrated APIs, regardless of their individual limitations.
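
A simplified sketch of that fan-out, assuming a hypothetical fetchFromProvider(offset, limit) helper that wraps the third-party call:

// Illustrative only: split one oversized request into parallel provider calls
async function fetchAugmented(requested = 200, providerMax = 100) {
  const calls = [];
  for (let offset = 0; offset < requested; offset += providerMax) {
    calls.push(fetchFromProvider(offset, Math.min(providerMax, requested - offset)));
  }

  const pages = await Promise.all(calls); // the provider calls run in parallel
  const items = pages.flat();

  // One cursor for the combined position, reusing the format shown earlier
  const nextCursor = encodeCursor("provider", "offset", items.length);
  return { items, nextCursor };
}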

Consistency Across Diverse Integrations

Every Apideck API endpoint uses the same pagination pattern:

  • cursor parameter for position
  • limit parameter for page size
  • Consistent response structure with cursors and links
  • Predictable error handling

Whether you're fetching CRM leads, ATS applications, or e-commerce orders, the pagination interface remains identical. Learn once, use everywhere.

Ready to Stop Wrestling with API Pagination?

Building robust pagination is complex enough for a single API. Managing it across hundreds of different APIs with varying pagination styles, rate limits, and data structures? That's a full-time engineering project.

This is precisely why Apideck's Unified APIs make sense. Instead of building and maintaining pagination logic for every integration, you get:

  1. One pagination pattern to rule them all: Consistent cursor-based pagination across 200+ integrations. No more switching between offset, page, and cursor strategies based on which API you're calling.

  2. Automatic optimization: Apideck handles the complexity of making multiple backend calls, managing rate limits, and stitching results together. Your code stays clean while getting maximum performance.

  3. Future-proof integrations: When Salesforce changes its API or you need to add HubSpot integration, your pagination code doesn't change. Apideck handles the translation layer.

Skip the months of building custom pagination handlers for each integration. Explore Apideck's Unified APIs to focus on building features that truly differentiate your product.
