---
title: "Caching, Sync & Performance Limits"
description: "Handle SDK timeouts, resolve stale content with cache policies, and manage 429 rate limits or entry deletion errors in the Sync API."
url: "https://www.contentstack.com/docs/headless-cms/caching-sync-performance-limits"
product: "Contentstack"
doc_type: "guide"
audience:
  - developers
  - admins
version: "current"
last_updated: "2026-05-12"
---

# Caching, Sync & Performance Limits

## 1. SDK Timeouts When Fetching Large Asset Payloads

Large payloads (many assets/references) can trigger request timeout failures.

**Root Cause**

Single requests attempting to fetch massive payloads (e.g., 100+ deep references) exceed the default network timeout limits of the SDK or environment.

**Resolution**

1.  Reduce payload size with pagination (`limit`/`skip`) and smaller batches.
2.  Increase the SDK timeout only as far as network conditions require.
3.  Split deep data hydration into phased requests instead of one oversized query.

```javascript
const result = await stack
  .contentType('article')
  .entry()
  .query()
  .limit(20)
  .skip(0)
  .find();
```
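Step 2 (raising the timeout) is usually set once at stack initialization. A minimal sketch; option names vary across Contentstack SDK versions (`fetchOptions.timeout` below is taken from the older JavaScript Delivery SDK), so verify them against your installed SDK's documentation:

```javascript
// Assumption: the SDK accepts a per-request timeout at stack init.
// In the legacy JavaScript Delivery SDK this lives under fetchOptions;
// check your SDK version for the equivalent setting.
const Stack = Contentstack.Stack({
  api_key: process.env.CS_API_KEY,
  delivery_token: process.env.CS_DELIVERY_TOKEN,
  environment: 'production',
  fetchOptions: {
    timeout: 30000, // in ms; raise only as far as your network requires
  },
});
```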

Paginated requests should return 200 consistently with no timeout errors. Escalate if timeouts persist on small paginated batches; include the region, timeout setting, and stack UID.
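Steps 1 and 3 combine naturally into a batch drain loop. A minimal, self-contained sketch: `fetchPage` is a stand-in for a paginated SDK query such as the `limit`/`skip` example above, stubbed here with in-memory data:

```javascript
// Sketch: drain a large result set in fixed-size batches instead of one
// oversized request. `fetchPage` stands in for a paginated SDK query
// (e.g. .limit(limit).skip(skip).find()); here it is a stub.
async function drainAll(fetchPage, limit = 20) {
  const all = [];
  let skip = 0;
  while (true) {
    const batch = await fetchPage(skip, limit);
    all.push(...batch);
    if (batch.length < limit) break; // last (partial or empty) page
    skip += limit;
  }
  return all;
}

// Stub data source: 45 items served in pages of `limit`.
const items = Array.from({ length: 45 }, (_, i) => ({ uid: `entry_${i}` }));
const fetchPage = async (skip, limit) => items.slice(skip, skip + limit);

drainAll(fetchPage).then((all) => {
  console.log(all.length); // 45
});
```

Because each request is bounded by `limit`, no single call carries the full payload, so the default timeout is rarely reached.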

* * *

## 2. SDK Cache Synchronization Issues in Live Environments

Stale content appears when cache policy/persistence settings prioritize cache over freshness.

**Root Cause**

The SDK’s cache policy is set to prioritize local persistence (e.g., `CACHE_ELSE_NETWORK`) over real-time API data, causing the application to serve stale content.

**Resolution**

1.  Use the modern cache policy configuration (`cacheOptions.policy`) that matches your freshness requirement.
2.  Prefer network-first patterns for dynamic or live content paths.
3.  Make the persistence store's TTL/`maxAge` and cache invalidation strategy intentional.
4.  Avoid legacy or non-standard cache-clearing methods.

```javascript
cacheOptions: {
  policy: Policy.NETWORK_ELSE_CACHE,
  persistenceStore: new PersistenceStore({ storeType: 'localStorage', maxAge: 3600000 })
}
```

Recently updated entries return the latest `updated_at` and content after the policy change. Escalate with the cache policy, persistence config, and the publish vs. fetch timestamps.
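The network-first behavior that `NETWORK_ELSE_CACHE` describes can also be sketched generically, independent of any SDK: try the network, refresh the cache on success, and fall back to a cached copy only when the network fails and the copy is still within `maxAge`. The `fetchFresh` callback below is an illustrative stand-in for an API call, not SDK API:

```javascript
// Sketch of a NETWORK_ELSE_CACHE policy: network first, cached copy
// (if still within maxAgeMs) only as a fallback on network failure.
function createNetworkFirstCache(fetchFresh, maxAgeMs) {
  const cache = new Map(); // key -> { value, storedAt }
  return async function get(key) {
    try {
      const value = await fetchFresh(key);
      cache.set(key, { value, storedAt: Date.now() }); // refresh on success
      return value;
    } catch (err) {
      const hit = cache.get(key);
      if (hit && Date.now() - hit.storedAt < maxAgeMs) {
        return hit.value; // stale-but-acceptable fallback
      }
      throw err; // no usable cache entry
    }
  };
}
```

Because the cache is only consulted on failure, a successful publish is visible on the very next fetch, which is the property live environments usually need.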

* * *

## 3. 429 Too Many Requests During SDK-Driven Bulk Operations

High-concurrency scripts hit platform/API rate limits and receive 429.

**Root Cause**

High-concurrency scripts exceed the platform’s rate limits by sending too many simultaneous requests without exponential backoff or throttling.

**Resolution**

1.  Use retry with exponential backoff in application logic.
2.  Reduce parallelism and batch requests.
3.  Use SDK-supported bulk operation endpoints/methods where applicable.
4.  In CMA JavaScript flows, configure retry settings intentionally (`retryOnError`, `retryLimit`).

```javascript
const client = contentstack.client({
  authtoken: process.env.CS_AUTHTOKEN,
  retryOnError: true,
  retryLimit: 5
});
```

The bulk workflow should complete without terminal 429 failures. Escalate if 429s appear at low request volume; share the request rate, source IP, and stack UID.
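The retry-with-backoff in step 1 can also be implemented in application code, which helps when the SDK's built-in retry settings are unavailable or insufficient. A minimal sketch; the `err.status === 429` check is illustrative and should be adapted to your SDK's actual error shape:

```javascript
// Retry an async operation with exponential backoff on 429-style errors.
async function withBackoff(operation, { retries = 5, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      const rateLimited = err && err.status === 429; // adapt to your SDK's error shape
      if (!rateLimited || attempt >= retries) throw err;
      const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Pair this with reduced parallelism (step 2); backoff alone only stretches out a workload that is fundamentally too concurrent.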

* * *

## 4\. Handling "Entry Deleted" Errors During SDK Sync API Calls

The SDK's Sync API returns an error or stops processing when it encounters a deletion event in the sync queue.

**Root Cause**

The application logic fails to distinguish between entry updates and `entry_deleted` event types in the Sync API response, leading to processing errors for non-existent UIDs.

**Resolution**

Deletion events must not be processed like normal entry payloads. Use one of the following supported patterns, based on your sync architecture:

1.  **Client-side event switch (default):**
    *   Process all `syncData.items` by `item.type`
    *   Remove local records for `entry_deleted`
2.  **Server-filtered delete sync jobs:**
    *   Run `stack.sync({ type: 'entry_deleted' })` for cleanup-focused workers
3.  **Tokened incremental strategy (recommended at scale):**
    *   Drain batches with `pagination_token`
    *   Persist and continue with `sync_token` for delta runs
    *   Apply delete events before any re-fetch/re-hydration logic

```javascript
const syncData = await stack.sync({ syncToken: lastSyncToken });
for (const item of syncData.items) {
  if (item.type === 'entry_deleted') {
    // Remove the entry's record from the local store
  }
}

// Example server-filtered delete run:
const deletedOnly = await stack.sync({ type: 'entry_deleted' });
```

Sync completes with 200, delete events are consumed without exceptions, and local state no longer contains deleted entry UIDs after reconciliation. Escalate with the sync mode used (full/mixed/deleted-only), the `sync_token`/`pagination_token`, the failing item payload, and local-store reconciliation logs.