Building a High-Performance Navigation System: A Client-Side Caching and Service Worker Guide

Overview

When you're working through a backlog—opening an issue, jumping to a linked thread, then back to the list—latency isn't just a metric. It's a context switch. Even small delays add up, and they hit hardest at the exact moments developers are trying to stay in flow. It's not that GitHub Issues was “slow” in isolation; it's that too many navigations still paid the cost of redundant data fetching, breaking flow again and again.

Earlier this year, we set out to fix that—not by chasing marginal backend wins, but by changing how issue pages load end-to-end. Our approach was to shift work to the client and optimize perceived latency: render instantly from locally available data, then revalidate in the background. To make that work, we built a client-side caching layer backed by IndexedDB, added a preheating strategy to improve cache hit rates without spamming requests, and introduced a service worker so cached data remains usable even on hard navigations.

In this guide, we'll walk through how the system works and what changed in practice. We'll cover the metric we optimized for; the caching and preheating architecture; how the service worker speeds up navigation paths that used to be slow; and the results across real-world usage. We'll also dig into the tradeoffs—because this approach isn't free—and what still needs to happen to make “fast” the default across every path into Issues. If you're building a data-heavy web app, these patterns are directly transferable: you can apply the same model to reduce perceived latency in your own system without waiting for a full rewrite.

Prerequisites

Before diving into the implementation, ensure you have a solid understanding of the following:

  • Web development fundamentals: HTML, CSS, JavaScript (ES6+).
  • Service Workers: Basic knowledge of registration, lifecycle, and caching strategies.
  • IndexedDB: Familiarity with asynchronous database operations and object stores.
  • Performance metrics: Understanding of perceived latency, Time to Interactive (TTI), and First Contentful Paint (FCP).
  • Developer tools: Chrome DevTools or similar for debugging service workers and IndexedDB.

Step-by-Step Instructions

1. Define the Metric to Optimize

The primary metric we targeted was perceived latency—the time between a user's action (e.g., clicking an issue) and seeing meaningful content. Traditional metrics like TTI ignore the psychological impact of waiting. We aimed for instant rendering from cached data, followed by background revalidation.

// Example: measuring perceived latency with the Performance API
const start = performance.now();
// Render cached data immediately; this render is what the user perceives
showIssueFromCache();
const perceived = performance.now() - start;
console.log(`Perceived latency: ${perceived}ms`);
// Then fetch fresh data in the background (not counted against perceived latency)
fetchFreshData().then(() => {
  console.log(`Background revalidation finished after ${performance.now() - start}ms`);
});

2. Build a Client-Side Caching Layer with IndexedDB

We chose IndexedDB over localStorage for its asynchronous nature and larger storage limits. The caching layer stores issue data, metadata, and timestamps to support cache-first strategies.

// Open the IndexedDB database using the `idb` library (a thin promise wrapper)
import { openDB } from 'idb';

const dbPromise = openDB('issues-cache', 1, {
  upgrade(db) {
    // One record per issue, keyed by id, with an index for freshness checks
    const store = db.createObjectStore('issues', { keyPath: 'id' });
    store.createIndex('updatedAt', 'updatedAt', { unique: false });
  },
});

async function getIssueFromCache(id) {
  const db = await dbPromise;
  return db.get('issues', id);
}

async function setIssueCache(id, data) {
  const db = await dbPromise;
  // Stamp every entry so staleness can be checked on read
  await db.put('issues', { id, ...data, updatedAt: Date.now() });
}

3. Implement a Preheating Strategy

Preheating improves cache hit rates by prefetching likely-to-be-used data without flooding the network. We analyzed navigation patterns—e.g., listing issues frequently leads to opening the first few—and prefetched those entries after the initial page load.

// Preheating: prefetch the first few issues from the current list after load
window.addEventListener('load', () => {
  const issueLinks = document.querySelectorAll('.issue-list a');
  // Prefetch the first 5 issues only, to avoid flooding the network
  const linksToPrefetch = Array.from(issueLinks).slice(0, 5);
  linksToPrefetch.forEach((link) => {
    const url = link.href;
    // Non-blocking fetch; assumes the endpoint returns JSON when asked for it
    fetch(url, { headers: { Accept: 'application/json' } })
      .then((response) => {
        if (!response.ok) throw new Error(`Prefetch failed: ${response.status}`);
        return response.json();
      })
      // extractId is a helper that parses the issue id out of the URL
      .then((data) => setIssueCache(extractId(url), data))
      .catch(() => {}); // Fail silently; preheating is best-effort
  });
});

4. Integrate a Service Worker for Hard Navigations

Service workers intercept network requests and can serve cached responses even when navigation originates from an external link or browser refresh. We registered a service worker to handle fetch events for issue pages.

// service-worker.js
self.addEventListener('install', (event) => {
  // Activate the new worker immediately instead of waiting for old tabs to close
  self.skipWaiting();
});

self.addEventListener('activate', (event) => {
  // Take control of already-open pages right away
  event.waitUntil(clients.claim());
});

self.addEventListener('fetch', (event) => {
  // Only handle GET requests for issue pages
  if (event.request.method !== 'GET' || !event.request.url.includes('/issues/')) {
    return;
  }
  event.respondWith(
    caches.match(event.request).then((cachedResponse) => {
      if (cachedResponse) {
        return cachedResponse;
      }
      // Fall back to the network and cache only successful responses
      return fetch(event.request).then((response) => {
        if (!response.ok) {
          return response;
        }
        return caches.open('issues-v1').then((cache) => {
          cache.put(event.request, response.clone());
          return response;
        });
      });
    })
  );
});

5. Render Instantly from Local Data, Then Revalidate

When a user navigates to an issue, we first render the cached version while simultaneously fetching the latest data. After the fetch completes, we update the UI if needed.

async function navigateToIssue(id) {
  // Show cached data immediately, if we have it
  const cachedData = await getIssueFromCache(id);
  if (cachedData) {
    renderIssue(cachedData);
  }
  // Revalidate in the background
  try {
    const response = await fetch(`/issues/${id}`);
    if (!response.ok) throw new Error(`Revalidation failed: ${response.status}`);
    const freshData = await response.json();
    await setIssueCache(id, freshData);
    // Re-render only if there was no cached copy or the server copy is newer
    if (!cachedData || freshData.updatedAt > cachedData.updatedAt) {
      renderIssue(freshData);
    }
  } catch (error) {
    // Network failure: keep showing the cached version
    console.warn('Revalidation failed, using cached version');
  }
}

Common Mistakes

Stale Data and Cache Invalidation

Using cached data risks displaying stale information. To mitigate, attach timestamps to each cache entry and revalidate on every navigation. Consider implementing a staleness threshold—show cached data instantly but flag if older than, say, 30 seconds.
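
As a sketch, here is a staleness check built on the updatedAt timestamps written by setIssueCache in step 2. The names isStale and renderWithStalenessHint, and the second argument to renderIssue, are illustrative rather than part of the code above:

// Hypothetical staleness check over the updatedAt field written by setIssueCache
const STALE_AFTER_MS = 30 * 1000; // tuning knob, not a fixed rule

function isStale(entry, maxAgeMs = STALE_AFTER_MS) {
  return !entry || Date.now() - entry.updatedAt > maxAgeMs;
}

async function renderWithStalenessHint(id) {
  const cached = await getIssueFromCache(id);
  if (cached) {
    // The options argument is illustrative; the idea is to render instantly
    // but surface a "refreshing" hint while revalidation is in flight
    renderIssue(cached, { showRefreshingHint: isStale(cached) });
  }
}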

Over-Preheating Wasting Bandwidth

Preheating too many resources can degrade performance on slower connections. Use analytics to determine the optimal number of prefetches based on user behavior patterns, and respect the navigator.connection API (for example, the saveData flag and effectiveType) before prefetching.
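
One way to make the prefetch loop from step 3 connection-aware is a small guard around it; the helper name shouldPreheat and the prefetchIssue wrapper below are illustrative:

// Skip preheating on constrained connections. navigator.connection is not
// supported in every browser, so treat it as an optional hint.
function shouldPreheat() {
  const connection = navigator.connection;
  if (!connection) return true;            // no signal, assume preheating is fine
  if (connection.saveData) return false;   // user opted into data saving
  return !['slow-2g', '2g'].includes(connection.effectiveType);
}

// Usage: wrap the prefetch loop from step 3
// (prefetchIssue is a hypothetical wrapper around the fetch-and-cache call)
if (shouldPreheat()) {
  linksToPrefetch.forEach(prefetchIssue);
}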

Service Worker Scope Mismatch

If your service worker is registered on a different scope than your issue pages, it won't intercept those requests. Ensure the service worker is registered at the root or the appropriate subdirectory, and that the fetch handler filters requests correctly.
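
For reference, here is a typical registration that scopes the worker broadly enough to cover issue pages; the file path and scope are illustrative and should match where your worker script is served from:

// Register the service worker so its scope covers the issue pages.
// By default, the scope is the directory the worker script is served from.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/service-worker.js', { scope: '/' })
    .then((registration) => {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch((error) => {
      console.warn('Service worker registration failed:', error);
    });
}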

IndexedDB Transaction Conflicts

Multiple concurrent writes to the same record can cause conflicts. Use a queue or implement a last-write-wins strategy with timestamps to avoid race conditions.
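
A sketch of the last-write-wins approach, reusing the idb wrapper from step 2 (the function name setIssueCacheIfNewer is illustrative):

// Last-write-wins: only overwrite the stored record if the incoming data is newer
async function setIssueCacheIfNewer(id, data, updatedAt = Date.now()) {
  const db = await dbPromise;
  const tx = db.transaction('issues', 'readwrite');
  const store = tx.objectStore('issues');
  const existing = await store.get(id);
  if (!existing || updatedAt >= existing.updatedAt) {
    await store.put({ id, ...data, updatedAt });
  }
  await tx.done; // the read and the conditional write commit in one transaction
}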

Summary

Optimizing perceived latency in a data-heavy application like GitHub Issues means shifting work to the client: render instantly from locally available data, then revalidate in the background. By building an IndexedDB cache layer, preheating likely-to-be-needed data, and using a service worker so cached responses survive hard navigations, we reduced navigation times from seconds to near-instant. The tradeoffs (stale data, bandwidth, and added complexity) are manageable with careful design. Apply these patterns to your own data-heavy web app to deliver the speed users expect.
