Our job listing page took 18-30 seconds on a cache miss. With stale-while-revalidate, every request after the first returns in under 50ms, even when the cached data has expired.
## The Problem
```typescript
// Without caching: 18-30 seconds on a 1M-row table
const jobs = await pool.query(`
  SELECT * FROM jobs
  WHERE visibility = 'public'
  ORDER BY COALESCE(posted_at, scraped_at) DESC
  LIMIT 50 OFFSET $1
`, [offset]);
```
First request after a restart: 18 seconds. Users see a blank page.
## The Pattern
```typescript
class PublicJobsCacheService {
  static async getCachedPage(page: number): Promise<CachedResult | null> {
    const cached = await redis.get(`jobs:page:${page}`);
    if (!cached) return null;

    // Each cache entry carries the timestamp it was written at
    const data = JSON.parse(cached);
    const isStale = Date.now() - data.timestamp > CACHE_TTL;

    if (isStale) {
      // Return stale data NOW, refresh in background
      this.refreshInBackground(page);
    }

    return {
      ...data,
      meta: { isStale }
    };
  }

  private static refreshInBackground(page: number) {
    // Don't await - fire and forget; errors are logged, never surfaced to the request
    this.calculateAndCachePage(page).catch(err => {
      console.error('Background refresh failed:', err);
    });
  }
}
```
The key insight: Return stale data immediately, refresh asynchronously. The user gets an instant response with slightly old data (max 5 minutes stale). The next user gets fresh data.
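The write side of the pattern is not shown above, so here is a minimal, self-contained sketch of how `calculateAndCachePage` and the stale check fit together. The in-memory `Map` stands in for Redis, and `fetchJobsFromDb` is a placeholder for the expensive query; both are assumptions, not the production code.

```typescript
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes, per the article

type CacheEntry = { jobs: string[]; timestamp: number };
const store = new Map<string, string>(); // stand-in for redis

async function fetchJobsFromDb(page: number): Promise<string[]> {
  // Placeholder for the slow SQL query
  return [`job-${page}-1`, `job-${page}-2`];
}

async function calculateAndCachePage(page: number): Promise<CacheEntry> {
  const jobs = await fetchJobsFromDb(page);
  // Stamp the entry so readers can decide freshness themselves
  const entry: CacheEntry = { jobs, timestamp: Date.now() };
  store.set(`jobs:page:${page}`, JSON.stringify(entry));
  return entry;
}

async function getPage(
  page: number,
  now = Date.now()
): Promise<{ jobs: string[]; isStale: boolean }> {
  const raw = store.get(`jobs:page:${page}`);
  if (!raw) {
    // Cold start: pay the full cost once, then the cache is warm
    const entry = await calculateAndCachePage(page);
    return { jobs: entry.jobs, isStale: false };
  }
  const entry: CacheEntry = JSON.parse(raw);
  const isStale = now - entry.timestamp > CACHE_TTL;
  if (isStale) {
    // Fire and forget: the caller still gets the stale entry immediately
    calculateAndCachePage(page).catch(err =>
      console.error('Background refresh failed:', err)
    );
  }
  return { jobs: entry.jobs, isStale };
}
```

The important property is that the stale branch never blocks: the response is built from the old entry while the refresh runs concurrently.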
## The Results
| Scenario | Before | After |
|---|---|---|
| Cache hit (fresh) | N/A | < 50ms |
| Cache hit (stale) | N/A | < 50ms + background refresh |
| Cache miss (cold start) | 18-30s | 18-30s (first request only) |
| Average response | 18-30s | < 50ms |
After the initial cold start, users never see a slow load again.
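If even the one-time cold start is unacceptable, the pages can be precomputed before the server accepts traffic. This warm-up step is not described in the article; the sketch below is a hypothetical extension, again using a `Map` as a stand-in for Redis.

```typescript
const cache = new Map<string, string>(); // stand-in for redis

async function computePage(page: number): Promise<string> {
  // Placeholder for the expensive query + serialization
  return JSON.stringify({ jobs: [`job-${page}`], timestamp: Date.now() });
}

// Warm the first few pages sequentially at boot,
// so the database is not hammered by parallel cold queries
async function warmCache(pages: number): Promise<number> {
  for (let p = 1; p <= pages; p++) {
    cache.set(`jobs:page:${p}`, await computePage(p));
  }
  return cache.size;
}
```

Warming only the first handful of pages usually covers the vast majority of traffic, since most visitors never paginate deep.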
## The Response Headers
```typescript
res.setHeader('X-Cache', cached.meta.isStale ? 'STALE' : 'HIT');
res.setHeader('Cache-Control', 'public, max-age=300, s-maxage=600');
```
The X-Cache header lets us monitor stale vs fresh hit ratios. In practice, 98% of requests are served from cache.
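A ratio like that 98% figure can be derived from simple per-status counters fed by the `X-Cache` value. The counter names below are hypothetical; the article only specifies the header itself.

```typescript
// Tally responses by cache status (e.g. from middleware or log processing)
const counts = { HIT: 0, STALE: 0, MISS: 0 };

function recordCacheStatus(status: 'HIT' | 'STALE' | 'MISS'): void {
  counts[status]++;
}

function cacheHitRatio(): number {
  const total = counts.HIT + counts.STALE + counts.MISS;
  // Both fresh and stale hits are served from cache;
  // only a true miss pays the full query cost
  return total === 0 ? 0 : (counts.HIT + counts.STALE) / total;
}
```

Watching the stale share separately is also useful: a rising STALE ratio suggests the background refresh is falling behind the TTL.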
This pattern powers MisuJob, serving 1M+ job listings with sub-50ms response times.
What caching patterns do you use in production? SWR, read-through, write-behind?

