Cache Keys Explained: Preventing "Wrong Page" Bugs When You Cache at the Edge
“My site is showing the wrong page to visitors.” This support request almost always traces back to cache key misconfiguration. Cache keys determine which requests share cached responses—get them wrong and visitors see content meant for someone else. Understanding cache keys prevents these bugs before they happen.
What a cache key actually is
A cache key is the identifier an edge cache uses to store and retrieve responses. When a request arrives, the edge builds a cache key from request attributes and checks whether a cached response exists for that key. If it does, the cached response serves immediately. If not, the request goes to origin, and the response is stored under that cache key for future requests.
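The lookup flow above can be sketched in a few lines of Python. This is an illustration only: the in-memory dict standing in for the edge cache and the `fetch_from_origin` helper are assumptions, not any CDN's real API.

```python
# Sketch of the edge lookup flow: build a key, check the cache,
# fall through to origin on a miss and store the result.
cache = {}

def fetch_from_origin(url):
    # Hypothetical origin fetch; returns the response body for the URL.
    return f"<html>content for {url}</html>"

def handle_request(url):
    """Serve from cache when the key exists; otherwise fetch and store."""
    cache_key = url  # simplest possible cache key: the full URL
    if cache_key in cache:
        return cache[cache_key], "HIT"
    body = fetch_from_origin(url)
    cache[cache_key] = body
    return body, "MISS"
```

The first request for a URL is a MISS and populates the cache; every subsequent request with the same key is a HIT.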
The simplest cache key is just the URL. Two requests for https://example.com/about/ produce the same cache key and receive the same cached response. This works perfectly for truly static content where every visitor sees identical output.
Problems emerge when the same URL should return different content for different visitors or conditions. A URL might return different HTML based on the visitor’s language, device type, login state, or geographic location. If the cache key only considers the URL, all these variations share one cached response—and most visitors see the wrong version.
How wrong-page bugs happen
The classic wrong-page bug: a logged-in admin visits a page, the edge caches the admin’s view (including admin toolbars, edit links, and potentially private content), then every anonymous visitor sees that cached admin view. This happens when the cache key doesn’t account for authentication state.
Query string variations cause subtler bugs. If cache keys ignore query strings, ?page=2 serves the same content as ?page=1. Pagination breaks. Every search shows the same cached results regardless of the query. Filtered product listings ignore the selected filters.
Mobile versus desktop variations create wrong-layout bugs. If your site serves different HTML for mobile and desktop (server-side responsive), but the cache key doesn’t include device type, mobile visitors may see the desktop version or vice versa. User-agent-based cache key variations address this, but add complexity.
Cookie-dependent content causes the most dangerous bugs. If your site personalises content based on cookies—user preferences, A/B test assignments, membership levels—and cache keys don’t include relevant cookies, visitors share cached responses across different personalisation states.
Building correct cache keys
Include URL path and hostname as the base. This is standard and usually automatic. The path ensures different pages have different cache keys. The hostname handles multi-domain setups where the same path might serve different content on different domains.
Include query strings selectively. Include all query parameters in the cache key by default, then explicitly exclude irrelevant ones (tracking parameters like utm_source, fbclid, and gclid). Excluding tracking parameters improves cache hit rates without causing wrong-content bugs. But never exclude functional parameters like page, search, or category.
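A minimal sketch of this normalisation, using Python's standard library; the tracking-parameter list is an assumption to extend for your own analytics setup:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Tracking parameters that never change the response body (assumed list;
# adjust to match the parameters your site actually receives).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def build_cache_key(url: str) -> str:
    """Keep functional query params, drop tracking params, sort for stability."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
              if k not in TRACKING_PARAMS]
    query = urlencode(sorted(params))  # sorting makes ?a=1&b=2 and ?b=2&a=1 share a key
    return f"{parts.scheme}://{parts.netloc}{parts.path}?{query}"
```

With this, /blog/?page=2&utm_source=newsletter and /blog/?page=2 collapse into one cached entry, while ?page=1 and ?page=2 stay distinct.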
Add cookie presence (not values) for authentication states. Include in the cache key whether authentication cookies are present, creating separate cached versions for logged-in and logged-out states. For most sites, though, logged-in visitors should bypass the cache entirely rather than maintaining separate cached versions.
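That decision can be sketched as a small predicate. The generic "session" prefix is an assumption; WordPress's login cookies genuinely do start with wordpress_logged_in_:

```python
# Vary on auth-cookie *presence*, never on cookie values -- or, safer
# still, bypass the cache entirely for authenticated requests.
AUTH_COOKIE_PREFIXES = ("wordpress_logged_in_", "session")

def cache_decision(cookies: dict) -> str:
    """Return 'bypass' for logged-in requests, else the anonymous key variant."""
    if any(name.startswith(prefix)
           for name in cookies for prefix in AUTH_COOKIE_PREFIXES):
        return "bypass"  # safest default: skip the cache for logged-in users
    return "anon"        # one shared cached version for anonymous visitors
```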
Consider device type if you serve different HTML per device. Adding a mobile/desktop indicator to cache keys creates separate cached versions. This doubles cache storage but prevents cross-device content serving. If your site uses responsive CSS (same HTML for all devices), this isn’t needed.
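A deliberately naive sketch of a device-type key component; the user-agent substrings are assumptions, and production device detection needs a real user-agent library or client hints:

```python
# Coarse mobile/desktop split for a cache key component. The hint
# list is a simplification for illustration, not a robust detector.
MOBILE_HINTS = ("Mobi", "Android", "iPhone", "iPad")

def device_bucket(user_agent: str) -> str:
    """Return the coarse device bucket to append to a cache key."""
    if any(hint in user_agent for hint in MOBILE_HINTS):
        return "mobile"
    return "desktop"
```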
Cache key components and their tradeoffs
Every component added to a cache key increases cache fragmentation. More unique cache keys mean more stored responses, lower hit rates, and more origin traffic. The tradeoff is correctness versus efficiency.
A URL-only cache key maximises hit rates but risks serving wrong content for any URL with variable output. A cache key including URL, cookies, user-agent, and geographic location rarely serves wrong content, but may fragment the cache so much that hit rates plummet and the cache provides minimal benefit.
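The fragmentation cost compounds multiplicatively: each independent key component multiplies the number of variants stored per URL. A back-of-envelope estimate, with assumed counts:

```python
# Rough fragmentation estimate. Every independent cache key component
# multiplies the variants per URL; the counts below are assumptions.
urls = 1_000
auth_states = 2      # logged-in vs anonymous
languages = 5
device_buckets = 3   # e.g. mobile / tablet / desktop

variants = urls * auth_states * languages * device_buckets
print(variants)      # 30,000 cached objects instead of 1,000
```

Thirty times the objects means each one is requested thirty times less often, so entries expire or get evicted before they are reused, and the hit rate falls.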
Find the minimum cache key components that prevent wrong-content bugs. Include what’s necessary for correctness, exclude what doesn’t affect content. For a standard WordPress blog with no personalisation, URL plus query parameters is sufficient. For an e-commerce site with logged-in users and geographic pricing, the key needs more components.
Testing cache key configuration
Test methodically after any cache key change. Visit a page as an anonymous user and verify that the cached version is served. Then visit as a logged-in user—you should see dynamic content, not the anonymous cached version. Check pagination, search results, and filtered views to confirm query parameters work correctly.
Use Cloudflare’s CF-Cache-Status response header to verify caching behaviour. HIT means served from cache. MISS means fetched from origin and now cached. BYPASS means cache was skipped. DYNAMIC means the response wasn’t eligible for caching.
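A small helper for interpreting that header in a test script; the response-headers dict would come from whatever HTTP client you use, and the mapping covers only the statuses described above:

```python
# Map CF-Cache-Status values to human-readable verdicts for test output.
def cache_verdict(headers: dict) -> str:
    """Interpret the CF-Cache-Status header from a response."""
    verdicts = {
        "HIT": "served from cache",
        "MISS": "fetched from origin, now cached",
        "BYPASS": "cache skipped",
        "DYNAMIC": "not eligible for caching",
    }
    status = headers.get("CF-Cache-Status", "")
    return verdicts.get(status, f"unrecognised status: {status!r}")
```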
Test from multiple locations if geographic variation is a factor. Requests from the US and the UK should each receive the appropriate cached version if your content varies by region. Single-location testing misses geographic cache key issues.
Monitor after deployment. Even thorough testing can miss edge cases. Watch for user reports of incorrect content, check analytics for unusual bounce rates (which might indicate visitors seeing wrong pages), and review cache hit rates to ensure fragmentation isn’t excessive.
Platform-specific cache key defaults
Cloudflare’s default cache key includes the full URL (scheme, host, path, and query string). This works for most static content. For dynamic WordPress content, you need to add cookie-based bypass rules or cache key modifications to handle authenticated states.
Fastly uses VCL to construct cache keys with full control over what’s included. This power enables precise cache key construction but requires more expertise. Each variation you account for must be explicitly coded in VCL.
AWS CloudFront allows forwarding specific headers, cookies, and query strings to origin, which implicitly become part of the cache key. Forwarding more attributes increases correctness but reduces hit rates—the same tradeoff.
The practical approach
Start with sensible defaults: full URL as cache key, bypass cache for authenticated users via cookie detection, and cache everything else. This handles 90% of WordPress sites correctly with minimal configuration.
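Those defaults can be sketched end to end. The tracking-parameter rule is an assumption to adapt to your stack; the cookie prefix matches how WordPress actually names its login cookies:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Sensible-defaults sketch: bypass for authenticated requests,
# full URL minus tracking params as the cache key for everything else.
cache = {}

def handle(url: str, cookies: dict):
    """Return (body, cache status) for a request, per the defaults above."""
    if any(name.startswith("wordpress_logged_in_") for name in cookies):
        return ("origin-response", "BYPASS")    # authenticated: skip cache
    parts = urlsplit(url)
    query = urlencode(sorted((k, v) for k, v in parse_qsl(parts.query)
                             if not k.startswith("utm_")))
    key = f"{parts.netloc}{parts.path}?{query}"
    if key in cache:
        return (cache[key], "HIT")
    body = "origin-response"                    # stand-in for an origin fetch
    cache[key] = body
    return (body, "MISS")
```

Anonymous requests share one cached version per normalised URL, while any logged-in request skips the cache entirely.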
Add cache key components only when you discover wrong-content bugs or have specific requirements (geographic variation, device-specific HTML, A/B testing). Each addition should solve a documented problem, not anticipate theoretical issues.
Monitor cache hit rates after changes. If adding a cache key component drops hit rates significantly, evaluate whether the added correctness justifies reduced caching effectiveness. Sometimes restructuring content delivery (client-side personalisation instead of server-side) eliminates the need for complex cache keys.
Cache keys are the invisible mechanism that makes edge caching either powerful or dangerous. Getting them right requires understanding your content variation patterns. Getting them wrong exposes visitors to incorrect, stale, or private content. For sites with complex caching requirements, a professional performance review can map your content variations and design cache key strategies that balance correctness with efficiency.