How I Escaped the localStorage Limit Without Rewriting My App
Keeping synchronous APIs while moving heavy data into IndexedDB with an in-memory cache.
I migrated large PGN and engine datasets off localStorage by preloading IndexedDB into memory, preserving synchronous reads and enabling far larger storage.
When I was recently auditing my web app’s storage usage, I exported a backup of the site’s data and started digging through it. That’s when I noticed a problem.
The backup was huge — far larger than I expected for a client-side app. After inspecting the data more closely, it became clear what was happening.
Two types of data were quietly consuming almost all of my browser storage quota:
- PGN game records (chess game notation strings)
- Move-by-move engine analysis data, including PV arrays, evaluation values, and node counts
Those two categories alone were enough to push my app right up against the localStorage limit, which most browsers cap around 5–10 MB per origin. And once you hit that limit, things start breaking in unpleasant ways.
The problem with localStorage
localStorage is convenient because it’s simple and synchronous. You can write code like this anywhere in your app:
const analysis = localStorage.getItem("full_analysis_v2_123")
No async calls. No promises. No callbacks.
But that convenience comes with a hard constraint: localStorage has a tiny quota. If your application stores structured or historical data — things like chat logs, notes, analytics data, game histories, or engine analysis — you will eventually run out of space. That’s exactly what happened in my case.
The obvious replacement: IndexedDB
The natural replacement for localStorage is IndexedDB:
- It can store hundreds of megabytes or more
- It supports structured data
- It’s designed for large client-side datasets
But IndexedDB introduces a different problem: it’s asynchronous. That means the simple synchronous call above becomes something more like this:
const analysis = await idb.get("full_analysis_v2_123")
And that difference matters more than it looks.
The real migration problem
My app already had 30+ call sites that accessed storage synchronously through helper functions such as getArchivedPGN() and getFullAnalysis(). Refactoring all of those to async would mean changing function signatures, introducing async chains throughout the codebase, and touching a large portion of the app. That’s not a small change.
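To make that ripple effect concrete, here is a minimal illustration. The `idb` object below is a stand-in fake (not a real library), and the function names are hypothetical; the point is that one `await` in a helper forces every caller up the chain to become async too.

```typescript
// A stand-in for an IndexedDB wrapper, just so the example runs:
const idb = {
  data: new Map<string, string>([["full_analysis_v2_123", "sample analysis"]]),
  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  },
};

// The helper becomes async...
async function getFullAnalysis(id: string): Promise<string | undefined> {
  return idb.get(`full_analysis_v2_${id}`);
}

// ...so every caller has to be rewritten as async as well:
async function renderAnalysis(id: string): Promise<string> {
  const analysis = await getFullAnalysis(id); // was a plain sync call
  return analysis ?? "no analysis";
}
```

Multiply that by every function that touches storage, and the refactor spreads through the whole codebase.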
I needed a way to move persistence to IndexedDB without rewriting the rest of the application.
The architecture that solved it
The solution was surprisingly simple. Instead of replacing localStorage directly, I introduced an in-memory read layer:
- IndexedDB became the persistence layer
- Map<string, string> instances became the synchronous read cache
Two maps hold the active dataset in memory during a session. Reads stay synchronous because the data is already cached. Writes update both the cache and IndexedDB.
How it works
- Preload data at startup: When the application launches, a preload step reads existing data from IndexedDB and populates the in-memory maps (IndexedDB → preload → Map). Once that finishes, the rest of the app reads from memory. Because this happens during app initialization, the rest of the code doesn’t care about IndexedDB at all.
- Reads stay instant: All synchronous calls simply read from the map.
export function getArchivedPGN(id: string) {
return archiveCache.get(id)
}
Because the data is already in memory, reads remain extremely fast and no existing code needs to become async.
- Writes update the cache first: When new data is stored, the map updates immediately and IndexedDB writes fire off in the background.
export function setArchivedPGN(id: string, pgn: string) {
archiveCache.set(id, pgn)
writeToIndexedDB(id, pgn) // async fire-and-forget
}
This effectively creates a write-through cache.
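The pattern above — preload, synchronous reads, write-through writes — can be sketched as one small class. Everything here is illustrative: `AsyncStore` stands in for an IndexedDB wrapper, and the names are hypothetical, not my app’s actual code.

```typescript
// The async persistence layer is injected, so the cache logic itself
// stays storage-agnostic (IndexedDB in the browser, a fake in tests).
type AsyncStore = {
  getAll(): Promise<Array<[string, string]>>;
  set(key: string, value: string): Promise<void>;
};

class SyncCache {
  private cache = new Map<string, string>();
  constructor(private store: AsyncStore) {}

  // Preload: run once at startup, before the rest of the app reads.
  async preload(): Promise<void> {
    for (const [k, v] of await this.store.getAll()) this.cache.set(k, v);
  }

  // Synchronous read: the data is already in memory after preload.
  get(key: string): string | undefined {
    return this.cache.get(key);
  }

  // Write-through: update memory now, persist in the background.
  set(key: string, value: string): void {
    this.cache.set(key, value);
    void this.store.set(key, value); // async fire-and-forget
  }
}
```

Injecting the store also makes the fallback path easier to swap in later: the cache never knows which backend it is talking to.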
Migrating existing data
Older versions of the app stored everything in localStorage with keys like archive_pgn_* and full_analysis_v2_*. On the first load after deploying the new system, those entries are copied into IndexedDB and the old keys are removed. Storage quota is immediately reclaimed. This migration happens automatically and only once, so from the user’s perspective nothing changes.
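A one-time migration of this shape might look like the sketch below. The prefixes match the key scheme above, but `StorageLike` and the `persist` callback are illustrative stand-ins for `localStorage` and the IndexedDB write:

```typescript
// Minimal interface for what the migration needs from localStorage.
interface StorageLike {
  readonly length: number;
  key(index: number): string | null;
  getItem(key: string): string | null;
  removeItem(key: string): void;
}

const MIGRATED_PREFIXES = ["archive_pgn_", "full_analysis_v2_"];

async function migrateFromLocalStorage(
  storage: StorageLike,
  persist: (key: string, value: string) => Promise<void>,
): Promise<number> {
  // Collect matching keys first: removing entries while iterating
  // by index would shift the remaining indices.
  const keys: string[] = [];
  for (let i = 0; i < storage.length; i++) {
    const k = storage.key(i);
    if (k && MIGRATED_PREFIXES.some((p) => k.startsWith(p))) keys.push(k);
  }
  for (const k of keys) {
    const v = storage.getItem(k);
    if (v !== null) await persist(k, v); // copy into IndexedDB first
    storage.removeItem(k); // then reclaim the localStorage quota
  }
  return keys.length;
}
```

Persisting before removing means a crash mid-migration loses nothing: unmigrated keys are simply picked up on the next load.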
Handling browser edge cases
IndexedDB isn’t always available — some private browsing modes disable it or restrict its usage. To avoid breaking the app:
- The map cache still works within the session
- localStorage remains a fallback read path
That means the application still functions even if IndexedDB isn’t accessible.
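A minimal availability probe, assuming nothing beyond the global `indexedDB` object, could look like this:

```typescript
// Some private-browsing modes throw on mere access to indexedDB,
// so the probe has to be wrapped in try/catch.
function indexedDBAvailable(): boolean {
  try {
    return typeof indexedDB !== "undefined" && indexedDB != null;
  } catch {
    return false;
  }
}
```

When the probe fails, the app simply skips the preload-from-IndexedDB step and reads through the localStorage fallback instead.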
The result
After the migration, the localStorage bottleneck disappeared, large datasets moved to IndexedDB, existing synchronous APIs continued working unchanged, and users didn’t lose any data. Most importantly, the architecture now scales far beyond the original 5–10 MB limit.
The broader lesson
localStorage is great for small preferences and settings. But once your application starts storing real datasets, you need something larger. The pattern that worked well for me was IndexedDB for persistence plus an in-memory Map for synchronous reads. It preserves the simplicity of synchronous APIs while unlocking much larger client-side storage.