JSON Size Analyzer & Treemap
Visualise JSON payload size by key, compare two objects, and get optimisation suggestions.
How to Use the JSON Size Analyzer
- Analyze — paste any JSON object. The tool shows total size, a colour-coded treemap (block area = bytes), and a sortable table of every key with its path, type, byte count, and percentage share.
- Compare — paste two JSON objects side by side. The tool highlights keys that grew, shrank, or are missing in one version — useful for tracking payload changes between API versions.
- Optimize — paste your JSON to get actionable suggestions: null fields that could be omitted, empty arrays/strings, long key names, and estimated byte savings.
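The per-key breakdown the Analyze tab produces can be sketched in a few lines. This is an illustrative reimplementation, not the tool's actual code; analyzeJson and utf8Bytes are hypothetical helper names:

```javascript
// Byte count of a string's UTF-8 encoding (same result as Blob.size).
const utf8Bytes = (s) => new TextEncoder().encode(s).length;

// For each top-level key, report serialised size, type, and share of
// the total payload, sorted largest-first. (Note: typeof null is
// "object", so a real implementation would special-case null.)
function analyzeJson(obj) {
  const total = utf8Bytes(JSON.stringify(obj));
  return Object.entries(obj)
    .map(([key, value]) => {
      const bytes = utf8Bytes(JSON.stringify(value));
      return {
        path: key,
        type: Array.isArray(value) ? "array" : typeof value,
        bytes,
        percent: +((bytes / total) * 100).toFixed(1),
      };
    })
    .sort((a, b) => b.bytes - a.bytes);
}
```

For example, `analyzeJson({ a: "xx", b: 1 })` reports the string field first, since `"xx"` serialises to 4 bytes while `1` serialises to 1.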
Why JSON Size Matters
Every byte of JSON that travels over the network costs time and money. Mobile users on 3G connections feel this acutely — a 100 KB API response takes roughly a second to download on a 1 Mbps connection, before protocol overhead. Even on fast broadband, large JSON payloads increase JavaScript parsing and memory-allocation time. Google's Core Web Vitals metrics (LCP, INP) are directly affected by payload size, making JSON optimisation a legitimate SEO concern.
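The timing claim above is simple arithmetic. A minimal sketch, taking 100 KB as 100,000 bytes for round numbers:

```javascript
// Rough transfer time for an uncompressed payload: bytes -> bits,
// divided by link speed in bits per second. Ignores TCP/TLS handshakes
// and latency, which add to the real-world total.
const downloadSeconds = (bytes, mbps) => (bytes * 8) / (mbps * 1_000_000);

downloadSeconds(100_000, 1); // 0.8 seconds of raw transfer at 1 Mbps
```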
Understanding the Treemap
The treemap visualisation represents each top-level key in your JSON as a coloured block whose area is proportional to the number of bytes that key's value occupies. Deeply nested objects appear as a single block sized by their total serialised size. At a glance, you can identify which field is the "elephant in the room" — the key responsible for most of your payload. This is particularly useful when your API returns a denormalised response with embedded related objects.
How JSON Size Is Calculated
This tool measures size in UTF-8 bytes using the browser's Blob API: new Blob([string]).size. This is the accurate byte count you would see if you measured the Content-Length header of an uncompressed HTTP response. ASCII characters (Latin alphabet, digits, standard punctuation) occupy 1 byte each. Accented characters (like é, ü) occupy 2 bytes. Emoji and CJK characters occupy 3-4 bytes. This matters if your JSON contains user-generated content with non-ASCII text.
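The per-character widths described above can be verified directly. TextEncoder gives the same UTF-8 byte counts as the Blob approach and also works outside the browser; utf8Bytes is an illustrative helper name:

```javascript
// UTF-8 byte width of a string, equivalent to new Blob([s]).size.
const utf8Bytes = (s) => new TextEncoder().encode(s).length;

utf8Bytes("a");  // 1 byte  (ASCII)
utf8Bytes("é");  // 2 bytes (accented Latin)
utf8Bytes("漢"); // 3 bytes (CJK)
utf8Bytes("😀"); // 4 bytes (emoji, outside the Basic Multilingual Plane)
```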
Common Optimisation Strategies
The single highest-impact technique for most APIs is enabling GZIP or Brotli compression on the HTTP layer — text-based formats like JSON compress by 70-90%. Beyond compression, consider: omitting null values (many serialisers have an option for this), using sparse fieldsets (like ?fields=id,name,email), shortening repetitive key names in large arrays of objects, and removing empty arrays/strings. For very high-volume APIs where every millisecond counts, binary formats like MessagePack or Protocol Buffers offer 20-50% smaller payloads than optimised JSON even after compression.
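One of the suggestions above — dropping null values and empty strings/arrays — can be sketched as follows. stripEmpty is a hypothetical helper, not the tool's actual output, and this shallow version only looks at top-level keys:

```javascript
const utf8Bytes = (s) => new TextEncoder().encode(s).length;

// Remove top-level keys whose values are null, "", or [].
function stripEmpty(obj) {
  return Object.fromEntries(
    Object.entries(obj).filter(
      ([, v]) => v !== null && v !== "" && !(Array.isArray(v) && v.length === 0)
    )
  );
}

const payload = { id: 7, nickname: null, tags: [], bio: "" };
const before = utf8Bytes(JSON.stringify(payload));           // 43 bytes
const after = utf8Bytes(JSON.stringify(stripEmpty(payload))); // 8 bytes
// before - after is the estimated saving from omitting empty fields
```

Whether dropping a field is safe depends on the client: some deserialisers treat a missing key and an explicit null differently.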
JSON in REST vs GraphQL
REST APIs often suffer from over-fetching — returning more fields than the client needs. GraphQL addresses this by letting clients specify exactly which fields they want: { user { id name } } returns only id and name, not the full user object. Over-fetching is precisely the problem this tool helps you diagnose: once you identify your largest fields in the Analyze tab, you have the data to justify adding sparse-fieldset support to your API or migrating specific endpoints to GraphQL. See also our JSON Formatter for formatting and validating the payloads you work with.
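Server-side support for a sparse fieldset like ?fields=id,name,email can be sketched with a small projection helper. pick is a hypothetical name, and real implementations would also validate the requested field names:

```javascript
// Keep only the keys named in a comma-separated fields parameter.
function pick(obj, fieldsParam) {
  const wanted = new Set(fieldsParam.split(","));
  return Object.fromEntries(
    Object.entries(obj).filter(([key]) => wanted.has(key))
  );
}

const user = {
  id: 1,
  name: "Ada",
  email: "ada@example.com",
  bio: "A long biography string the client may not need",
};

pick(user, "id,name"); // { id: 1, name: "Ada" }
```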