JSON Size Analyzer & Treemap

Visualise JSON payload size by key, compare two objects, and get optimisation suggestions.


How to Use the JSON Size Analyzer

  1. Analyze — paste any JSON object. The tool shows total size, a colour-coded treemap (block area = bytes), and a sortable table of every key with its path, type, byte count, and percentage share.
  2. Compare — paste two JSON objects side by side. The tool highlights keys that grew, shrank, or are missing in one version — useful for tracking payload changes between API versions.
  3. Optimize — paste your JSON to get actionable suggestions: null fields that could be omitted, empty arrays/strings, long key names, and estimated byte savings.

Why JSON Size Matters

Every byte of JSON that travels over the network costs time and money. Mobile users on 3G connections feel this acutely: a 100 KB response is 800 kilobits, which takes roughly 0.8 seconds to transfer on a 1 Mbps connection before any protocol overhead. Even on fast broadband, large JSON payloads increase JavaScript parsing and memory-allocation time. Google's Core Web Vitals metrics (LCP, INP) are directly affected by payload size, making JSON optimisation a legitimate SEO concern.

Understanding the Treemap

The treemap visualisation represents each top-level key in your JSON as a coloured block whose area is proportional to the number of bytes that key's value occupies. Deeply nested objects appear as a single block sized by their total serialised size. At a glance you can identify which field is the "elephant in the room": the key responsible for most of your payload. This is particularly useful when your API returns a denormalised response with embedded related objects.
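The per-key numbers behind those block areas can be sketched in a few lines. This is a minimal illustration (not the tool's actual code), assuming an environment where Blob is available as a global (browsers, or Node 18+):

```javascript
// Byte size of a string in UTF-8, via the Blob API.
function byteSize(str) {
  return new Blob([str]).size;
}

// For each top-level key, measure the serialised size of its value —
// the same quantity a treemap block's area would represent.
function keySizes(obj) {
  return Object.entries(obj)
    .map(([key, value]) => ({ key, bytes: byteSize(JSON.stringify(value)) }))
    .sort((a, b) => b.bytes - a.bytes); // largest block first
}

const payload = { id: 42, name: "Ada", tags: ["json", "perf", "tools"] };
console.log(keySizes(payload));
// → [{ key: "tags", bytes: 23 }, { key: "name", bytes: 5 }, { key: "id", bytes: 2 }]
```

Nested values are not broken down further here, which matches the treemap's behaviour of sizing a nested object as one block.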

How JSON Size is Calculated

This tool measures size in UTF-8 bytes using the browser's Blob API: new Blob([string]).size. This is the accurate byte count you would see if you measured the Content-Length header of an uncompressed HTTP response. ASCII characters (Latin alphabet, digits, standard punctuation) occupy 1 byte each. Accented characters (like é, ü) occupy 2 bytes. Emoji and CJK characters occupy 3-4 bytes. This matters if your JSON contains user-generated content with non-ASCII text.
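The byte counts described above can be verified directly in a browser console or Node 18+ (where Blob is also a global):

```javascript
// UTF-8 byte counting with the Blob API.
const utf8Bytes = (s) => new Blob([s]).size;

console.log(utf8Bytes("hello")); // 5 — ASCII: 1 byte per character
console.log(utf8Bytes("café"));  // 5 — "é" takes 2 bytes
console.log(utf8Bytes("日本"));   // 6 — CJK: 3 bytes each
console.log(utf8Bytes("🚀"));    // 4 — emoji: 4 bytes
```

Note that string .length would report code units, not bytes: "🚀".length is 2, but it occupies 4 bytes on the wire.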

Common Optimisation Strategies

The single highest-impact technique for most APIs is enabling GZIP or Brotli compression on the HTTP layer — text-based formats like JSON compress by 70-90%. Beyond compression, consider: omitting null values (many serialisers have an option for this), using sparse fieldsets (like ?fields=id,name,email), shortening repetitive key names in large arrays of objects, and removing empty arrays/strings. For very high-volume APIs where every millisecond counts, binary formats like MessagePack or Protocol Buffers offer 20-50% smaller payloads than optimised JSON even after compression.
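Several of these strategies can be applied at serialisation time. As one hedged sketch, a JSON.stringify replacer can drop null fields and empty strings/arrays (returning undefined from a replacer omits the property):

```javascript
// Replacer that omits null fields, empty strings, and empty arrays.
function compactReplacer(key, value) {
  if (value === null || value === "") return undefined;
  if (Array.isArray(value) && value.length === 0) return undefined;
  return value;
}

const full = { id: 7, nickname: null, bio: "", tags: [], name: "Ada" };
const lean = JSON.stringify(full, compactReplacer);
console.log(lean); // {"id":7,"name":"Ada"}
console.log(JSON.stringify(full).length - lean.length, "bytes saved");
```

One caveat: a replacer that returns undefined for an element *inside* an array serialises it as null, so this approach suits object properties, not array members.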

JSON in REST vs GraphQL

REST APIs often suffer from over-fetching — returning more fields than the client needs. GraphQL solves this by letting clients specify exactly which fields they want: { user { id name } } returns only id and name, not a full user object. This is conceptually the same problem this tool helps you diagnose. Once you identify your largest fields in the Analyze tab, you have the data to justify adding sparse fieldset support to your API or migrating specific endpoints to GraphQL. See also our JSON Formatter for formatting and validating the payloads you work with.
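On the REST side, sparse fieldset support is straightforward to add. A minimal sketch, assuming a hypothetical ?fields=id,name query parameter (names are illustrative, not a standard API):

```javascript
// Keep only the keys named in a comma-separated fields parameter.
function pickFields(obj, fieldsParam) {
  const wanted = fieldsParam.split(",").map((f) => f.trim());
  return Object.fromEntries(
    Object.entries(obj).filter(([key]) => wanted.includes(key))
  );
}

const user = { id: 1, name: "Ada", email: "ada@example.com", bio: "Pioneer" };
console.log(JSON.stringify(pickFields(user, "id,name"))); // {"id":1,"name":"Ada"}
```

This mirrors what the GraphQL query above achieves: the client states which fields it needs, and everything else never leaves the server.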

Frequently Asked Questions

Why does JSON size matter?

JSON size directly affects API response times, bandwidth costs, and mobile data usage. Large payloads slow page load times, increase time-to-interactive, and can cause timeouts on slow connections. Optimising JSON size and enabling compression are complementary strategies for performance.

How is JSON size measured?

JSON size is measured in UTF-8 bytes. ASCII characters take 1 byte; non-ASCII characters (accents, emoji, CJK) take 2-4 bytes. This tool uses the Blob API (new Blob([str]).size) for accurate byte counts. Minified JSON is smaller than formatted JSON because whitespace is removed.

How do I reduce JSON payload size?

Remove null and undefined fields, use shorter key names, remove empty arrays and objects, enable GZIP/Brotli compression on the server, use GraphQL or sparse fieldsets to request only needed fields, and consider binary formats like MessagePack for high-volume APIs.

What is a treemap?

A treemap uses nested rectangles to show hierarchical data. The area of each rectangle is proportional to its value — here, the byte size of each JSON key. Larger blocks represent keys consuming more space, making it easy to spot which fields dominate the payload.

Does compression make JSON optimisation unnecessary?

No. While gzip compresses JSON by 70-90%, reducing uncompressed size still helps: compression takes CPU time on the server and client, and smaller JSON means smaller in-memory JavaScript objects after parsing. Optimising JSON and enabling compression are complementary.