Why Ethereum Needs Sharding

Ethereum's base layer processes ~15-30 TPS because every full node validates every transaction. Sharding breaks that constraint — but not by running parallel execution chains. Here's what Ethereum is actually building and why.
Lewis Jackson
CEO and Founder

Ethereum's base layer processes roughly 15 to 30 transactions per second. That ceiling isn't a bug or a shortfall in engineering ambition — it's a structural consequence of how the network maintains security. Every full node validates every transaction. That constraint means the network's total capacity equals the capacity of one node, regardless of how many nodes you add.

Sharding is how Ethereum intends to break that constraint. But the sharding Ethereum is actually building today looks different from what was originally described. The original proposal was execution sharding — multiple chains running in parallel, each processing a subset of transactions. What emerged instead is data sharding — a narrower, cleaner approach designed around rollups. Understanding the difference explains both why sharding is necessary and why it took this particular shape.

The Fundamental Bottleneck

Every blockchain faces some version of the same problem: security requires nodes to verify independently, but independent verification means every node does the same work. You can't parallelize this without making tradeoffs.

In Ethereum's case, full nodes download and re-execute every transaction to verify the chain. This is computationally and bandwidth-intensive. It's also the source of the network's security guarantees — no transaction gets into the chain without being independently verified by thousands of nodes globally.

The cost is throughput. A single node can only process so many transactions per second before block validation becomes too slow, blocks get bloated, and the requirements to run a full node become prohibitive for individuals. Ethereum's current blocks are deliberately sized with this constraint in mind.

What Sharding Actually Does

In a sharded system, the database is split into partitions — each shard holds a portion of the total state and is validated by a subset of nodes. Instead of every node doing everything, work is divided.

Ethereum's original plan was to shard execution: create 64 shard chains, each running in parallel, each with its own transactions and state. The challenge was cross-shard communication — if a transaction touches state on multiple shards, coordinating that cleanly is genuinely hard. The design was complex and high-risk.
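A toy sketch of why cross-shard coordination is hard. The hash-based assignment rule and the addresses here are hypothetical, not Ethereum's actual proposal; the point is only that once accounts live on different shards, a single transfer touches two partitions that must agree atomically:

```python
import hashlib

NUM_SHARDS = 64  # the count from the original execution-sharding plan

def shard_of(address: str, num_shards: int = NUM_SHARDS) -> int:
    """Toy rule: assign an account to a shard by hashing its address."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

sender, receiver = "0xalice", "0xbob"  # hypothetical addresses
s_from, s_to = shard_of(sender), shard_of(receiver)

# If s_from != s_to, the sender's shard must debit and the receiver's
# shard must credit, and both updates must commit or abort together.
# Coordinating that across independently validated chains is the hard part.
print(f"sender shard={s_from}, receiver shard={s_to}, cross-shard={s_from != s_to}")
```

Any deterministic partition rule creates the same problem: the more shards you add, the more likely an arbitrary transaction spans two of them.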

Around 2020 to 2021, the thinking shifted. Rollups — Layer 2 networks that process transactions off-chain and post compressed data back to Ethereum for verification — had matured significantly. If rollups could handle execution, Ethereum's own job became narrower: provide cheap, abundant, cryptographically verifiable data space for rollups to post to.

This is the insight that changed the roadmap. Ethereum didn't need to be a faster execution environment. It needed to be reliable, secure data availability infrastructure for rollups handling execution at scale.

How Data Sharding Works

Under the rollup-centric model, a network like Arbitrum or Optimism processes transactions off-chain, batches them, compresses them, and posts the compressed data back to Ethereum. Ethereum validators don't re-execute those transactions — they verify that the data was posted correctly and remains available long enough for anyone to reconstruct the rollup's state if needed.
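A rough illustration of why batching and compression cut per-transaction data costs. The transaction format is invented for the example and real rollups use domain-specific compression far better than generic zlib, but the shape of the saving is the same: many similar transactions compress to a fraction of their raw size before being posted:

```python
import json
import zlib

# Hypothetical rollup batch: 1,000 similar transfer transactions.
txs = [
    {"from": f"0xuser{i}", "to": "0xdex", "value": 100 + i, "nonce": i}
    for i in range(1000)
]

raw = json.dumps(txs).encode()            # cost of posting each tx verbatim
compressed = zlib.compress(raw, level=9)  # cost of posting the batch compressed

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
print(f"data posted per tx: {len(compressed) / len(txs):.1f} bytes")
```

Ethereum only needs to verify that this compressed batch was posted and stays retrievable; it never re-executes the 1,000 transactions inside it.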

This is significantly less work per transaction than executing everything natively, which is why rollups can offer much lower fees than Ethereum's base layer.

EIP-4844, known as Proto-Danksharding, activated in March 2024. It introduced blobs: a new, cheaper data format specifically for rollup batches. Blobs are posted to Ethereum, held for roughly 18 days, then pruned. They don't persist in the chain permanently, which makes them far cheaper than calldata — the format rollups previously had to use. Rollup transaction fees fell roughly 80–90% after EIP-4844 activated.
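Blobs also get their own fee market, separate from regular gas. The blob base fee rises exponentially with "excess blob gas", the running surplus of blob usage above the per-block target. A sketch adapted from the integer-approximation pseudocode in EIP-4844, using the mainnet constants at activation:

```python
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator),
    computed by accumulating Taylor-series terms (per EIP-4844)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Price per unit of blob gas given the accumulated excess."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# At the target (zero excess) the fee sits at the 1-wei floor; sustained
# above-target usage compounds the fee exponentially.
print(blob_base_fee(0))
print(blob_base_fee(10 * BLOB_BASE_FEE_UPDATE_FRACTION))
```

This independent fee market is why blob prices can stay near the floor while regular gas is expensive: blobs only get costly when rollups themselves saturate blob capacity.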

Full Danksharding — the complete architecture — would go further. The goal is many more blobs per block, enabled by data availability sampling: a technique where individual nodes can verify that a full blob is available without downloading all of it.

The mechanism uses erasure coding. When a blob is posted, redundant data is added so that the full original blob can be reconstructed from any 50% of the extended data. Each node then downloads a handful of small samples at random positions. If every sample returns, the node can be statistically confident the full data is available, without ever downloading all of it. This means blob capacity can scale without each node proportionally increasing its bandwidth.
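The statistics behind that confidence are simple. With 50% redundancy, an adversary who wants the blob to be unrecoverable can publish at most just under half the pieces, so each independent random sample succeeds with probability below one half. A small calculation, under that simplifying worst-case assumption:

```python
def confidence(num_samples: int, available_fraction: float = 0.5) -> float:
    """Probability that at least one of num_samples independent random
    queries hits a missing piece when only `available_fraction` of the
    erasure-coded data has been published. If every piece is missing a
    query, the data cannot be withheld undetected."""
    return 1.0 - available_fraction ** num_samples

# Confidence grows geometrically with the sample count: each extra
# sample halves the chance of being fooled.
for k in (10, 20, 30):
    print(f"{k} samples -> {confidence(k):.10f}")
```

Thirty tiny samples give better than 1 - 2^-30 confidence, which is why sampling nodes can secure far more blob data than they could ever download in full.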

Where the Constraints Still Live

The binding constraint isn't storage as such; it's verifying that posted data is actually available without downloading all of it. Data availability sampling addresses this cryptographically, but implementing it correctly requires KZG polynomial commitments and erasure coding at scale. That's non-trivial.

A softer constraint is demand: as of early 2026, blob space isn't consistently full. The caps set at activation, a target of three blobs per block and a maximum of six, aren't binding most of the time. The urgency of increasing capacity depends on how quickly rollup adoption fills the available space.

There's also a structural question worth noting: some rollups are posting data to alternative data availability layers (Celestia, EigenDA, Avail) rather than Ethereum. If rollup ecosystems migrate substantially to alternative DA layers, the demand driver for Ethereum's own data sharding changes. That's an open design question in the ecosystem, not a resolved one.

What Would Confirm This Direction

- Blob count per block increasing above the current cap via hard forks, without controversy.
- Rollup fees remaining materially lower than pre-EIP-4844 levels, even during high load.
- Blob space approaching consistent fullness, creating demand pressure for capacity expansion.
- Data availability sampling reaching a mainnet-ready EIP stage.

What Would Break or Invalidate It

If blob space fills consistently and blob count increases stall in governance, rollup fees could approach pre-EIP-4844 levels — the constraint returns at a higher level. A more structural shift: if rollup ecosystems migrate substantially to alternative data availability layers at scale, Ethereum's sharding roadmap becomes less central to the actual scaling infrastructure. That doesn't invalidate the mechanism, but it changes the answer to who benefits.

Timing

Now: EIP-4844 is live. Blob-based data posting is active. Rollup fees are materially lower than before March 2024. Blob space isn't consistently full.

Next: Incremental blob count increases are in development. Continued rollup growth will expand demand for blob space, which determines how urgently capacity needs to increase.

Later: Full Danksharding with data availability sampling is a multi-year timeline. No firm deployment date. It requires simultaneous progress on several Ethereum roadmap items.

Boundary Statement

This post explains Ethereum's sharding rationale and the data sharding architecture being built. It doesn't cover rollup-specific mechanics in depth, the competing data availability layer ecosystem, or Ethereum's other roadmap items like single-slot finality or account abstraction. The distinction between where rollups post their data — Ethereum vs. alternatives — is a live architectural choice in the ecosystem that this explanation doesn't resolve.

The mechanism works as described. Whether it constitutes a thesis about any particular asset or protocol is outside this scope.
