Ethereum's base layer processes roughly 15 to 30 transactions per second. That ceiling isn't a bug or a shortfall in engineering ambition — it's a structural consequence of how the network maintains security. Every full node validates every transaction. That constraint means the network's total capacity equals the capacity of one node, regardless of how many nodes you add.
Sharding is how Ethereum intends to break that constraint. But the sharding Ethereum is actually building today looks different from what was originally described. The original proposal was execution sharding — multiple chains running in parallel, each processing a subset of transactions. What emerged instead is data sharding — a narrower, cleaner approach designed around rollups. Understanding the difference explains both why sharding is necessary and why it took this particular shape.
Every blockchain faces some version of the same problem: security requires nodes to verify independently, but independent verification means every node does the same work. You can't parallelize this without making tradeoffs.
In Ethereum's case, full nodes download and re-execute every transaction to verify the chain. This is computationally and bandwidth-intensive. It's also the source of the network's security guarantees — no transaction gets into the chain without being independently verified by thousands of nodes globally.
The cost is throughput. A single node can only process so many transactions per second before block validation becomes too slow, blocks get bloated, and the requirements to run a full node become prohibitive for individuals. Ethereum's current blocks are deliberately sized with this constraint in mind.
In a sharded system, the database is split into partitions — each shard holds a portion of the total state and is validated by a subset of nodes. Instead of every node doing everything, work is divided.
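The partitioning idea can be sketched in a few lines. This is purely illustrative, not any real client's logic: a deterministic hash maps each account to a shard, so all nodes agree on the mapping while each validates only its own slice.

```python
# Toy hash-based partitioner: every node computes the same mapping,
# but each validates only the accounts assigned to its shard.
# Shard count and addresses are hypothetical.
import hashlib

NUM_SHARDS = 64  # the original execution-sharding proposal envisioned 64 shards

def shard_for(address: str) -> int:
    """Deterministically map an account address to a shard."""
    digest = hashlib.sha256(address.lower().encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# The mapping is global and stable; the validation work is divided.
accounts = ["0x00a1", "0x00b2", "0x00c3"]
assignments = {a: shard_for(a) for a in accounts}
```

The catch, as the next paragraph describes, is that a transaction touching accounts in two different shards needs cross-shard coordination, and that is where the complexity lived.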
Ethereum's original plan was to shard execution: create 64 shard chains, each running in parallel, each with its own transactions and state. The challenge was cross-shard communication — if a transaction touches state on multiple shards, coordinating that cleanly is genuinely hard. The design was complex and high-risk.
Around 2020 to 2021, the thinking shifted. Rollups — Layer 2 networks that process transactions off-chain and post compressed data back to Ethereum for verification — had matured significantly. If rollups could handle execution, Ethereum's own job became narrower: provide cheap, abundant, cryptographically verifiable data space for rollups to post to.
This is the insight that changed the roadmap. Ethereum didn't need to be a faster execution environment. It needed to be reliable, secure data availability infrastructure for rollups handling execution at scale.
Under the rollup-centric model, a network like Arbitrum or Optimism processes transactions off-chain, batches them, compresses them, and posts the compressed data back to Ethereum. Ethereum validators don't re-execute those transactions — they verify that the data was posted correctly and remains available long enough for anyone to reconstruct the rollup's state if needed.
This is significantly less work per transaction than executing everything natively, which is why rollups can offer much lower fees than Ethereum's base layer.
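The batch-and-compress step can be sketched with standard-library tools. This is not any rollup's actual wire format (real rollups use tighter binary encodings), just an illustration of why batching pays: repetitive transaction structure compresses well.

```python
# Minimal sketch of the rollup posting flow: serialize many transactions
# together, compress, and post one batch instead of many L1 transactions.
# Transaction fields and serialization are hypothetical.
import json
import zlib

txs = [
    {"from": f"0x{i:040x}", "to": f"0x{i + 1:040x}", "value": 1000 + i, "nonce": i}
    for i in range(100)
]

raw = json.dumps(txs).encode()       # naive serialization of the batch
batch = zlib.compress(raw, level=9)  # compressed payload posted to L1

# Anyone holding the posted batch can reconstruct the transactions,
# which is the data-availability property Ethereum is asked to guarantee.
recovered = json.loads(zlib.decompress(batch))
```

The compressed batch is much smaller than the raw serialization, and a single posting amortizes L1 overhead across all hundred transactions.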
EIP-4844, known as Proto-Danksharding, activated in March 2024. It introduced blobs: a new, cheaper data format specifically for rollup batches. Blobs are posted to Ethereum, held for roughly 18 days, then pruned. They don't persist in the chain permanently, which makes them far cheaper than calldata — the format rollups previously had to use. Rollup transaction fees fell roughly 80–90% after EIP-4844 activated.
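The economics can be shown with back-of-the-envelope arithmetic. The blob size and calldata gas cost below are from EIP-4844 and the existing gas schedule; the gwei prices are hypothetical illustrative values, since both fee markets float constantly.

```python
# Rough cost comparison: posting 128 KB of rollup data as calldata
# vs. as one blob. Prices are made-up illustrative numbers.
BLOB_SIZE = 131_072          # bytes per blob (EIP-4844)
CALLDATA_GAS_PER_BYTE = 16   # gas per nonzero calldata byte

exec_gas_price_gwei = 20.0   # hypothetical execution-layer gas price
blob_gas_price_gwei = 0.1    # hypothetical blob base fee (separate market)

calldata_cost_gwei = BLOB_SIZE * CALLDATA_GAS_PER_BYTE * exec_gas_price_gwei
blob_cost_gwei = BLOB_SIZE * blob_gas_price_gwei  # one blob gas per byte

# Blobs are priced on their own EIP-1559-style fee market, and because
# they are pruned rather than stored forever, that market can clear far
# below execution gas -- the source of the post-4844 fee drop.
savings = 1 - blob_cost_gwei / calldata_cost_gwei
```

Under these illustrative prices the blob is cheaper by well over 90%; the real figure depends on where the two fee markets clear at any given moment.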
Full Danksharding — the complete architecture — would go further. The goal is many more blobs per block, enabled by data availability sampling: a technique where individual nodes can verify that a full blob is available without downloading all of it.
The mechanism uses erasure coding. When a blob is posted, redundant data is added so that the original blob can be reconstructed from any 50% of the extended data. Each node downloads a handful of random samples. If samples consistently come back, the node can be statistically confident the full data is available, without ever downloading all of it. This means blob capacity can scale without requiring each node to proportionally increase its bandwidth.
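The statistical argument is worth making concrete. Because reconstruction succeeds from any 50% of the extended data, an attacker who wants the blob to be unrecoverable must withhold at least half of it, so each uniform random sample has at least a 50% chance of hitting a missing chunk. A small sketch of that confidence calculation (sampling with replacement, which understates the real guarantee):

```python
# Probability that a sampling node detects a withheld blob.
# If less than 50% of the extended data were withheld, honest nodes
# could reconstruct the blob anyway, so 0.5 is the adversary's floor.
def confidence_after(samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that at least one random sample lands on withheld data."""
    return 1.0 - (1.0 - withheld_fraction) ** samples

# After 30 samples, the chance an unavailable blob slips past the node
# is below one in a billion -- with only 30 small downloads.
p30 = confidence_after(30)
```

Thirty tiny samples buy near-certainty, which is exactly why sampling lets blob capacity grow faster than per-node bandwidth.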
The binding constraint isn't storage broadly; it's proving availability without downloading everything. Data availability sampling addresses this cryptographically, but implementing it correctly requires KZG polynomial commitments and erasure coding at scale. That's non-trivial.
A softer constraint is demand: as of early 2026, blob space isn't consistently full. The current caps — a target of 3 and a maximum of 6 blobs per block at activation — aren't binding most of the time. The urgency of increasing capacity depends on how quickly rollup adoption fills available space.
There's also a structural question worth noting: some rollups are posting data to alternative data availability layers (Celestia, EigenDA, Avail) rather than Ethereum. If rollup ecosystems migrate substantially to alternative DA layers, the demand driver for Ethereum's own data sharding changes. That's an open design question in the ecosystem, not a resolved one.
Several signals would confirm this trajectory is holding: blob count per block increasing above the current cap via hard forks without controversy; rollup fees remaining materially lower than pre-EIP-4844 levels even during high load; blob space approaching consistent fullness, creating demand pressure for capacity expansion; and data availability sampling reaching a mainnet-ready EIP stage.
If blob space fills consistently and blob count increases stall in governance, rollup fees could approach pre-EIP-4844 levels — the constraint returns at a higher level. A more structural shift: if rollup ecosystems migrate substantially to alternative data availability layers at scale, Ethereum's sharding roadmap becomes less central to the actual scaling infrastructure. That doesn't invalidate the mechanism, but it changes the answer to who benefits.
Now: EIP-4844 is live. Blob-based data posting is active. Rollup fees are materially lower than before March 2024. Blob space isn't consistently full.
Next: Incremental blob count increases are in development. Continued rollup growth will expand demand for blob space, which determines how urgently capacity needs to increase.
Later: Full Danksharding with data availability sampling is a multi-year timeline. No firm deployment date. It requires simultaneous progress on several Ethereum roadmap items.
This post explains Ethereum's sharding rationale and the data sharding architecture being built. It doesn't cover rollup-specific mechanics in depth, the competing data availability layer ecosystem, or Ethereum's other roadmap items like single-slot finality or account abstraction. The distinction between where rollups post their data — Ethereum vs. alternatives — is a live architectural choice in the ecosystem that this explanation doesn't resolve.
The mechanism works as described. Whether it constitutes a thesis about any particular asset or protocol is outside this scope.