
Ethereum handles roughly 15 to 30 transactions per second on its base layer. Visa processes around 24,000. That gap isn't a bug waiting to be fixed with better engineering — it's a consequence of deliberate design choices that are hard to undo without trading away the properties that make public blockchains valuable in the first place.
Layer 2 solutions exist because of this structural constraint. Not as a temporary workaround, but as the intended long-term architecture for getting both properties at once: decentralization at the base layer, scale at the execution layer.
To understand why Layer 2s are necessary, you need to understand why simply making Layer 1 blockchains faster is harder than it sounds.
Ethereum and Bitcoin nodes are run by tens of thousands of participants globally. Each node independently validates every transaction. That independence is what makes censorship resistance and trustless settlement possible — no single party can decide which transactions get included or reversed.
But here's the tension: if you want more throughput on the base layer, you need bigger blocks (more transactions per block) or shorter block times (blocks produced more frequently). Both approaches require nodes to process more data. More data means higher hardware requirements. Higher hardware requirements mean fewer people can afford to run a node. Fewer nodes means the network gets more centralized.
This is the blockchain trilemma — you can optimize for decentralization, security, and scalability, but pushing hard on one typically weakens another. Bitcoin's 7 TPS reflects a conscious choice to keep node requirements minimal enough for individuals to participate. Ethereum's throughput is similar, for similar reasons.
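That ceiling is just arithmetic. A rough sketch of the relationship, using approximate Ethereum-like parameters (the gas limit, block time, and per-transaction gas figures here are illustrative, not protocol constants):

```python
# Back-of-the-envelope throughput implied by block capacity and cadence.
# All numbers are illustrative approximations.

def max_tps(block_gas_limit: int, avg_gas_per_tx: int, block_time_s: float) -> float:
    """Transactions per second a chain can sustain at a given block size and rate."""
    txs_per_block = block_gas_limit / avg_gas_per_tx
    return txs_per_block / block_time_s

# Ethereum-like parameters: ~30M gas per block, ~12s blocks.
# A simple transfer costs 21,000 gas; a realistic mixed workload
# averages closer to ~100,000 gas per transaction.
print(max_tps(30_000_000, 21_000, 12))   # transfers only: ~119 TPS
print(max_tps(30_000_000, 100_000, 12))  # mixed workload: ~25 TPS
```

Raising the result means a bigger gas limit or a shorter block time, which is exactly the node-burden trade-off described above.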
You could scale the base layer more aggressively. Some alternative Layer 1s do. But you'd be making a different set of trade-offs, and the resulting network has different properties. That's not a dismissal — it's a description.
The core idea is to move computation off the base layer while preserving the base layer's security for final settlement.
Think of it this way: instead of every transaction happening on Ethereum mainnet (where every node globally validates it), transactions happen in a separate environment where a much smaller set of actors process them. Periodically, a compressed summary of that activity — or a cryptographic proof of its validity — gets posted back to the base layer. Ethereum then settles the state.
The base layer still guarantees finality. It just isn't doing most of the computation.
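The data flow can be sketched in a few lines. This is a toy model with made-up names: a trivial balance transfer stands in for the EVM, a hash of the state stands in for a Merkle root, and `zlib` stands in for the L2's real compression scheme:

```python
# Minimal sketch of the rollup data flow: execute many transactions
# off the base layer, then post only compressed batch data plus the
# resulting state root back to L1. All names here are illustrative.
import hashlib
import zlib

def execute(state: dict, tx: dict) -> dict:
    """Toy state transition: move `amount` from sender to recipient."""
    new = dict(state)
    new[tx["from"]] = new.get(tx["from"], 0) - tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def state_root(state: dict) -> str:
    """Stand-in for a Merkle root: a hash committing to the full state."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def build_batch(state: dict, txs: list) -> tuple:
    """Run every transaction off-chain; return final state, its root,
    and the compressed transaction data that actually hits L1."""
    for tx in txs:
        state = execute(state, tx)
    compressed = zlib.compress(repr(txs).encode())
    return state, state_root(state), compressed

state = {"alice": 100_000, "bob": 0}
txs = [{"from": "alice", "to": "bob", "amount": 10} for _ in range(1000)]
state, root, blob = build_batch(state, txs)
# The L1 contract stores `root` and the compressed data; it never
# re-executes the 1000 transactions itself.
```

The base layer's job shrinks to checking (or accepting, pending challenge) that `root` really follows from the posted data.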
Rollups are the dominant approach right now. They batch hundreds or thousands of transactions together and post the results back to Ethereum. Optimistic rollups (like Arbitrum and Optimism) assume each batch is valid and accept fraud proofs during a challenge window; ZK-rollups (like zkSync and StarkNet) post a validity proof alongside each batch.
Both approaches inherit Ethereum's security for finality. The execution happens elsewhere; the settlement doesn't.
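The practical difference is when verification happens. A hypothetical sketch (the function names and the seven-day default are illustrative; real systems verify Merkle proofs and SNARKs, not Python callbacks):

```python
# Optimistic settlement: accept now, final only after the dispute
# window closes with no successful fraud proof.
def settle_optimistic(batch: dict, now: int, challenge_window_s: int = 7 * 24 * 3600):
    batch["posted_at"] = now
    batch["final_after"] = now + challenge_window_s

def is_final_optimistic(batch: dict, now: int) -> bool:
    return now >= batch["final_after"] and not batch.get("fraud_proven")

# ZK settlement: reject up front unless the validity proof checks out.
def settle_zk(batch: dict, verify_proof) -> None:
    if not verify_proof(batch["proof"], batch["state_root"]):
        raise ValueError("invalid validity proof: batch rejected")
    batch["final"] = True

batch = {"state_root": "root"}
settle_optimistic(batch, now=0)
print(is_final_optimistic(batch, now=3600))           # inside the window: False
print(is_final_optimistic(batch, now=7 * 24 * 3600))  # window closed: True
```

This is also why the withdrawal-delay trade-off discussed later falls only on the optimistic side: finality there is defined by the window, not by a proof.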
Before rollups were common, using Ethereum during peak activity was genuinely expensive. During DeFi Summer in 2020, a single token swap could cost $50 or more in gas. During the NFT boom in 2021, minting costs hit hundreds of dollars. This wasn't because Ethereum was broken — it was because demand for block space exceeded supply, and users bid up gas accordingly.
Layer 2s break that dynamic. Instead of competing for scarce base-layer block space, users transact in an environment with much higher throughput. The L2 then posts compressed data to Ethereum, which costs far less per transaction than executing each one individually on-chain.
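The amortization is simple division. Figures below are illustrative, not live gas prices:

```python
# Why batching is cheaper: the L1 posting cost is paid once per batch
# and split across every transaction in it. Numbers are illustrative.

def cost_per_tx(batch_post_cost_usd: float, txs_in_batch: int,
                l2_execution_cost_usd: float = 0.001) -> float:
    """Per-user cost: amortized share of the L1 post, plus L2 execution."""
    return batch_post_cost_usd / txs_in_batch + l2_execution_cost_usd

# Posting one batch might cost a few dollars on L1; split across
# thousands of transactions, each one pays a fraction of a cent.
print(cost_per_tx(5.00, 10_000))  # ~$0.0015
```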
EIP-4844, implemented in March 2024, made this even more dramatic. It introduced a new transaction type — blob transactions — specifically designed for L2 data posting. Blobs offer more space and lower cost than calldata (the previous method). Within weeks of the upgrade, L2 transaction fees dropped by 90%+ on several networks. Arbitrum, Base, and Optimism all saw fees fall to fractions of a cent for most operations.
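The calldata byte costs in the sketch below (16 gas per nonzero byte, 4 per zero byte) come from the protocol; the fee levels are assumptions, since blob gas trades in its own fee market and its price floats independently of execution gas:

```python
# Rough comparison of the two ways an L2 can post data to Ethereum.
# Byte-level gas rules are protocol constants; fee levels are assumed.

def calldata_cost_gwei(data: bytes, gas_price_gwei: float) -> float:
    """Pre-4844 method: data in calldata, priced per byte in execution gas."""
    gas = sum(16 if b else 4 for b in data)  # 16 gas nonzero byte, 4 gas zero byte
    return gas * gas_price_gwei

def blob_cost_gwei(n_blobs: int, blob_base_fee_gwei: float) -> float:
    """Post-4844 method: blobs priced in a separate blob-gas market."""
    GAS_PER_BLOB = 131_072  # one blob holds 128 KiB
    return n_blobs * GAS_PER_BLOB * blob_base_fee_gwei

data = bytes(range(256)) * 512  # 128 KiB of mostly nonzero bytes
# Assumed fee levels: 20 gwei execution gas, 0.01 gwei blob gas.
print(calldata_cost_gwei(data, 20.0))  # tens of millions of gwei
print(blob_cost_gwei(1, 0.01))         # thousands of gwei
```

With blob gas cheap and abundant, the same 128 KiB costs orders of magnitude less to post, which is where the post-Dencun fee drop comes from.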
This is a structural shift, not a temporary reprieve. The base layer is being redesigned to serve as a settlement and data availability layer for L2s, not as the primary execution environment.
Layer 2s aren't without trade-offs of their own. A few worth keeping in mind:
Sequencer centralization. Most rollups today rely on a single sequencer — an entity that orders transactions before they're batched and posted to L1. If the sequencer goes down, the L2 goes down. If the sequencer acts adversarially, it can censor transactions or extract value through ordering. The major rollup teams acknowledge this as a temporary state; decentralized sequencer designs are in development.
Withdrawal delays. On optimistic rollups, moving assets from L2 back to L1 requires waiting through the dispute window (typically seven days) unless you use a liquidity provider who bridges assets for a fee. ZK-rollups don't have this problem in principle, though proof generation delays add some complexity.
Liquidity fragmentation. With multiple L2s running simultaneously, liquidity for any given asset gets split across environments. A token on Arbitrum isn't automatically available on zkSync. Bridges handle this, but bridges introduce their own security surface.
These constraints are real. They're also being actively worked on across the ecosystem.
The thesis here — that L2s are the right architecture for scaling Ethereum without sacrificing base-layer decentralization — is already partially validated. Rollup TVL has grown significantly since 2022. Base, launched by Coinbase in 2023, now regularly processes more transactions per day than Ethereum mainnet. EIP-4844 made blob space available and fees dropped as expected.
Confirmation of the fuller thesis would look like: sequencer decentralization shipping across major rollups, full danksharding increasing blob capacity meaningfully, L2 activity continuing to grow while Ethereum mainnet gas fees primarily reflect L2 settlement activity rather than user-facing competition.
A few things could shift this picture:
A material security failure at the rollup layer — not a smart contract bug in a protocol running on a rollup, but a failure in the rollup's own settlement mechanism — would call into question whether inheriting L1 security actually works as designed. This hasn't happened, but it's the primary systemic risk.
If an alternative L1 achieves genuine decentralization at high throughput without rollups, and does so durably over time (not just under current load conditions), that would be evidence the trilemma can be escaped differently. How likely that is remains debatable; the mechanism constraints are real.
Regulatory prohibition of L2 sequencers — treating them as money transmitters or requiring licensing — could significantly complicate the current architecture, particularly for centralized sequencers.
Now: Rollups are operational and well-capitalized. EIP-4844 is live. Fees on major L2s are low. This architecture is in production, not in proposal.
Next: Sequencer decentralization is the most significant structural development to watch. DVT (distributed validator technology) for validators, and decentralized sequencer designs for rollups, are both in active development. Full danksharding (more blob space) is on Ethereum's roadmap but likely years away.
Later: Full danksharding at scale, combined with decentralized sequencers, is the endgame for this architecture — a base layer providing security and data availability, with high-throughput execution happening in rollups. That's a multi-year roadmap.
This is an explanation of why Layer 2s exist as a structural matter — not a recommendation to use any specific rollup or bridge assets between chains. The security properties of individual L2 implementations vary, and the constraint landscape (particularly around sequencer centralization) is evolving.
The base layer constraint is structural. Layer 2s are the response to that constraint. Whether this architecture resolves all scaling questions for all use cases is a separate question, and the honest answer is: not fully yet.




