Why Layer 2 Solutions Are Necessary

Layer 2 solutions exist because base-layer blockchains face a structural trade-off between decentralization, security, and throughput, and scaling the base layer directly means giving something up. This piece explains the mechanism and what's changing.
Lewis Jackson
CEO and Founder

Ethereum handles roughly 15 to 30 transactions per second on its base layer. Visa processes around 24,000. That gap isn't a bug waiting to be fixed with better engineering — it's a consequence of deliberate design choices that are hard to undo without trading away the properties that make public blockchains valuable in the first place.

Layer 2 solutions exist because of this structural constraint. Not as a temporary workaround, but as the intended long-term architecture for getting both properties at once: decentralization at the base layer, scale at the execution layer.

The Constraint at the Root

To understand why Layer 2s are necessary, you need to understand why simply making Layer 1 blockchains faster is harder than it sounds.

Ethereum and Bitcoin nodes are run by tens of thousands of participants globally. Each node independently validates every transaction. That independence is what makes censorship resistance and trustless settlement possible — no single party can decide which transactions get included or reversed.

But here's the tension: if you want more throughput on the base layer, you need bigger blocks (more transactions per block) or shorter block times (blocks produced more frequently). Both approaches require nodes to process more data. More data means higher hardware requirements. Higher hardware requirements mean fewer people can afford to run a node. Fewer nodes mean a more centralized network.
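
A back-of-the-envelope sketch puts rough numbers on that chain of consequences. Every figure below is an assumption chosen for illustration, not a protocol constant:

```python
# Back-of-the-envelope sketch: block size and block time set both
# throughput and the data rate every full node must keep up with.
# All numbers here are illustrative assumptions, not protocol constants.

def network_load(txs_per_block: int, block_time_s: float, avg_tx_bytes: int):
    tps = txs_per_block / block_time_s
    bytes_per_day = tps * avg_tx_bytes * 86_400  # seconds per day
    return tps, bytes_per_day

# A roughly Ethereum-like base case (assumed figures).
tps, daily = network_load(txs_per_block=150, block_time_s=12, avg_tx_bytes=300)
print(f"base case:   {tps:7.1f} TPS, {daily / 1e9:6.2f} GB/day per node")

# Push throughput ~100x with bigger, more frequent blocks.
tps, daily = network_load(txs_per_block=7_500, block_time_s=6, avg_tx_bytes=300)
print(f"scaled case: {tps:7.1f} TPS, {daily / 1e9:6.2f} GB/day per node")
```

The throughput multiplier and the per-node data multiplier are the same number: you can't raise one without raising the other, and the second is what prices ordinary participants out of running a node.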

This is the blockchain trilemma — you can optimize for decentralization, security, and scalability, but pushing hard on one typically weakens another. Bitcoin's 7 TPS reflects a conscious choice to keep node requirements minimal enough for individuals to participate. Ethereum's throughput is similar, for similar reasons.

You could scale the base layer more aggressively. Some alternative Layer 1s do. But you'd be making a different set of trade-offs, and the resulting network has different properties. That's not a dismissal — it's a description.

What Layer 2s Actually Do

The core idea is to move computation off the base layer while preserving the base layer's security for final settlement.

Think of it this way: instead of every transaction happening on Ethereum mainnet (where every node globally validates it), transactions happen in a separate environment where a much smaller set of actors process them. Periodically, a compressed summary of that activity — or a cryptographic proof of its validity — gets posted back to the base layer. Ethereum then settles the state.

The base layer still guarantees finality. It just isn't doing most of the computation.
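
A minimal sketch of that division of labor, with hypothetical types and helper names (nothing here corresponds to any real rollup's API):

```python
# Minimal sketch of the L1/L2 division of labor. Every name here is
# hypothetical; real rollups differ substantially in the details.
import zlib
from dataclasses import dataclass

@dataclass
class Batch:
    compressed_txs: bytes  # compressed transaction data posted to L1
    new_state_root: str    # claimed L2 state after executing the batch

def build_batch(txs: list[bytes], new_state_root: str) -> Batch:
    # The L2 executes transactions off-chain, then prepares only a
    # compressed summary plus the resulting state root for L1.
    return Batch(compressed_txs=zlib.compress(b"".join(txs)),
                 new_state_root=new_state_root)

def settle_on_l1(l1_accepted_roots: list[str], batch: Batch) -> None:
    # L1 does not re-execute the transactions. It stores the data and
    # records the claimed root; proofs (fraud or validity) defend it.
    l1_accepted_roots.append(batch.new_state_root)
```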

Rollups are the dominant approach right now. They batch hundreds or thousands of transactions together and post the results back to Ethereum, secured either by fraud proofs (optimistic rollups, like Arbitrum and Optimism) or by validity proofs (ZK-rollups, like zkSync and StarkNet).

  • Optimistic rollups assume transactions are valid unless challenged. There's a dispute window — typically seven days — during which anyone can submit a fraud proof if something went wrong. If no challenge comes, the state is accepted.
  • ZK-rollups generate a cryptographic proof that the batch of transactions is valid. This proof can be verified on-chain quickly and cheaply, with no dispute window required. The math guarantees correctness rather than relying on economic incentives.

Both approaches inherit Ethereum's security for finality. The execution happens elsewhere; the settlement doesn't.
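
The practical difference between the two shows up in when a batch becomes final. Here's a simplified sketch of the two acceptance rules; the seven-day window matches the figure cited above, and the rest is an illustration, not a spec:

```python
# Two acceptance rules, simplified. The seven-day window matches the
# figure cited above; everything else is an illustrative simplification.
from datetime import datetime, timedelta

DISPUTE_WINDOW = timedelta(days=7)

def optimistic_final(posted_at: datetime, challenged: bool, now: datetime) -> bool:
    # Accepted by default: a successful fraud proof inside the window
    # reverts the batch; otherwise it finalizes when the window closes.
    return (not challenged) and now >= posted_at + DISPUTE_WINDOW

def zk_final(validity_proof_verified: bool) -> bool:
    # No waiting: the batch is final once the on-chain verifier
    # accepts the validity proof.
    return validity_proof_verified
```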

The Cost Structure Changes When You Leave the Base Layer

Before rollups were common, using Ethereum during peak activity was genuinely expensive. During DeFi Summer in 2020, a single token swap could cost $50 or more in gas. During the NFT boom in 2021, minting costs hit hundreds of dollars. This wasn't because Ethereum was broken — it was because demand for block space exceeded supply, and users bid up gas accordingly.

Layer 2s break that dynamic. Instead of competing for scarce base-layer block space, users transact in an environment with much higher throughput. The L2 then posts compressed data to Ethereum, which costs far less per transaction than executing each one individually on-chain.
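
The economics are plain division: whatever a batch costs to post on L1 is shared by every transaction inside it. A sketch with assumed round numbers:

```python
# Amortization sketch. Both figures are assumed round numbers,
# not measurements from any specific rollup.
l1_posting_cost_usd = 20.0  # assumed cost to post one batch to Ethereum
batch_size = 4_000          # assumed transactions per batch

per_tx_cost = l1_posting_cost_usd / batch_size
print(f"${per_tx_cost:.4f} of L1 cost per L2 transaction")  # $0.0050
```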

EIP-4844, implemented in March 2024, made this even more dramatic. It introduced a new transaction type — blob transactions — specifically designed for L2 data posting. Blobs offer more space and lower cost than calldata (the previous method). Within weeks of the upgrade, L2 transaction fees dropped by 90%+ on several networks. Arbitrum, Base, and Optimism all saw fees fall to fractions of a cent for most operations.
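
A sketch of why blobs undercut calldata. The gas constants below come from EIP-2028 and EIP-4844; the fee levels are assumptions, since both markets reprice dynamically:

```python
# Calldata vs. blob posting cost. Gas constants are from EIP-2028 and
# EIP-4844; the gas prices are illustrative assumptions only.
BLOB_SIZE_BYTES = 131_072        # one blob: 4096 field elements x 32 bytes
GAS_PER_BLOB = 131_072           # blob gas consumed per blob (EIP-4844)
CALLDATA_GAS_PER_BYTE = 16       # per non-zero byte (EIP-2028)

data_bytes = BLOB_SIZE_BYTES     # post one blob's worth of L2 data

# Assumed prices, in gwei per gas unit, for illustration only.
execution_gas_price_gwei = 30.0  # ordinary gas market
blob_gas_price_gwei = 0.1        # blob market frequently prices far lower

calldata_eth = data_bytes * CALLDATA_GAS_PER_BYTE * execution_gas_price_gwei / 1e9
blob_eth = GAS_PER_BLOB * blob_gas_price_gwei / 1e9
print(f"calldata: {calldata_eth:.4f} ETH, blob: {blob_eth:.6f} ETH")
```

The separate blob fee market is the point: L2 data no longer competes with ordinary transactions for the same gas.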

This is a structural shift, not a temporary reprieve. The base layer is being redesigned to serve as a settlement and data availability layer for L2s, not as the primary execution environment.

The Remaining Constraints

Layer 2s aren't without tradeoffs of their own. A few worth keeping in mind:

Sequencer centralization. Most rollups today rely on a single sequencer — an entity that orders transactions before they're batched and posted to L1. If the sequencer goes down, the L2 goes down. If the sequencer acts adversarially, it can censor transactions or extract value through ordering. The major rollup teams acknowledge this as a temporary state; decentralized sequencer designs are in development.

Withdrawal delays. On optimistic rollups, moving assets from L2 back to L1 requires waiting through the dispute window (typically seven days) unless you use a liquidity provider who bridges assets for a fee. ZK-rollups don't have this problem in principle, though proof generation delays add some complexity.

Liquidity fragmentation. With multiple L2s running simultaneously, liquidity for any given asset gets split across environments. A token on Arbitrum isn't automatically available on zkSync. Bridges handle this, but bridges introduce their own security surface.

These constraints are real. They're also being actively worked on across the ecosystem.

What Would Confirm This Direction

The thesis here — that L2s are the right architecture for scaling Ethereum without sacrificing base-layer decentralization — is already partially validated. Rollup TVL has grown significantly since 2022. Base, launched by Coinbase in 2023, now regularly processes more transactions per day than Ethereum mainnet. EIP-4844 made blob space available and fees dropped as expected.

Confirmation of the fuller thesis would look like:

  • Sequencer decentralization shipping across major rollups.
  • Full danksharding increasing blob capacity meaningfully.
  • L2 activity continuing to grow while Ethereum mainnet gas fees primarily reflect L2 settlement activity rather than user-facing competition.

What Would Break or Invalidate It

A few things could shift this picture:

A material security failure at the rollup layer — not a smart contract bug in a protocol running on a rollup, but a failure in the rollup's own settlement mechanism — would call into question whether inheriting L1 security actually works as designed. This hasn't happened, but it's the primary systemic risk.

If an alternative L1 achieves genuine decentralization at high throughput without rollups, and does so durably over time (not just under current load conditions), that would be evidence the trilemma can be escaped differently. How likely that is remains debatable; the mechanical constraints are real.

Regulatory prohibition of L2 sequencers — treating them as money transmitters or requiring licensing — could significantly complicate the current architecture, particularly for centralized sequencers.

Timing

Now: Rollups are operational and well-capitalized. EIP-4844 is live. Fees on major L2s are low. This architecture is in production, not in proposal.

Next: Sequencer decentralization is the most significant structural development to watch. Distributed validator technology (DVT) on the validator side and decentralized sequencer designs on the rollup side are both in active development. Full danksharding (more blob space) is on Ethereum's roadmap but likely years away.

Later: Full danksharding at scale, combined with decentralized sequencers, is the endgame for this architecture — a base layer providing security and data availability, with high-throughput execution happening in rollups. That's a multi-year roadmap.

Boundary Statement

This is an explanation of why Layer 2s exist as a structural matter — not a recommendation to use any specific rollup or bridge assets between chains. The security properties of individual L2 implementations vary, and the constraint landscape (particularly around sequencer centralization) is evolving.

The base layer constraint is structural. Layer 2s are the response to that constraint. Whether this architecture resolves all scaling questions for all use cases is a separate question, and the honest answer is: not fully yet.
