How Layer 2 Scaling Works

Layer 2 moves computation off the base blockchain while inheriting its security. This post explains the core mechanism — rollups, data availability, bridge risk — and what's actually changing with Ethereum's rollup-centric roadmap.
Lewis Jackson
CEO and Founder

There's a hard constraint at the base of every major blockchain: networks that prioritize decentralization and security tend to be slow. Bitcoin processes roughly 7 transactions per second. Ethereum, before its scaling infrastructure matured, managed around 15–30. Visa handles thousands per second at peak.

That gap isn't going away through software patches to the base chain. Closing it fundamentally would require trading away either decentralization or security — the blockchain trilemma describes this constraint. You can optimize for two of the three properties, but not all three simultaneously, at least not without significant trade-offs.

Layer 2 is the class of solutions that attempts to sidestep this trade-off rather than resolve it. The underlying logic: instead of making the base layer (Layer 1) faster, move most computation off the base chain while still using it as the ultimate source of truth.

The Core Mechanism

Layer 2 protocols share one defining property: they inherit security from Layer 1 rather than building a separate security model from scratch.

In practice, this means a Layer 2 takes transactions that would otherwise occur on L1, processes them off-chain (faster, cheaper), and periodically settles the result back to L1. L1 doesn't verify every individual transaction — it verifies a compressed summary or cryptographic proof of the off-chain activity. The key design question in any L2 is: how does L1 know the summary is honest?
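The batch-execute-commit loop can be sketched in a few lines. This is a toy model, not any real rollup's code: the transaction format (`sender`, `recipient`, `amount`) is invented for illustration, and the state commitment is a plain hash where production rollups use a Merkle root so individual accounts can be proven without the full state.

```python
import hashlib
import json

def apply_tx(state: dict, tx: dict) -> dict:
    """Apply one transfer off-chain (hypothetical tx format for illustration)."""
    new = dict(state)
    new[tx["sender"]] = new.get(tx["sender"], 0) - tx["amount"]
    new[tx["recipient"]] = new.get(tx["recipient"], 0) + tx["amount"]
    return new

def state_root(state: dict) -> str:
    """Commit to the full state with one hash (real rollups use a Merkle root)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def settle_batch(state: dict, txs: list) -> tuple:
    """Execute a batch off-chain; return the new state plus the summary posted to L1."""
    for tx in txs:
        state = apply_tx(state, tx)
    # L1 stores only this commitment (plus compressed tx data),
    # not the per-transaction execution it would otherwise re-run.
    return state, state_root(state)

state = {"alice": 100, "bob": 0}
state, root = settle_batch(state, [{"sender": "alice", "recipient": "bob", "amount": 30}])
```

Thousands of off-chain transfers collapse into one commitment on L1 — that compression is the entire throughput story.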

Different architectures answer this differently.

Optimistic rollups (Arbitrum, Optimism, Base) assume transactions are valid unless challenged. Compressed transaction data is posted to L1 alongside the proposed state. A challenge window — typically seven days — then opens, during which anyone can submit a fraud proof if they spot invalid state. If no valid challenge arrives, the state finalizes. The name comes from the baked-in assumption: most submitted state is honest, so don't verify everything, just make dishonesty challengeable.
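A fraud proof reduces to re-execution plus comparison. The sketch below assumes a simplified transfer format and a hash in place of a Merkle root; real systems (e.g. Arbitrum's interactive dispute protocol) narrow the dispute to a single instruction rather than replaying the whole batch, but the logical test is the same: does the posted data reproduce the claimed state?

```python
import hashlib
import json

CHALLENGE_WINDOW_BLOCKS = 50_400  # ~7 days at 12s blocks (illustrative constant)

def root(state: dict) -> str:
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def execute(txs: list, state: dict) -> dict:
    state = dict(state)
    for sender, recipient, amount in txs:
        state[sender] -= amount
        state[recipient] = state.get(recipient, 0) + amount
    return state

def challenge(prev_state: dict, posted_txs: list, claimed_root: str) -> bool:
    """Anyone can re-execute the posted batch data during the challenge
    window; a root mismatch constitutes a successful fraud proof."""
    return root(execute(posted_txs, prev_state)) != claimed_root

prev = {"alice": 100}
txs = [("alice", "bob", 40)]
honest_root = root(execute(txs, prev))
```

An honest root survives any challenge; a dishonest one is caught by any single watcher — which is why the system needs only one honest verifier, not a majority.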

ZK-rollups (zkSync, Starknet, Polygon zkEVM) take the opposite approach. They generate a cryptographic validity proof — a mathematical certificate that the off-chain computation was done correctly — and post this proof to L1 for verification. No challenge window is needed. The math either checks out or it doesn't, and verification is fast. The computational overhead sits on the prover side (generating proofs is expensive), not on L1 or end users.
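The cost asymmetry ZK-rollups rely on — expensive proving, cheap verification — can be illustrated with a deliberately non-cryptographic stand-in. Factoring is not a SNARK and proves nothing about state transitions; it just makes the division of labor concrete: the prover does a search, the verifier does one multiplication.

```python
def prove(n: int) -> tuple:
    """Prover side: expensive search (stands in for proof generation,
    which in real ZK-rollups costs far more than the computation itself)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return (p, n // p)
    raise ValueError("no nontrivial factorization")

def verify(n: int, proof: tuple) -> bool:
    """Verifier side (the L1 contract): one cheap, constant-time check."""
    p, q = proof
    return p > 1 and q > 1 and p * q == n

proof = prove(2021)  # prover finds (43, 47) by trial division
```

L1 never redoes the prover's work; it only checks the certificate. That is why ZK-rollups need no challenge window: an invalid proof simply fails verification immediately.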

Both approaches post data to L1. That data-posting step is load-bearing: it's what lets any user reconstruct the L2 state and withdraw funds independently, even if the L2 operator disappears entirely.

Where the Constraints Live

Three distinct constraint types shape how L2 behaves — and conflating them causes most of the confusion.

Execution capacity is where the throughput gain lives. L2 processes transactions without requiring every node in a decentralized network to re-execute every computation. The rollup operator (currently a centralized sequencer in most implementations) handles execution and batches it. This is the source of the speed improvement.

Data availability is the binding cost constraint. Even with computation moved off-chain, L2s must post enough data to L1 that users can independently verify state. Before EIP-4844 (implemented March 2024), this data had to be posted as calldata — expensive, permanently stored on-chain. EIP-4844 introduced blobs: a new data format that's cheaper to post and doesn't need to persist forever. This single change reduced L2 transaction fees by 80–90% on major networks almost immediately. Data availability, not execution, was the cost bottleneck.
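Rough arithmetic shows why data availability dominated costs. The gas price, blob price, and batch size below are assumed round numbers for illustration, not measured values; the 16-gas-per-nonzero-byte calldata cost is Ethereum's actual post-EIP-2028 rate, and blob data is priced in its own fee market that has generally cleared far cheaper.

```python
CALLDATA_GAS_PER_BYTE = 16     # nonzero calldata byte cost (EIP-2028)
gas_price_gwei = 30            # assumed L1 execution gas price
blob_gas_price_gwei = 1        # assumed blob fee (separate, usually cheaper, market)
batch_bytes = 120_000          # one compressed rollup batch (assumed size)

# Pre-4844: batch posted as calldata, paying execution-gas prices per byte.
calldata_cost_eth = batch_bytes * CALLDATA_GAS_PER_BYTE * gas_price_gwei * 1e-9

# Post-4844: batch posted as a blob at 1 blob-gas per byte, priced separately.
blob_cost_eth = batch_bytes * 1 * blob_gas_price_gwei * 1e-9

savings = 1 - blob_cost_eth / calldata_cost_eth
```

Under these assumptions the data-posting cost drops by well over 90%, and since that cost was amortized across every transaction in the batch, per-user fees fell almost mechanically.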

Bridge risk is the constraint most people underweight. Moving assets from L1 to an L2 requires a bridge — typically a smart contract that locks assets on L1 and mints a representation on L2. If that contract has a bug or is exploited, you can lose assets even if the L2 mechanism itself is sound. The Ronin bridge hack ($625M, 2022) and Wormhole exploit ($320M, 2022) didn't break any rollup mechanism. They broke the bridge contracts. These are categorically different failure modes, but users routinely conflate them.
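The lock-and-mint pattern, and where it breaks, fits in a toy class. This is a sketch, not any real bridge's design: the invariant is that wrapped supply on L2 never exceeds assets locked on L1, and the historical exploits lived in the proof-verification step guarding that invariant — Wormhole's attacker bypassed signature verification to mint unbacked tokens.

```python
class ToyBridge:
    """Minimal lock-and-mint bridge. Invariant: minted L2 supply <= L1 locked."""

    def __init__(self, verify_proof):
        self.locked = 0                  # assets held by the L1 contract
        self.minted = 0                  # wrapped supply issued on L2
        self.verify_proof = verify_proof

    def lock(self, amount: int) -> dict:
        self.locked += amount
        return {"amount": amount, "valid": True}  # receipt relayed to L2

    def mint(self, receipt: dict) -> None:
        # Bridge exploits break THIS check -- not the rollup mechanism.
        if not self.verify_proof(receipt):
            raise ValueError("invalid deposit proof")
        self.minted += receipt["amount"]

    def solvent(self) -> bool:
        return self.minted <= self.locked

sound = ToyBridge(verify_proof=lambda r: r["valid"])
sound.mint(sound.lock(100))

buggy = ToyBridge(verify_proof=lambda r: True)     # broken verification
buggy.mint({"amount": 10**6, "valid": False})      # forged receipt: unbacked mint
```

Note that nothing in the rollup's fraud- or validity-proof machinery is touched here — the insolvency is entirely a bridge-contract failure, which is the categorical distinction the text draws.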

What's Changing

Ethereum's roadmap is now explicitly rollup-centric. The core development direction treats L1 as a settlement and data availability layer for L2s — not as a primary execution environment. That's a structural commitment: future L1 upgrades are designed to serve L2s, not compete with them.

EIP-4844 — proto-danksharding — was the first major step. Eventual full danksharding will expand L1's blob capacity further, allowing more L2 transaction data per block. More blob space means lower fees for L2 users — a direct, mechanical relationship.
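The capacity-to-fee link is mechanical because blob pricing is an EIP-1559-style exponential in sustained excess demand. The constants below are the actual EIP-4844 values; production clients compute the exponential with an integer approximation (`fake_exponential`), for which `math.exp` is a close stand-in here.

```python
import math

MIN_BLOB_BASE_FEE = 1          # wei (EIP-4844 constant)
UPDATE_FRACTION = 3_338_477    # BLOB_BASE_FEE_UPDATE_FRACTION (EIP-4844 constant)
GAS_PER_BLOB = 131_072         # one blob = 128 KiB, 1 blob-gas per byte

def blob_base_fee(excess_blob_gas: int) -> float:
    """Blob price grows exponentially in accumulated above-target usage.

    excess_blob_gas rises when blocks carry more blob data than the target
    and drains when they carry less -- so raising the per-block target
    (danksharding's goal) lets more demand clear before fees climb.
    """
    return MIN_BLOB_BASE_FEE * math.exp(excess_blob_gas / UPDATE_FRACTION)

calm = blob_base_fee(0)                    # at target: minimum fee
congested = blob_base_fee(10 * GAS_PER_BLOB)  # ten blobs of sustained excess
```

More blob capacity per block means the same L2 demand accumulates less excess blob gas, keeping the fee curve near its floor — that is the "direct, mechanical relationship."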

The sequencer centralization question is in motion. Most L2s currently run centralized sequencers: single entities that order transactions and produce batches. This is a practical trade-off — it's faster and easier to upgrade during early deployment. But centralized sequencers represent a censorship vector and a single point of failure. Decentralized sequencer sets are in active development across major L2s; timelines vary and none have shipped at scale yet.
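Why a centralized sequencer is a censorship vector can be made concrete in a few lines. This is a hypothetical model, not any rollup's implementation: a single party chooses both ordering and inclusion, and nothing local forces it to include a given transaction (major rollups mitigate this with L1 forced-inclusion paths, whose designs vary).

```python
def sequence(mempool: list, censored: frozenset = frozenset()) -> list:
    """Toy centralized sequencer: one party picks ordering AND inclusion.
    Dropping a sender's transactions is silent and locally unchallengeable --
    the censorship vector decentralized sequencer sets aim to close."""
    return [tx for tx in sorted(mempool, key=lambda t: t["fee"], reverse=True)
            if tx["sender"] not in censored]

mempool = [{"sender": "alice", "fee": 2}, {"sender": "bob", "fee": 9}]
batch = sequence(mempool, censored=frozenset({"bob"}))  # bob is silently excluded
```

The single point of failure is the same function viewed differently: if this one process halts, no batches are produced at all.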

There's also proliferation happening at the application layer. Purpose-built L2s are becoming common — Coinbase's Base (a general-purpose chain run by a single company) and Uniswap's app-specific Unichain among them. This creates a fragmented liquidity landscape: assets split across dozens of chains with varying bridge risk profiles. Intent-based routing protocols (CoW Protocol, UniswapX) are emerging to abstract this fragmentation from end users, routing transactions across chains without requiring users to manage bridges manually.

What Would Confirm This Direction

Observable signals worth tracking: sustained growth in L2 transaction volume as a share of total Ethereum ecosystem activity; continued fee reductions as blob capacity expands through protocol upgrades; shipping of decentralized sequencer implementations by Arbitrum or Optimism; growth in L2 TVL without corresponding bridge exploit events at scale.

What Would Break It

A few scenarios would materially change the picture. A large-scale bug in a major rollup's fraud or validity proof system — one causing unrecoverable state loss — would damage trust across the category, not just the affected chain. Sustained bridge exploit frequency at 2022 levels would push institutional users toward L1-only approaches regardless of mechanism soundness. Alternatively, if a high-throughput L1 (Solana is the obvious example) captured enough developer and user activity to make Ethereum L2 scaling strategically less relevant, the rollup-centric roadmap would face competitive pressure it hasn't yet had to address seriously.

Timing Perspective

Now: L2s are live infrastructure carrying substantial volume. Fees are materially lower post-EIP-4844. If you're interacting with Ethereum-based applications, understanding which L2 you're on and what bridge risk you're taking is immediately relevant.

Next: Decentralized sequencers and expanded blob capacity — active development, not yet standard. Worth monitoring if you're tracking L2 trust models specifically.

Later: Full danksharding, which would dramatically expand L1 data availability. This is a multi-year roadmap item. Treating it as a near-term catalyst is premature.

A Boundary Note

This post explains the Layer 2 scaling mechanism. It doesn't assess any specific L2 as an investment, provide guidance on where to deploy capital, or predict which rollup architecture will dominate. The ZK vs. optimistic competition is genuinely unresolved — capable teams are building serious systems on both approaches. The mechanism works as described. What it implies for any particular decision is outside this scope.
