How Rollups Work

Rollups process transactions off Ethereum, batch them, and settle compressed data back to the base layer. This post explains the mechanism — sequencing, batching, proof submission, and what determines finality.
Lewis Jackson
CEO and Founder

Ethereum can process roughly 15–20 transactions per second on its base layer. Under sustained load, that ceiling becomes a cost problem: gas fees spike, users compete for limited block space, and during periods of high demand transactions queue behind whoever is willing to pay the most.

Rollups were designed to address this without modifying Ethereum's core security model. The basic idea is that you don't need Ethereum to execute every transaction — you just need it to verify that a batch of transactions was handled correctly and to store enough data for anyone to reconstruct the state if needed. Rollups handle execution off-chain, then report back.

That's the concept. The mechanism is more involved.

The Sequencer: First Point of Contact

Every rollup runs at least one sequencer — an entity that receives users' transactions, orders them, and executes them against the rollup's current state. Most major rollups today (Arbitrum, Optimism, Base, Scroll, zkSync) operate a single centralized sequencer controlled by the development team.

When you submit a transaction on a rollup, the sequencer is what processes it. It provides a soft confirmation — a pre-commitment to include your transaction — typically within a few seconds. This is why rollup UX feels fast. You're getting the sequencer's promise, not Ethereum's. The Ethereum-level guarantee comes later, once the batch settles.
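The soft-confirmation flow can be sketched in a few lines. This is a toy model, not any production sequencer's code; the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoftConfirmation:
    tx_hash: str
    position: int        # sequencer-assigned ordering index
    final: bool = False  # becomes True only after the batch settles on L1

@dataclass
class Sequencer:
    pending: List[str] = field(default_factory=list)

    def submit(self, tx_hash: str) -> SoftConfirmation:
        # The sequencer orders the transaction and promises inclusion
        # immediately -- this is the fast UX, not an Ethereum guarantee.
        self.pending.append(tx_hash)
        return SoftConfirmation(tx_hash=tx_hash, position=len(self.pending) - 1)

    def seal_batch(self) -> List[str]:
        # Later, pending transactions are batched and posted to L1;
        # only then do they inherit Ethereum-level finality.
        batch, self.pending = self.pending, []
        return batch
```

The gap between `submit` returning and `seal_batch` landing on L1 is exactly the window in which you hold the sequencer's promise rather than Ethereum's.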

The centralized sequencer is a known limitation. It can technically censor transactions or go offline, both of which break the system's censorship-resistance guarantees. Force-inclusion mechanisms exist on Optimism and Arbitrum — users can submit transactions directly to the L1 smart contract if the sequencer refuses to include them — but these paths are slower and more expensive, and most users don't know they exist.

Batching and Data Compression

At regular intervals, the sequencer aggregates recent transactions into a batch. The batch contains all the information needed to reconstruct every state change: who sent what, to whom, for how much. This data gets compressed before being posted to Ethereum.

Compression matters more than it might seem. Raw Ethereum transactions carry significant overhead — signatures, addresses, amounts, all encoded in full. Rollup batches strip redundant information and pack multiple transactions together, so the fixed cost of posting data to Ethereum is shared across hundreds or thousands of transactions in the same batch.
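As a rough illustration of why this amortization works, even generic byte-level compression collapses the repeated fields (addresses, near-identical amounts) that dominate raw transfer encodings. Production rollups use domain-specific tricks instead, such as address indices and signature aggregation; the record layout below is invented purely for the example.

```python
import zlib

# Hypothetical raw encoding: each transfer repeats two full 20-byte
# addresses plus an 8-byte amount, as naive calldata would.
raw_txs = [
    bytes.fromhex("aa" * 20)              # sender (same every tx)
    + bytes.fromhex("bb" * 20)            # recipient (same every tx)
    + (1000 + i).to_bytes(8, "big")       # amount (varies slightly)
    for i in range(500)
]
raw = b"".join(raw_txs)
compressed = zlib.compress(raw, level=9)

print(f"raw={len(raw)} bytes, compressed={len(compressed)} bytes")
# Repeated structure compresses heavily; the batch pays far less per tx.
assert len(compressed) < len(raw) // 4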

Before March 2024, this data was posted to Ethereum as calldata — permanent on-chain storage, expensive to include. EIP-4844 changed this by introducing blob-carrying transactions: a new data type designed specifically for rollup batches. Blobs are stored separately from regular transaction data, retained for roughly 18 days (long enough for any dispute resolution), then pruned. The pricing is independent of calldata, and the result has been an 80–90% reduction in rollup data posting costs. That's not a minor efficiency gain — it's a structural change in what rollups cost to operate.
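Blob space is priced by its own EIP-1559-style market: a running `excess_blob_gas` counter feeds an exponential formula, so the fee climbs when blocks consume more blob gas than the target and decays back toward the 1 wei floor when they consume less. Below is a minimal Python transcription of the fee formula from the EIP-4844 specification (the constants are the values given there):

```python
MIN_BLOB_BASE_FEE = 1                    # wei per unit of blob gas
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # controls how fast the fee moves

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer Taylor-series approximation of factor * e^(numerator/denominator),
    # as specified in EIP-4844 (exact integer arithmetic, no floats).
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    # Fee floor of 1 wei when there is no excess; exponential growth above it.
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

The exponential response is the structural point: blob fees are set by blob demand alone, which is why rollup posting costs decoupled from the calldata market after Dencun.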

State Roots and the Settlement Mechanism

Alongside the transaction data, the rollup submits a state root to Ethereum. A state root is a cryptographic hash — specifically, a Merkle root — that commits to the entire state of the rollup after applying the batch. Think of it as a fingerprint for the rollup's current state.
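The fingerprint property is easy to demonstrate with a toy binary Merkle tree. Production rollups commit to state with Merkle-Patricia or sparse Merkle trees keyed by account address, so treat this as a sketch of the idea rather than the actual structure; the account encoding here is invented.

```python
import hashlib
from typing import List

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: List[bytes]) -> bytes:
    # Hash each leaf, then pairwise-hash upward until one 32-byte root remains.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

state_before = [b"alice:100", b"bob:50"]
state_after  = [b"alice:90",  b"bob:60"]   # after a 10-unit transfer

# Changing any account changes the root -- the "fingerprint" property.
assert merkle_root(state_before) != merkle_root(state_after)
```

Ethereum never stores the accounts themselves; it stores only the 32-byte root, against which anyone holding the posted batch data can check a claimed state.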

The state root gets stored in an Ethereum smart contract. This is what makes rollups different from pure sidechains: the state commitment lives on Ethereum, and Ethereum's security backs it.

What happens next depends on which type of rollup you're using.

Optimistic rollups submit the state root and assume it's correct by default. There's a challenge window — seven days on Arbitrum and Optimism — during which anyone can submit a fraud proof if they believe the sequencer posted an incorrect state root. If no valid challenge is filed within that window, the state root is finalized and withdrawals can proceed. The optimism is literal: the system trusts the sequencer unless challenged.
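A stripped-down model of that finalization rule, with the bisection game, bonds, and slashing all omitted; the names are illustrative, not any rollup's actual contract interface:

```python
from dataclasses import dataclass

CHALLENGE_WINDOW = 7 * 24 * 3600  # seven days, in seconds

@dataclass
class StateRootClaim:
    root: str
    posted_at: int
    challenged: bool = False

    def challenge(self, fraud_proof_valid: bool) -> None:
        # Anyone may challenge during the window; a successful fraud
        # proof invalidates the claim.
        if fraud_proof_valid:
            self.challenged = True

    def is_finalized(self, now: int) -> bool:
        # "Optimistic": correct by default, but finalized only once the
        # full window elapses with no successful challenge.
        return not self.challenged and now >= self.posted_at + CHALLENGE_WINDOW

claim = StateRootClaim(root="0x1234", posted_at=0)
assert not claim.is_finalized(now=3600)            # still inside the window
assert claim.is_finalized(now=CHALLENGE_WINDOW)    # window elapsed, no challenge
```

The seven-day withdrawal delay falls directly out of `is_finalized`: a withdrawal cannot execute against a state root that could still be reverted.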

ZK rollups submit the state root alongside a validity proof — a succinct cryptographic proof (a SNARK or STARK) that the state transition was computed correctly. The "zero-knowledge" in the name is largely historical; what matters here is succinctness, meaning Ethereum can verify the proof cheaply before accepting the state root. No challenge window is required because correctness is proven mathematically, not assumed.
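The contrast with the optimistic flow is that acceptance is synchronous with verification. A schematic sketch, with a stub standing in for the real SNARK/STARK verifier (the function names are invented):

```python
from typing import Callable

# verify(old_root, new_root, proof) -> bool, modeling the on-chain verifier.
Verifier = Callable[[bytes, bytes, bytes], bool]

def accept_state_root(current_root: bytes, new_root: bytes,
                      proof: bytes, verify: Verifier) -> bytes:
    # The L1 contract runs the verifier before accepting the root.
    # Rejection is immediate; there is no challenge window to wait out.
    if not verify(current_root, new_root, proof):
        raise ValueError("invalid validity proof")
    return new_root

# Stub verifier: accepts only a fixed token, in place of real proof checking.
stub_verifier: Verifier = lambda old, new, proof: proof == b"ok"

root = accept_state_root(b"\x00", b"\x01", b"ok", stub_verifier)
assert root == b"\x01"
```

In the optimistic model an invalid root sits on-chain until someone disproves it; here an invalid root never gets accepted at all.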

This difference has a practical consequence for users: withdrawing from an optimistic rollup back to Ethereum takes seven days. ZK rollup withdrawals finalize as soon as the proof verifies, which can be minutes to hours depending on proof generation time.

Where the Real Constraints Are

There are three constraints worth understanding clearly, because they're often confused.

Data availability is the hardest one. For a rollup to maintain Ethereum-level security, all transaction data must be posted to Ethereum — not just the state root. If data is withheld, no one can construct a fraud proof against an invalid state root, and ZK proofs can't be independently verified. A rollup that posts only state roots and keeps transaction data off-chain is called a validium, not a rollup. It has a weaker security model, and this distinction matters.

Sequencer centralization is the live operational risk. Centralized sequencers are fast and cheap to operate, but they're single points of failure. The fix — decentralized sequencer sets, where multiple independent actors share transaction ordering — is being built by several teams (Espresso Systems, Astria, Radius). None of these are deployed at scale on mainnet yet.

Prover compute is specific to ZK rollups. Generating validity proofs requires significant computation. This is why ZK rollup withdrawal finality, while faster in theory than the 7-day optimistic window, isn't always instantaneous in practice. Proof generation is getting faster as the hardware and software stack matures, but it's still a real cost.

What's Changing

Blob capacity is expanding. EIP-4844 introduced blobs with a conservative initial cap (a target of three and a maximum of six per block). Ethereum's danksharding roadmap is designed to scale this substantially — eventually, blob throughput should grow by orders of magnitude, which would continue reducing rollup costs as the network scales.

ZK proof generation is getting faster. The gap between optimistic and ZK rollup withdrawal finality is closing, and the gap between ZK-EVM types (custom VMs vs. full Ethereum-equivalent execution) is closing too. Type-1 ZK-EVMs — which prove the exact Ethereum execution environment rather than a custom one — would let existing Ethereum applications migrate to ZK rollups without modification. This is in active development.

The decentralized sequencer problem is getting more attention. It's not solved, but there are credible teams working on it with concrete mechanism designs.

Confirmation Signals

  • Rollup transaction volume growing or sustaining as a share of total Ethereum ecosystem activity
  • Blob fees remaining materially lower than pre-EIP-4844 calldata costs under normal conditions
  • Decentralized sequencer deployments moving from testnet to mainnet with genuine economic participation
  • ZK proof verification times consistently reaching minutes rather than hours for complex batches

Invalidation Signals

  • A proof system exploit that allows an invalid state root to pass verification
  • Data availability failure: transaction data withheld, making independent state reconstruction impossible
  • Centralized sequencer censorship causing sustained user exclusion at scale
  • Regulatory treatment of sequencer operators as money transmitters, creating compliance choke points that undermine permissionlessness

Timing

Now: Rollups are live infrastructure. Most Ethereum DeFi activity runs on rollups today. Sequencer centralization and 7-day withdrawal delays (for optimistic rollups) are active considerations, not hypotheticals.

Next: Decentralized sequencers and blob capacity expansion are engineering projects with 12–24 month horizons. Worth watching.

Later: Full danksharding and mature Type-1 ZK-EVM coverage are multi-year milestones with design dependencies still being resolved.

This post explains the rollup mechanism as it stands. It doesn't assess the relative merit of specific rollup projects or constitute guidance on where to deploy assets. The optimistic and ZK variants each get their own treatment — this post covers the architecture they share. The tracked signals live elsewhere.
