Why Some Blockchains Are Faster Than Others

Blockchain speed comes down to three architectural variables: block time, block size, and consensus overhead. Understanding how they interact explains why the speed tradeoffs are real — and what they actually cost.
Lewis Jackson
CEO and Founder

If you've heard that Solana can handle 65,000 transactions per second while Bitcoin manages roughly seven, you've probably wondered why the gap is so large. And if you've spent any time in crypto, you've likely heard that speed isn't free — that faster chains sacrifice something. That's true, but the explanation usually stays vague. "It's less decentralized" gets thrown around without explaining what that actually means in mechanistic terms.

Speed in blockchain systems comes down to a small number of architectural variables. Understanding them explains not just why some chains are faster, but why the tradeoff is real and what it actually costs.

The Three Levers That Control Throughput

Transactions per second (TPS) is the headline metric, but it's derived from two underlying variables: how often new blocks are produced, and how many transactions each block can contain. Speed up either one, and throughput goes up. The third variable — consensus overhead — is what constrains both.

Block time is the interval between new blocks being added to the chain. Bitcoin targets one block every ten minutes. Ethereum post-Merge produces a block every twelve seconds. Solana produces a block approximately every 400 milliseconds. All else equal, shorter block intervals mean transactions confirm faster and more throughput is possible.

Block size (or the equivalent in non-UTXO architectures) caps how many transactions fit in a single block. Bitcoin's block size is famously constrained — this was the central issue in the block size wars of 2017, which ultimately produced Bitcoin Cash as a split that raised the limit. Ethereum doesn't use a fixed transaction count limit but caps the computational work (gas) per block, which effectively limits how many operations can be executed per slot.
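As a concrete illustration of the gas cap, here's a back-of-the-envelope count of how many simple transfers fit in one block. The 30M gas limit is an illustrative figure (it has varied over time and is adjustable by validators); 21,000 is the intrinsic gas cost of a plain ETH transfer.

```python
# Ethereum caps work per block by gas, not by a transaction count.
GAS_LIMIT = 30_000_000        # illustrative block gas limit; it has changed over time
SIMPLE_TRANSFER_GAS = 21_000  # intrinsic gas cost of a plain ETH transfer

max_transfers = GAS_LIMIT // SIMPLE_TRANSFER_GAS
print(f"max simple transfers per block: {max_transfers}")
# Contract interactions cost far more gas, so real blocks hold far fewer transactions.
```

This is why Ethereum's effective "block size" moves with the mix of transactions: a block full of complex contract calls holds an order of magnitude fewer transactions than one full of plain transfers.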

So: more frequent blocks, bigger blocks, more transactions per second. That's the arithmetic. The interesting question is why every chain doesn't simply maximize both.
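That arithmetic can be sketched in a few lines. The per-block transaction counts below are rough illustrative figures chosen to match commonly cited throughput numbers, not protocol constants:

```python
def theoretical_tps(txs_per_block: float, block_time_s: float) -> float:
    """Upper-bound TPS if every block is full and produced exactly on schedule."""
    return txs_per_block / block_time_s

chains = {
    # (approx. transactions per block, block interval in seconds) -- illustrative
    "Bitcoin":  (2_500, 600),    # ~10-minute blocks
    "Ethereum": (150, 12),       # gas-limited, post-Merge slots
    "Solana":   (26_000, 0.4),   # ~400 ms slots
}

for name, (txs, interval) in chains.items():
    print(f"{name:>8}: ~{theoretical_tps(txs, interval):,.0f} TPS")
```

Real-world sustained throughput is lower for every chain, since blocks aren't always full and propagation isn't instant, but the ratios between chains come out roughly as advertised.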

Consensus Overhead Is the Binding Constraint

The reason you can't just crank up block frequency and size is that reaching consensus across a distributed network takes time and bandwidth, and it scales with the number of validators and the amount of data they're exchanging.

In Bitcoin's proof-of-work system, consensus happens through computational competition. Miners race to find a valid hash for the next block; the first to succeed broadcasts it, and the rest of the network validates and accepts it. The ten-minute target isn't arbitrary — it's calibrated to give the winning block enough time to propagate across the global network before another miner finds a competing solution. Shorten the block time significantly and you get orphaned blocks (two valid blocks found simultaneously), which create inconsistency and reduce the effective security of the chain.
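The orphan-risk argument can be made quantitative with a simple model: if blocks arrive as a Poisson process with mean interval T, and a newly found block takes t seconds to propagate across the network, the probability that a competing block appears during propagation is roughly 1 - exp(-t/T). A sketch, assuming a 10-second global propagation delay (an illustrative figure, not a measurement):

```python
import math

def orphan_probability(propagation_s: float, block_time_s: float) -> float:
    """P(a competing block is found while this one propagates), Poisson model."""
    return 1 - math.exp(-propagation_s / block_time_s)

# Assumed ~10 s propagation delay; shorter block times sharply raise orphan risk.
for block_time in (600, 60, 10):
    p = orphan_probability(10, block_time)
    print(f"block time {block_time:>4}s -> orphan risk ~{p:.1%}")
```

At ten-minute blocks the orphan risk under this model is under 2%; at ten-second blocks it climbs past 60%, which is the mechanistic reason proof-of-work block times can't simply be cranked down.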

Proof-of-stake systems, like Ethereum post-Merge, replace computational competition with validator committees. A randomly selected committee of validators attests to each new block; once enough attestations accumulate, the block is finalized. This is faster than proof-of-work because there's no race — you're waiting for network messages, not compute time. But coordination still takes real time. Ethereum's twelve-second slots are partly determined by how long it takes validator attestations to propagate across a globally distributed validator set.
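For scale, economic finality on Ethereum normally takes about two epochs of attestations, which works out to roughly 12.8 minutes:

```python
SLOT_SECONDS = 12        # one block slot
SLOTS_PER_EPOCH = 32     # attestation committees rotate per epoch
EPOCHS_TO_FINALITY = 2   # a block is typically finalized after two justified epochs

finality_s = EPOCHS_TO_FINALITY * SLOTS_PER_EPOCH * SLOT_SECONDS
print(f"time to finality: ~{finality_s / 60:.1f} minutes")
```

A transaction is *included* within seconds, but irreversibility accumulates over those epochs, which is why exchanges and bridges wait for finality rather than mere inclusion.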

Solana takes a different approach. It uses a mechanism called Proof of History — a cryptographic timestamp sequence built into the chain itself — that lets validators agree on ordering without extensive back-and-forth communication. Combined with a high-bandwidth requirement for validators (running a Solana validator requires serious hardware and a fast internet connection), this compresses consensus time dramatically. The tradeoff is that validator requirements are demanding enough to meaningfully restrict who can run one.

This is where the decentralization cost becomes concrete. It's not a vague philosophical concern. It's that raising hardware and bandwidth requirements for validators reduces the number of people and institutions willing or able to run them. Fewer validators means the network is easier to capture — either by a coordinated group, or simply by regulatory action targeting the concentrated set of participants.

The Trilemma in Practice

The "blockchain trilemma" — the claim that decentralization, security, and scalability can't be maximized simultaneously — is a rough heuristic, not a mathematical proof. But it captures something real. The architectural choices that drive speed (higher validator requirements, fewer validators, smaller committee sizes) tend to reduce the breadth of participation, which has consequences for censorship resistance and trust assumptions.

Bitcoin prioritizes decentralization and security at the cost of throughput. The design intentionally keeps node operation accessible — you can run a full node on consumer hardware — which contributes to a broad and difficult-to-coerce validator set. The cost is that base-layer Bitcoin is genuinely slow for payments.

Solana prioritizes throughput and accepts a more concentrated validator set. Its multiple outages in 2022, caused by network congestion and validator software bugs, illustrated what that concentration can mean in practice: restoring the network required coordinated restarts negotiated among major validators, something a more distributed system would not depend on.

Ethereum sits somewhere in the middle, which is partly why its scaling roadmap focused on Layer 2 rollups rather than raising base-layer capacity (the block gas limit). Increasing base-layer throughput would have required either raising validator requirements (reducing decentralization) or shortening slot times (increasing fork risk). Instead, execution was moved off-chain.

What's Changing

The most consequential shift in blockchain speed isn't happening at the base layer — it's happening one layer up. Layer 2 rollups (Arbitrum, Optimism, Base, zkSync) batch thousands of transactions off-chain and post compressed proofs or state updates to Ethereum, inheriting its security while achieving much higher throughput at lower cost. Ethereum's base layer processes the settlement; the rollup handles execution.
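The economics of batching can be sketched with hypothetical numbers (the fee figures below are made up for illustration; real L1 posting costs vary with gas prices and compression):

```python
def rollup_cost_per_tx(l1_batch_fee: float, txs_in_batch: int,
                       l2_execution_fee: float) -> float:
    """Per-transaction cost when one L1 posting is amortized across a batch."""
    return l1_batch_fee / txs_in_batch + l2_execution_fee

# Hypothetical: $50 to post a batch to L1, $0.001 of L2 execution per tx.
for batch in (100, 1_000, 10_000):
    cost = rollup_cost_per_tx(50, batch, 0.001)
    print(f"batch of {batch:>6} txs: ${cost:.4f} per tx")
```

The fixed L1 cost dominates at small batch sizes and nearly vanishes at large ones, which is why rollup fees fall as usage grows rather than rising with congestion the way base-layer fees do.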

This architectural choice separates the problem of throughput from the problem of security and decentralization. Base-layer Ethereum doesn't need to be fast because rollups handle volume; it needs to be secure and censorship-resistant, which is what its current validator distribution and hardware requirements are calibrated for.

Ethereum's roadmap also includes danksharding — a mechanism for dramatically increasing the amount of data available to rollups per block without requiring every node to store or process all of it. Early-stage implementation of this (via EIP-4844, which introduced "blobs" in March 2024) has already reduced rollup transaction costs meaningfully.
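A rough sketch of the data bandwidth blobs provide, using EIP-4844's target of three blobs per block (the per-transaction byte figure is an assumption for illustration):

```python
BLOB_BYTES = 4096 * 32           # one EIP-4844 blob: 4096 field elements x 32 bytes
TARGET_BLOBS_PER_BLOCK = 3       # post-Dencun target (the max is higher)
SLOT_SECONDS = 12

data_rate = TARGET_BLOBS_PER_BLOCK * BLOB_BYTES / SLOT_SECONDS
print(f"blob data bandwidth: ~{data_rate / 1024:.0f} KiB/s")

# Assuming ~100 bytes per compressed rollup transaction (an illustrative figure):
print(f"supports roughly {data_rate / 100:,.0f} rollup txs/s of data availability")
```

Full danksharding aims to raise the blob count substantially, multiplying this data budget without requiring every node to download all of it.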

What Would Confirm or Break the Direction

Confirmation looks like: rollup throughput continuing to grow while base-layer security metrics (validator count, geographic distribution, client diversity) remain healthy or improve; danksharding phases shipping on schedule; no base-layer outages caused by consensus failures.

Invalidation looks like: a coordinated attack or successful censorship event on a high-TPS chain that demonstrates the concrete cost of validator concentration; a rollup bridge exploit that undermines the trust assumptions of the Layer 2 model; base-layer Ethereum experiencing degraded liveness due to validator incentive problems.

Timing Perspective

The base-layer speed architecture of major chains is largely stable now. Bitcoin's ten-minute blocks aren't changing. Ethereum's twelve-second slots are unlikely to change meaningfully without significant protocol work. What's actively developing — and worth watching closely over the next twelve to twenty-four months — is the rollup ecosystem and Ethereum's data availability roadmap. The speed story in 2025 and 2026 is mostly a Layer 2 story.

Solana's high-throughput, high-requirements model is worth monitoring for two signals: sustained uptime across high-load periods (the 2022 outages were real and structural), and whether validator concentration changes as the network matures.

What This Doesn't Mean

Transaction speed is one dimension of a blockchain's design. It doesn't tell you whether a chain is decentralized enough for a given use case, whether its security model is adequate, or whether it will survive a serious adversarial scenario. Faster isn't better in the abstract — it's better for specific applications at specific tradeoffs.

This post explains the mechanism. It doesn't constitute a recommendation about which chain to build on or use.
