If you've heard that Solana can handle a theoretical 65,000 transactions per second while Bitcoin manages roughly seven, you've probably wondered why the gap is so large. And if you've spent any time in crypto, you've likely heard that speed isn't free — that faster chains sacrifice something. That's true, but the explanation usually stays vague. "It's less decentralized" gets thrown around without explaining what that actually means in mechanistic terms.
Speed in blockchain systems comes down to a small number of architectural variables. Understanding them explains not just why some chains are faster, but why the tradeoff is real and what it actually costs.
Transactions per second (TPS) is the headline metric, but it's derived from two underlying variables: how often new blocks are produced, and how many transactions each block can contain. Speed up either one, and throughput goes up. The third variable — consensus overhead — is what constrains both.
Block time is the interval between new blocks being added to the chain. Bitcoin targets one block every ten minutes. Ethereum post-Merge produces a block every twelve seconds. Solana produces a block approximately every 400 milliseconds. All else equal, shorter block intervals mean transactions confirm faster and more throughput is possible.
Block size (or the equivalent in non-UTXO architectures) caps how many transactions fit in a single block. Bitcoin's block size is famously constrained — this was the central issue in the block size wars, which culminated in 2017 with Bitcoin Cash splitting off as a fork that raised the limit. Ethereum doesn't use a fixed transaction count limit but caps the computational work (gas) per block, which effectively limits how many operations can be executed per slot.
So: more frequent blocks, bigger blocks, more transactions per second. That's the arithmetic. The interesting question is why every chain doesn't simply maximize both.
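That arithmetic can be made concrete. The sketch below computes implied TPS from block capacity and interval; the transactions-per-block figures are rough illustrative assumptions, not protocol constants (real capacity varies with transaction size and gas usage).

```python
# Back-of-the-envelope throughput arithmetic: TPS = transactions per block / block time.
# Per-block transaction counts here are illustrative assumptions, not protocol constants.

def tps(txs_per_block: float, block_time_s: float) -> float:
    """Transactions per second implied by block capacity and block interval."""
    return txs_per_block / block_time_s

# Rough, assumed figures for illustration:
bitcoin  = tps(txs_per_block=3500, block_time_s=600)   # ~10-minute blocks
ethereum = tps(txs_per_block=150,  block_time_s=12)    # 12-second slots
solana   = tps(txs_per_block=1500, block_time_s=0.4)   # ~400 ms blocks

print(f"Bitcoin:  ~{bitcoin:.1f} TPS")
print(f"Ethereum: ~{ethereum:.1f} TPS")
print(f"Solana:   ~{solana:.1f} TPS")
```

Even with generous assumptions, the two input variables alone reproduce the orders-of-magnitude gap between the chains.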
The reason you can't just crank up block frequency and size is that reaching consensus across a distributed network takes time and bandwidth, and it scales with the number of validators and the amount of data they're exchanging.
In Bitcoin's proof-of-work system, consensus happens through computational competition. Miners race to find a valid hash for the next block; the first to succeed broadcasts it, and the rest of the network validates and accepts it. The ten-minute target isn't arbitrary — it's calibrated to give the winning block enough time to propagate across the global network before another miner finds a competing solution. Shorten the block time significantly and you get orphaned blocks (two valid blocks found simultaneously), which create inconsistency and reduce the effective security of the chain.
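The orphan-risk intuition can be sketched with a standard simplified model: if block discovery is a Poisson process with rate 1/block_time, the chance that a competing block is found while the winner is still propagating is roughly 1 − e^(−propagation_time / block_time). The 10-second propagation figure below is an illustrative assumption, not a measured value.

```python
import math

def orphan_probability(propagation_s: float, block_time_s: float) -> float:
    """Approximate probability that a competing block is found while the
    winning block propagates, assuming block discovery is a Poisson
    process with rate 1/block_time. A simplified model, not a measurement."""
    return 1.0 - math.exp(-propagation_s / block_time_s)

# Assume ~10 s of worldwide propagation delay (illustrative):
for block_time in (600, 60, 10):
    risk = orphan_probability(10, block_time)
    print(f"block time {block_time:>4} s -> orphan risk ~{risk:.1%}")
```

At ten-minute blocks the risk is under 2%; shrink the interval toward the propagation delay itself and simultaneous competing blocks become the norm rather than the exception, which is the calibration argument in the paragraph above.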
Proof-of-stake systems, like Ethereum post-Merge, replace computational competition with validator committees. A randomly selected committee of validators attests to each new block; once enough attestations accumulate, the block is finalized. This is faster than proof-of-work because there's no race — you're waiting for network messages, not compute time. But coordination still takes real time. Ethereum's twelve-second slots are partly determined by how long it takes validator attestations to propagate across a globally distributed validator set.
Solana takes a different approach. It uses a mechanism called Proof of History — a cryptographic timestamp sequence built into the chain itself — that lets validators agree on ordering without extensive back-and-forth communication. Combined with a high-bandwidth requirement for validators (running a Solana validator requires serious hardware and a fast internet connection), this compresses consensus time dramatically. The tradeoff is that validator requirements are demanding enough to meaningfully restrict who can run one.
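The core idea of a verifiable timestamp sequence can be illustrated with a toy hash chain: each state is the hash of the previous state, so mixing an event into the chain at position i provably orders it before anything mixed in later, and anyone can verify the ordering by recomputing the chain. This is a simplified sketch of the concept only, not Solana's actual implementation.

```python
import hashlib

def poh_chain(events: list[bytes], ticks_between: int = 3) -> list[tuple[bytes, str]]:
    """Toy Proof-of-History sketch: a sequential SHA-256 chain where each
    state depends on the previous one. An event mixed in at position i is
    verifiably ordered before one mixed in at position j > i, because
    producing state j requires having already computed state i.
    Illustrative only -- not Solana's implementation."""
    state = hashlib.sha256(b"genesis").digest()
    log = []
    for event in events:
        for _ in range(ticks_between):
            # Empty "ticks" advance the clock even when no events arrive.
            state = hashlib.sha256(state).digest()
        # Mix the event into the running chain, committing to its position.
        state = hashlib.sha256(state + event).digest()
        log.append((event, state.hex()))
    return log

for event, digest in poh_chain([b"tx-a", b"tx-b", b"tx-c"]):
    print(event.decode(), "->", digest[:16])
```

Because the chain is sequential by construction, re-running it from genesis reproduces the same digests — which is what lets validators check an ordering claim without a round of voting on it.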
This is where the decentralization cost becomes concrete. It's not a vague philosophical concern. It's that raising hardware and bandwidth requirements for validators reduces the number of people and institutions willing or able to run them. Fewer validators means the network is easier to capture — either by a coordinated group, or simply by regulatory action targeting the concentrated set of participants.
The "blockchain trilemma" — the claim that decentralization, security, and scalability can't be maximized simultaneously — is a rough heuristic, not a mathematical proof. But it captures something real. The architectural choices that drive speed (higher validator requirements, fewer validators, smaller committee sizes) tend to reduce the breadth of participation, which has consequences for censorship resistance and trust assumptions.
Bitcoin prioritizes decentralization and security at the cost of throughput. The design intentionally keeps node operation accessible — you can run a full node on consumer hardware — which contributes to a broad and difficult-to-coerce validator set. The cost is that base-layer Bitcoin is genuinely slow for payments.
Solana prioritizes throughput and accepts a more concentrated validator set. Its multiple outages in 2022, caused by network congestion and validator software bugs, illustrated what that concentration can mean in practice: coordinated restarts required communication among major validators, coordination that wouldn't be necessary in a more distributed system.
Ethereum sits somewhere in the middle, which is partly why the scaling roadmap focused on Layer 2 rollups rather than raising the base-layer block size. Increasing base-layer throughput would have required either raising validator requirements (reducing decentralization) or shortening slot times (increasing orphan risk). Instead, execution was moved off-chain.
The most consequential shift in blockchain speed isn't happening at the base layer — it's happening one layer up. Layer 2 rollups (Arbitrum, Optimism, Base, zkSync) batch thousands of transactions off-chain and post compressed proofs or state updates to Ethereum, inheriting its security while achieving much higher throughput at lower cost. Ethereum's base layer processes the settlement; the rollup handles execution.
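The economics of batching follow directly: one base-layer posting cost is amortized across every transaction in the batch. The dollar figures below are assumptions chosen for illustration, not current fee data.

```python
# Sketch of rollup cost amortization: a single L1 posting fee is split
# across every transaction in the batch. All figures are illustrative
# assumptions, not real fee data.

def cost_per_tx(l1_posting_cost: float, batch_size: int,
                per_tx_data_cost: float) -> float:
    """Effective base-layer cost borne by each rolled-up transaction."""
    return l1_posting_cost / batch_size + per_tx_data_cost

# Assume $50 to post a batch to L1 and $0.01 of data cost per transaction:
for n in (10, 100, 1000):
    print(f"batch of {n:>4}: ~${cost_per_tx(50.0, n, 0.01):.3f} per tx")
```

The fixed cost shrinks toward zero as batches grow, leaving the per-transaction data cost as the floor — which is why EIP-4844's cheaper blob data mattered so much for rollup fees.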
This architectural choice separates the problem of throughput from the problem of security and decentralization. Base-layer Ethereum doesn't need to be fast because rollups handle volume; it needs to be secure and censorship-resistant, which is what its current validator distribution and hardware requirements are calibrated for.
Ethereum's roadmap also includes danksharding — a mechanism for dramatically increasing the amount of data available to rollups per block without requiring every node to store or process all of it. Early-stage implementation of this (via EIP-4844, which introduced "blobs" in March 2024) has already reduced rollup transaction costs meaningfully.
Confirmation looks like: rollup throughput continuing to grow while base-layer security metrics (validator count, geographic distribution, client diversity) remain healthy or improve; danksharding phases shipping on schedule; no base-layer outages caused by consensus failures.
Invalidation looks like: a coordinated attack or successful censorship event on a high-TPS chain that demonstrates the concrete cost of validator concentration; a rollup bridge exploit that undermines the trust assumptions of the Layer 2 model; base-layer Ethereum experiencing degraded liveness due to validator incentive problems.
The base-layer speed architecture of major chains is largely stable now. Bitcoin's ten-minute blocks aren't changing. Ethereum's twelve-second slots are unlikely to change meaningfully without significant protocol work. What's actively developing — and worth watching closely over the next twelve to twenty-four months — is the rollup ecosystem and Ethereum's data availability roadmap. The speed story in 2025 and 2026 is mostly a Layer 2 story.
Solana's high-throughput, high-requirements model is worth monitoring for two signals: sustained uptime across high-load periods (the 2022 outages were real and structural), and whether validator concentration changes as the network matures.
Transaction speed is one dimension of a blockchain's design. It doesn't tell you whether a chain is decentralized enough for a given use case, whether its security model is adequate, or whether it will survive a serious adversarial scenario. Faster isn't better in the abstract — it's better for specific applications at specific tradeoffs.
This post explains the mechanism. It doesn't constitute a recommendation about which chain to build on or use.