The Velocity Threshold

Refresh Interval Economics for Fragmented AMM Settlement Networks

Instant settlement promises safer, faster financial systems — but it comes with a hidden cost.

When liquidity must be pre-funded and fragmented across many pools, total capital requirements can explode. This research introduces the Velocity Threshold: the refresh interval a system must sustain in order to make fragmented, hub-based liquidity architectures economically viable.

The result is a hard engineering constraint — not a narrative, not a belief — that governs whether bridge-asset architectures scale or collapse.

Introduction

Why This Research Exists

Most discussions around AMMs, CBDCs, and bridge assets assume that “speed” is always good and that liquidity can simply be recycled faster as systems mature.
But no one has formally answered a more basic question, the one the Jackson Liquidity Framework was created to address for tokenised finance and CBDC settlement:

How fast does liquidity actually need to move to prevent fragmentation from becoming unfinanceable?

The Velocity Threshold paper exists to answer that question quantitatively, by extending the Jackson Liquidity Framework from a single-pool model to a network-level system with many pools and seams.

The Problem No One Had Solved

Fragmentation Doesn’t Scale Gracefully

If every asset must trade directly with every other asset, the number of liquidity pools grows quadratically. Hub architectures reduce this to linear growth — but linear growth is still expensive when liquidity must be pre-funded in parallel.

Even with a hub, thousands of pools still require thousands of buffers.
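
As a back-of-the-envelope sketch (the asset count and per-pool buffer below are hypothetical, not figures from the paper), the pool-count arithmetic looks like this:

```python
def full_mesh_pools(n_assets: int) -> int:
    """Direct pairwise trading: every asset needs a pool against every other asset."""
    return n_assets * (n_assets - 1) // 2  # quadratic growth

def hub_pools(n_assets: int) -> int:
    """Hub architecture: every asset pairs only with the bridge asset."""
    return n_assets  # linear growth

n = 1_000                     # hypothetical number of settlement assets
buffer_per_pool = 5_000_000   # hypothetical pre-funded buffer per pool

print(full_mesh_pools(n))              # 499500 pools without a hub
print(hub_pools(n))                    # 1000 pools with a hub
print(hub_pools(n) * buffer_per_pool)  # 5000000000 still pre-funded in parallel
```

Even the linear case leaves a large amount of capital sitting idle in parallel, which is why the rate at which that capital can be reused matters.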

Prior research showed that fragmentation increases required liquidity dramatically.
What it did not show was whether velocity could realistically offset that cost — and under what conditions. That missing piece is what this paper resolves.

The Core Dynamic

What Actually Scales With Time — and What Doesn’t

The Jackson Liquidity Framework defines total liquidity requirements as the maximum of several constraints:
- slippage tolerance (size-based)
- directional flow Value-at-Risk
- intraday peak depletion
- Basel-aligned buffers
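
In code terms, the binding requirement is whichever of these constraints is largest. The function below is a minimal sketch with illustrative names, not the paper's notation:

```python
def required_pool_liquidity(slippage_requirement: float,
                            directional_flow_var: float,
                            intraday_peak_depletion: float,
                            basel_buffer: float) -> float:
    """A pool must hold enough to satisfy its most demanding constraint."""
    return max(slippage_requirement,
               directional_flow_var,
               intraday_peak_depletion,
               basel_buffer)
```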

Critically, not all of these respond to velocity in the same way.

Time-based buffers inherit a √T scaling from variance and stability conditions. Slippage constraints do not.
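
Written out under a standard diffusion-style assumption, and collapsing the time-based constraints into a single term (the notation is illustrative, not the paper's): with refresh interval T, flow volatility σ, scale constant k, and a size-based slippage requirement L_slip,

```latex
L_{\mathrm{buffer}}(T) \approx k\,\sigma\sqrt{T},
\qquad
L_{\mathrm{required}}(T) = \max\bigl(L_{\mathrm{slip}},\; k\,\sigma\sqrt{T}\bigr)
```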

This creates two fundamentally different regimes:
- a buffer-limited regime, where faster refresh materially reduces required liquidity
- a slippage-limited regime, where speed no longer helps
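
A minimal sketch of that split, with purely illustrative parameters:

```python
import math

def binding_regime(T: float, slippage_floor: float, k: float, sigma: float) -> tuple[float, str]:
    """Return the binding liquidity requirement at refresh interval T and which regime binds.

    slippage_floor is size-based and independent of T; the time-based buffer scales as sqrt(T).
    """
    time_buffer = k * sigma * math.sqrt(T)
    if time_buffer > slippage_floor:
        return time_buffer, "buffer-limited: faster refresh reduces the requirement"
    return slippage_floor, "slippage-limited: faster refresh no longer helps"

# Illustrative values only.
print(binding_regime(T=24.0, slippage_floor=10.0, k=1.0, sigma=4.0))
print(binding_regime(T=1.0,  slippage_floor=10.0, k=1.0, sigma=4.0))
```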

This distinction turns out to be decisive.

The Hard Limit

Why Velocity Stops Helping

Speed does not eliminate slippage.

When trade-size constraints dominate, liquidity requirements hit a floor that velocity cannot compress further.

The paper shows explicitly that once the slippage constraint binds, faster refresh yields diminishing — and eventually zero — benefit.
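
A hypothetical sweep over shrinking refresh intervals makes the flattening concrete (illustrative numbers, not results from the paper):

```python
import math

SLIPPAGE_FLOOR = 10.0   # size-based requirement, unaffected by velocity (assumed)
K_SIGMA = 4.0           # scale of the sqrt(T) time-based buffer (assumed)

for T in (24.0, 6.0, 1.0, 0.25, 0.05):
    requirement = max(SLIPPAGE_FLOOR, K_SIGMA * math.sqrt(T))
    print(f"T = {T:5.2f} h  ->  required liquidity = {requirement:.1f}")
# The column flattens at 10.0: once the slippage constraint binds,
# shrinking T further buys no reduction at all.
```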

This is the critical reason why the claim that “speed alone eradicates fragmentation” is directionally true but mechanically incomplete.

Any viable architecture must answer both:

- how fast liquidity can be reused
- and how trade size is controlled or netted

What This Research Changes

The Velocity Threshold reframes several common assumptions:

- Fragmentation is not merely inefficient — it is mathematically constrained
- Speed is not universally helpful — it only helps in specific regimes
- Bridge-asset economics cannot be inferred from adoption alone