Why Smart Contract Audits Matter

An audit badge is often treated as a signal that a protocol is safe. It isn't. This post explains what smart contract audits actually do, where their limits are, and what's changing with formal verification and competitive auditing platforms.
Lewis Jackson
CEO and Founder

An audit badge is often treated as a signal that a protocol is safe. It isn't. The DAO was audited. Euler Finance, which lost $197 million in 2023, had multiple audits. Nomad's bridge — the $190 million one — had also been audited; the vulnerability was introduced by a configuration change made after the audit.

This doesn't mean audits are useless. It means most people misunderstand what audits do. Once you understand the mechanism, an audit badge tells you something specific — just not what most people assume.

What an Audit Actually Is

A smart contract audit is a structured review of code by security specialists, looking for vulnerabilities before that code is deployed to mainnet. The core process combines manual code review with automated analysis tools.

Auditors aren't just reading code hoping to notice something suspicious. They work through a taxonomy of known vulnerability classes: reentrancy attacks, access control failures, integer overflow and underflow, unchecked external calls, price oracle manipulation, flash loan attack surfaces, and more. Each category has specific patterns auditors are trained to recognize.
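The best-known of these classes is reentrancy: a contract makes an external call before updating its own state, so the callee can re-enter the contract while the stale state is still in place. A minimal Python sketch (illustrative only — real contracts are Solidity, and the class and method names here are invented) shows the pattern:

```python
# Python sketch of the reentrancy pattern. A vulnerable vault pays out
# via an external callback BEFORE zeroing the caller's balance, so a
# malicious callback can re-enter withdraw() and drain repeatedly.

class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total_sent = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, callback):
        amount = self.balances.get(user, 0)
        if amount > 0:
            callback(amount)           # external call happens first...
            self.balances[user] = 0    # ...state is zeroed only after

    def _send(self, amount):
        self.total_sent += amount


class Attacker:
    def __init__(self, vault, depth):
        self.vault = vault
        self.depth = depth

    def receive(self, amount):
        self.vault._send(amount)
        if self.depth > 0:             # re-enter before the balance resets
            self.depth -= 1
            self.vault.withdraw("attacker", self.receive)


vault = VulnerableVault()
vault.deposit("attacker", 100)
attacker = Attacker(vault, depth=2)
vault.withdraw("attacker", attacker.receive)
print(vault.total_sent)  # 300: three withdrawals from a single 100 deposit
```

The fix is the checks-effects-interactions ordering auditors look for: zero the balance before making the external call.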

Automated tools complement the manual work. Slither and Mythril handle static analysis — examining the code's structure without running it. Echidna does fuzzing: instead of reviewing what the code does under expected conditions, it generates thousands of random or edge-case inputs looking for states that violate the contract's invariants. Fuzzing surfaces bugs that manual review misses because no human thinks to try every permutation.
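A toy version of that fuzzing loop fits in a few lines of Python. The `fee` function and its rounding bug are hypothetical, but the shape is the same as what Echidna does: state a property, hammer it with random inputs, collect the counterexamples:

```python
import random

# Hypothetical fee function with an edge-case bug: integer division
# rounds a 3% fee down to zero for any amount under 34 units.
def fee(amount):
    return amount * 3 // 100  # intended: 3% fee

def invariant(amount):
    # Property: every positive transfer pays a non-zero fee.
    return amount <= 0 or fee(amount) > 0

# Fuzz loop: throw random amounts at the invariant, keep the failures.
random.seed(0)
failures = [a for a in (random.randint(1, 1_000) for _ in range(10_000))
            if not invariant(a)]
print(f"{len(failures)} counterexamples found, all under 34 units")
```

A reviewer sanity-checking the fee with round numbers like 1,000 would never hit this; the fuzzer stumbles into it almost immediately.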

The output is a report with findings sorted by severity: Critical (funds immediately at risk), High (significant vulnerability), Medium, Low, and Informational. A clean audit means no critical or high findings went unresolved. It does not mean no vulnerabilities exist.

Where the Constraints Are

Here's the important structural limitation: auditors are reviewing a snapshot of code at a point in time.

If the protocol deploys a new module after the audit, that module isn't covered. If a configuration parameter changes after the original deployment — as happened with Nomad — the audit doesn't catch it. Scope is defined by what's submitted, and the submitted snapshot isn't necessarily the live system.

There's also a knowledge problem. Auditors excel at catching known vulnerability classes. They've seen reentrancy before; they know where to look. But novel attack vectors — especially emergent behavior that only appears when a protocol interacts with multiple other live protocols simultaneously — are genuinely hard to anticipate. Static code review can't perfectly model every possible runtime interaction across a composable ecosystem. That's not auditor failure. It's a property of complex, interconnected systems.

Auditor quality also varies enormously. Trail of Bits, OpenZeppelin, ChainSecurity, Cantina, Spearbit — these firms have deep research practices, documented track records, and public methodologies. A two-week engagement from a lesser-known shop costs less and finds fewer things. Both technically qualify the protocol as "audited."

Finally: time matters. A four-week engagement finds more bugs than a two-week one. Budget drives scope; scope drives coverage.

What's Actually Changing

Formal verification is a different category of assurance. Rather than reviewing code for patterns, formal verification uses mathematical proofs to guarantee that code satisfies specified properties under all possible inputs. Tools like Certora's Prover and Halmos are seeing adoption at Aave, Compound, and a handful of other protocols with meaningful TVL. The limitation is that specifications have to be written correctly — you can prove code meets a spec while the spec itself is incomplete — but for critical code paths like token accounting or liquidation logic, formal verification catches things no human reviewer will find.
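The difference from testing is coverage: a proof holds for all inputs, not just the ones you sampled. As a rough intuition, here is a Python sketch that exhaustively checks a conservation property over a small bounded domain — real provers like Certora's reason symbolically over the full 256-bit value space, and the `transfer` function here is invented for illustration:

```python
from itertools import product

# Bounded "specification check" sketch. Real provers work symbolically
# over all 2**256 values; here we exhaustively enumerate a tiny domain.

def transfer(balances, src, dst, amount):
    if balances[src] < amount:
        return None  # insufficient balance: transfer fails
    new = dict(balances)
    new[src] -= amount
    new[dst] += amount
    return new

# Property: a successful transfer never changes total supply.
for a, b, amount in product(range(8), repeat=3):
    before = {"alice": a, "bob": b}
    after = transfer(before, "alice", "bob", amount)
    if after is not None:
        assert sum(after.values()) == sum(before.values()), (a, b, amount)

print("supply conservation holds on the bounded domain")
```

The specification caveat applies here too: this proves supply conservation and nothing else. If the spec omits a property — say, that failed transfers leave balances untouched — a bug on that path sails through verification.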

Competitive auditing platforms changed the incentive structure. Code4rena and Sherlock crowdsource reviews to dozens of independent researchers competing for a reward pool. More eyes on the same codebase means more bugs found. Sherlock also offers coverage: if an audited protocol is exploited through a covered vulnerability, it pays out. That aligns auditor incentives with security outcomes, not just report delivery.

Bug bounties extend scrutiny after deployment. Immunefi has processed over $100 million in bounty payouts, with top programs paying up to $10 million for critical findings. White-hat researchers have continuous incentive to probe live code because the reward is real.

What Would Confirm This Is Working

A declining hack rate per dollar of TVL in well-audited protocols over time. Major exploits concentrating consistently in unaudited or post-audit-modified code. Formal verification reaching standard adoption above meaningful TVL thresholds, rather than remaining an expensive edge-case practice.

What Would Indicate Audits Are Insufficient

Novel attack vectors continuing to bypass well-audited code at high rates. Formal verification failing to scale beyond flagship protocols. Bug bounty programs on Immunefi consistently surfacing, at meaningful scale, critical flaws that pre-deployment audits missed.

The current picture is mixed. Most large exploits target unaudited code or vulnerabilities introduced after audits. But a meaningful number of well-audited protocols have still been compromised — typically through composability interactions that are genuinely hard to model before deployment.

Timing

Now: Audits from reputable firms are standard practice for protocols above meaningful TVL. The badge tells you something — primarily that known vulnerability classes were checked by someone accountable. The question is always audit scope and auditor quality.

Next: Formal verification adoption is the forward signal to watch. If it expands from a handful of flagship protocols to a broader standard above certain TVL thresholds, that changes what "audited" means structurally.

Later: Fully formally-verified, end-to-end proven protocol stacks remain theoretical for complex, composable systems. Whether tooling and economic incentives converge on that is a multi-year open question.

Boundary

This explains the audit mechanism and its limits. It's not an endorsement of any security firm, a recommendation to use any audited protocol, or advice to avoid any unaudited one. Audit status is one input into a risk assessment.

A protocol can hold three audit certificates and still get exploited. An unaudited contract can run safely for years. Audits reduce known, documented risk. They don't eliminate it, and they're only as good as the scope of what was reviewed.
