Charles Guillemet says artificial intelligence is fundamentally transforming the threat landscape for digital asset security, making vulnerability discovery and exploitation accessible to a far wider pool of attackers.
Artificial intelligence is drastically lowering the cost and technical barrier to executing cyberattacks against cryptocurrency platforms, according to Ledger chief technology officer Charles Guillemet, who warns that the industry faces a structural shift in its threat landscape that existing security practices are not equipped to handle. In an interview with CoinDesk published on 5 April 2026, Guillemet said that AI tools have made finding and exploiting smart contract vulnerabilities 'really, really easy,' adding bluntly: 'The cost is going down to zero.'
The warning comes as total crypto losses from hacks and exploits in 2026 have already surpassed $2.17 billion, according to blockchain security firm Chainalysis — a figure that puts the year on pace to rival 2022's record $3.8 billion in stolen funds. The most prominent recent incident, the $285 million exploit of Solana-based derivatives exchange Drift Protocol on 1 April, underscored Guillemet's central argument: the attack exploited governance and operational vulnerabilities rather than novel cryptographic weaknesses, the precise category of flaws that AI tools are best equipped to discover.
How AI Changes the Attack Economics
Guillemet's concern is not that AI enables entirely new categories of attack but that it dramatically reduces the resources required to execute existing ones. Historically, exploiting a DeFi protocol required deep expertise in Solidity or Rust, familiarity with EVM or SVM internals, and often weeks of manual code review to identify exploitable logic flaws. Large language models and specialised code analysis tools have compressed that timeline to hours or even minutes.
Security researchers at Trail of Bits have documented cases where AI-assisted fuzzing tools identified critical vulnerabilities in audited smart contracts within 30 minutes — flaws that human auditors had missed during multi-week review engagements. The implication is that the asymmetry between attackers and defenders, already tilted in favour of the former, is widening further. 'The attacker only needs to find one vulnerability,' Guillemet noted. 'The defender needs to secure every line of code, every configuration parameter, every governance process.'
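The core idea behind invariant-based fuzzing can be sketched in a few lines. The toy vault below is entirely hypothetical, not a real contract, and carries a deliberate solvency bug: withdrawals deduct a flat fee from the vault's reserves but not from the caller's recorded balance. A randomised input sequence with an invariant check surfaces the flaw almost immediately; production tools of the kind Trail of Bits builds operate on real bytecode with far more sophisticated input generation.

```python
import random

class ToyVault:
    """Hypothetical vault with a deliberate solvency bug for illustration."""
    FEE = 5  # flat fee per withdrawal

    def __init__(self):
        self.balances = {}
        self.reserves = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.reserves += amount

    def withdraw(self, user, amount):
        if self.balances.get(user, 0) >= amount:
            # bug: the fee drains reserves but is never charged to the user
            self.reserves -= amount + self.FEE
            self.balances[user] -= amount

def fuzz(seed=0, rounds=1000):
    """Run random deposit/withdraw sequences and check a solvency invariant."""
    rng = random.Random(seed)
    vault = ToyVault()
    for step in range(rounds):
        user = rng.choice(["a", "b", "c"])
        if rng.random() < 0.5:
            vault.deposit(user, rng.randint(1, 100))
        else:
            vault.withdraw(user, rng.randint(1, 100))
        # invariant: reserves must always cover the sum of user balances
        if vault.reserves < sum(vault.balances.values()):
            return step  # counterexample found at this step
    return None  # no violation within the budget

violation_step = fuzz()
```

The invariant breaks on the first successful withdrawal, which is why even crude random fuzzing finds this class of bug so quickly once the right invariant is stated.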
The economics are stark. A sophisticated smart contract audit from a top-tier firm typically costs between $200,000 and $500,000 and takes four to eight weeks. An AI-powered scanning tool can cover the same codebase in a fraction of the time at negligible cost — and the same tools are available to attackers. Immunefi, the leading bug bounty platform for DeFi protocols, reported a 340 per cent increase in vulnerability submissions in Q1 2026 compared to the same period last year, a surge it attributes partly to researchers using AI tools to accelerate discovery.
The Drift Exploit as Case Study
The Drift Protocol hack illustrates the evolving threat pattern. Investigators, including Drift's own incident response team, traced the attack to a North Korean-linked operation that had been active for six months prior to the exploit. The attackers did not break any cryptographic primitives. Instead, they exploited a multisig wallet migration in which the signing threshold had been lowered from 3-of-5 to 2-of-5, without a timelock, weeks before the attack, and manipulated price oracle feeds on a newly launched, illiquid lending market.
These are precisely the types of governance and configuration vulnerabilities that AI tools excel at identifying. Automated analysis can scan governance proposals, multisig configurations, and oracle dependencies across hundreds of protocols simultaneously, flagging anomalies that would take human analysts days to catalogue. The concern, shared by multiple security firms interviewed for this article, is that state-sponsored groups are already deploying such tools at scale.
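A simplified sketch of what such automated configuration scanning might look like follows. The protocol metadata, field names, and risk thresholds are all hypothetical, not drawn from any real feed or firm's methodology; the point is only that these checks are trivially mechanisable across hundreds of protocols at once.

```python
# Hypothetical protocol metadata; names and fields are illustrative only.
protocols = [
    {"name": "ExampleSwap", "signers": 5, "threshold": 3,
     "timelock_hours": 48, "oracle_liquidity_usd": 40_000_000},
    {"name": "ExampleLend", "signers": 5, "threshold": 2,
     "timelock_hours": 0, "oracle_liquidity_usd": 150_000},
]

def flag_risks(p):
    """Return a list of governance/configuration red flags for one protocol."""
    risks = []
    if p["threshold"] / p["signers"] <= 0.5:
        risks.append("low multisig threshold")
    if p["timelock_hours"] == 0:
        risks.append("no timelock on governance changes")
    if p["oracle_liquidity_usd"] < 1_000_000:
        risks.append("thin oracle market, manipulation risk")
    return risks

report = {p["name"]: flag_risks(p) for p in protocols}
```

Against these toy thresholds, the second entry trips all three checks, roughly the combination present in the Drift incident: a weakened multisig, no timelock, and an illiquid oracle market.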
Mitchell Amador, founder of Immunefi, said his firm has observed 'a clear sophistication upgrade' in exploit attempts over the past six months. 'The reconnaissance phase is getting shorter, the attack vectors are more creative, and the speed from vulnerability discovery to exploit deployment has accelerated significantly,' Amador said.
AI-Generated Code Compounds the Problem
The threat extends beyond vulnerability discovery. An increasing proportion of smart contract code deployed to production blockchains is now written or substantially assisted by AI coding tools, including GitHub Copilot, Cursor, and specialised Solidity assistants. While these tools accelerate development, they also introduce subtle vulnerabilities that human reviewers may not catch, particularly in edge cases involving complex financial logic.
A February 2026 study by blockchain security firm Halborn found that AI-generated Solidity code contained exploitable vulnerabilities at a rate 2.4 times higher than human-written code of equivalent complexity. The vulnerabilities were not random bugs but systematic patterns — incorrect reentrancy guards, missing access controls, and flawed arithmetic in fee calculations — that reflected gaps in the training data used by the AI models.
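One of the arithmetic patterns the study describes, flawed fee calculation, is easy to illustrate with a hedged toy example (figures and function names are invented for illustration). Integer floor division lets small transfers round the fee down to zero, so an attacker could split a large transfer into many fee-free pieces; ceiling division closes that gap.

```python
FEE_BPS = 30  # 0.3% fee, expressed in basis points

def naive_fee(amount):
    # flawed: floor division means amounts under 334 units pay no fee at all
    return amount * FEE_BPS // 10_000

def safer_fee(amount):
    # ceiling division: every nonzero transfer pays at least 1 unit of fee
    return -(-amount * FEE_BPS // 10_000)

naive_fee(333)     # returns 0: transfers can be split to dodge the fee
safer_fee(333)     # returns 1
naive_fee(10_000)  # returns 30
```

The pattern is systematic rather than random, which matches the study's finding: models trained on code that mostly handles large round numbers reproduce rounding behaviour that only fails at the edges.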
Guillemet described this as a 'double-edged sword': AI simultaneously makes it easier to write code and easier to find the flaws in that code. The net effect, he argued, favours attackers because the volume of deployed vulnerable code is growing faster than the industry's capacity to audit it.
The Case for Hardware Security and Formal Verification
Guillemet's prescription centres on two defensive strategies: formal verification and hardware-based security. Formal verification — the mathematical proof that a piece of code behaves exactly as specified under all possible inputs — remains the gold standard for smart contract security but is rarely employed due to its cost and complexity. Guillemet argued that AI could also be applied to verification, potentially making it economically viable for a broader range of protocols.
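The core idea of formal verification, proving a property for all inputs rather than testing a sample, can be shown with a deliberately simplified sketch. Real verifiers reason symbolically over unbounded domains using SMT solvers or proof assistants; the toy below merely enumerates a small bounded domain to check that a model transfer function preserves total supply. All names are illustrative.

```python
from itertools import product

def transfer(balances, src, dst, amount):
    """Pure model of a token transfer; returns new balances, or None on failure."""
    if balances[src] < amount or src == dst:
        return None
    new = dict(balances)
    new[src] -= amount
    new[dst] += amount
    return new

def verify_supply_preserved(max_balance=20):
    """Exhaustively check the supply-preservation property on a bounded domain."""
    for a, b, amt in product(range(max_balance), repeat=3):
        before = {"src": a, "dst": b}
        after = transfer(before, "src", "dst", amt)
        if after is not None and sum(after.values()) != sum(before.values()):
            return False  # counterexample: supply was created or destroyed
    return True

verified = verify_supply_preserved()
```

Exhaustive enumeration only scales to toy domains, which is exactly why the symbolic techniques Guillemet refers to matter: they deliver the same all-inputs guarantee without enumerating anything, at the cost Guillemet hopes AI can bring down.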
For individual users, Guillemet's recommendation is more direct: move assets into cold storage secured by hardware wallets. 'You can't trust most of the systems that you use,' he said, noting that even well-audited protocols can be compromised through governance manipulation or supply chain attacks. Hardware wallets, which keep private keys offline and require physical confirmation for transactions, remain the most resistant defence against remote exploitation.
Industry Response and the Road Ahead
The crypto security industry is responding, albeit unevenly. OpenZeppelin has launched an AI-powered monitoring service that provides real-time anomaly detection for DeFi protocols. Chainalysis has integrated machine learning models into its transaction surveillance tools to flag suspicious fund flows more quickly. And several major protocols, including Aave and Uniswap, have increased their bug bounty caps to attract the growing pool of AI-augmented security researchers.
Whether these measures are sufficient remains an open question. The fundamental challenge, as Guillemet framed it, is that AI is a general-purpose tool that benefits both sides of the security equation — but the structural advantages accrue disproportionately to attackers. With crypto losses already past $2 billion and the year not yet half over, the industry's ability to adapt its security practices to the AI era may determine whether decentralised finance can sustain the institutional trust it has spent years building.