AI Isn’t “Automatically Hacking” DeFi — But It Is Changing the Threat Model

The recent wave of protocol exploits has fueled a growing narrative that AI is now automatically hacking smart contracts. That framing is exaggerated, but it is not entirely wrong. The real shift is not that AI can independently discover highly sophisticated exploit chains across complex DeFi systems. What AI currently changes is scale.

Large numbers of deployed contracts can now be scanned rapidly for simple, historically exploitable mistakes. Missing access controls, unsafe admin functions, broken initialization logic, exposed privileged roles, weak accounting assumptions, and flawed architectural decisions have existed since the early days of smart contracts. The difference now is that deployed code is no longer protected by obscurity or low visibility.
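To make that concrete, here is a minimal sketch of the kind of shallow, pattern-based scan that becomes cheap to run against every verified contract. Everything in it is invented for illustration: the Vault contract, the flag_unguarded heuristic, and the guard list are toy stand-ins, and real AI-assisted tooling is considerably more capable. The point is only that a mistake like an unguarded initializer is mechanically findable.

```python
import re

# Toy Solidity source standing in for a verified contract fetched from an
# explorer. The unguarded initialize() and setFeeRecipient() are the kind of
# old, simple mistakes described above. All names here are hypothetical.
SAMPLE_SOURCE = """
contract Vault {
    address public owner;
    address public feeRecipient;

    function initialize(address _owner) external {   // anyone can call
        owner = _owner;
    }

    function setFeeRecipient(address r) external {   // no access control
        feeRecipient = r;
    }

    function withdraw(uint256 amount) external {
        require(msg.sender == owner, "not owner");
        payable(msg.sender).transfer(amount);
    }
}
"""

# Crude heuristic: an external/public function is suspicious if its header
# carries no access-control modifier and its body never checks msg.sender.
FUNC_RE = re.compile(
    r"function\s+(\w+)\s*\([^)]*\)\s*([^{]*)\{(.*?)\n    \}",
    re.DOTALL,
)
GUARDS = ("onlyOwner", "onlyRole", "onlyAdmin")

def flag_unguarded(source: str) -> list[str]:
    flagged = []
    for name, header, body in FUNC_RE.findall(source):
        visible = "external" in header or "public" in header
        guarded = any(g in header for g in GUARDS) or "msg.sender" in body
        if visible and not guarded:
            flagged.append(name)
    return flagged

print(flag_unguarded(SAMPLE_SOURCE))  # ['initialize', 'setFeeRecipient']
```

Nothing in this sketch is intelligent. The point is that when even heuristics this crude can be run against every contract holding value, obscurity stops being a defense.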

If meaningful value is deployed on-chain, it will eventually be scanned.

Many recent exploits are not evidence of attackers suddenly becoming dramatically more advanced; they are old mistakes being identified and exploited far faster than before. A weak admin path or missing validation check that might once have gone unnoticed for months can now be surfaced almost immediately through AI-assisted analysis.

This changes the baseline threat model for protocols. Vulnerability discovery has become cheaper, faster, and far more scalable. Low-quality deployments are increasingly exposed the moment they go live, and the risk is greatest for large codebases, where the probability of an overlooked flaw naturally increases.

At the same time, AI has not replaced deep security research or experienced auditors. The most dangerous vulnerabilities in DeFi are rarely isolated coding mistakes. They emerge from architecture, economic assumptions, cross-contract interactions, privilege boundaries, upgrade mechanisms, and edge-case behavior.

That is why audits are becoming more important, not less.

AI can help surface suspicious patterns and automate large-scale scanning, but understanding whether a protocol is fundamentally secure still requires rigorous human analysis. The role of security review is shifting from simply catching obvious bugs to validating entire systems under increasingly hostile conditions.
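To illustrate the difference in kind, here is a hedged sketch of the system-level validation described above: property-based fuzzing of a toy constant-product pool against an invariant an attacker must not be able to break. ToyPool, its 0.3% fee, and fuzz_invariant are all invented for this example; a real protocol's economic model is far richer, which is exactly where human analysis comes in.

```python
import random

class ToyPool:
    """Toy constant-product AMM with a 0.3% swap fee, illustrative only."""
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    def swap_x_for_y(self, dx: float) -> float:
        dx_after_fee = dx * 0.997
        dy = self.y * dx_after_fee / (self.x + dx_after_fee)
        self.x += dx
        self.y -= dy
        return dy

    def swap_y_for_x(self, dy: float) -> float:
        dy_after_fee = dy * 0.997
        dx = self.x * dy_after_fee / (self.y + dy_after_fee)
        self.y += dy
        self.x -= dx
        return dx

def fuzz_invariant(trials: int = 10_000) -> None:
    """System-level property: the product of reserves must never decrease,
    no matter what sequence of swaps a hostile caller performs."""
    rng = random.Random(0)
    pool = ToyPool(1_000.0, 1_000.0)
    k = pool.x * pool.y
    for _ in range(trials):
        amount = rng.uniform(0.01, 100.0)
        if rng.random() < 0.5:
            pool.swap_x_for_y(amount)
        else:
            pool.swap_y_for_x(amount)
        new_k = pool.x * pool.y
        assert new_k >= k - 1e-6, "invariant violated: pool can be drained"
        k = new_k

fuzz_invariant()
print("constant-product invariant held across 10,000 hostile swaps")
```

A pattern scanner can find the unguarded function; deciding whether the invariant itself is the right one for a given protocol is a judgment that still requires a human reviewer.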

AI-assisted tooling raises the floor for attackers by making shallow vulnerability discovery easier and more scalable. Security teams and auditors now have to raise the floor for protocols in response.

The industry is entering a phase where “good enough” security may no longer survive exposure to automated analysis. Smart contract security is no longer just about whether code functions correctly. It is increasingly about whether the assumptions behind that code can survive constant machine-assisted scrutiny at scale.
