Saturday, September 13

Opinion by: Jason Jiang, chief business officer of CertiK

Since its inception, the decentralized finance (DeFi) ecosystem has been defined by innovation, from decentralized exchanges (DEXs) to lending and borrowing protocols, stablecoins and more. 

The latest innovation is DeFAI, or DeFi powered by artificial intelligence. Within DeFAI, autonomous bots trained on large data sets can significantly improve efficiency by executing trades, managing risk and participating in governance protocols. 

As is the case with all blockchain-based innovations, however, DeFAI may also introduce new attack vectors that the crypto community must address to improve user safety. Addressing them requires a close look at the vulnerabilities that accompany this innovation.

DeFAI agents are a step beyond traditional smart contracts 

Within blockchain, most smart contracts have traditionally operated on simple logic. For example, “If X happens, then Y will execute.” Due to their inherent transparency, such smart contracts can be audited and verified. 
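The deterministic "if X, then Y" pattern can be illustrated with a minimal sketch. The threshold and function names below are hypothetical, not taken from any real protocol:

```python
# Minimal sketch of deterministic, contract-style logic: the same inputs
# always produce the same, externally verifiable outcome.
# (Names and thresholds are illustrative only.)

LIQUIDATION_THRESHOLD = 1.5  # collateral ratio below which a loan is liquidated

def check_loan(collateral_value: float, debt_value: float) -> str:
    """If the collateral ratio falls below the threshold (X), liquidate (Y)."""
    if debt_value == 0:
        return "healthy"
    ratio = collateral_value / debt_value
    return "liquidate" if ratio < LIQUIDATION_THRESHOLD else "healthy"
```

Because anyone can re-run the same inputs and confirm the same outcome, logic of this kind can be audited and verified line by line.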

DeFAI, on the other hand, pivots from the traditional smart contract structure: its AI agents are inherently probabilistic. They make decisions based on evolving data sets, prior inputs and context, interpreting signals and adapting rather than reacting to a predetermined event. While this adaptability enables sophisticated behavior, its inherent uncertainty also creates a breeding ground for errors and exploits.
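The contrast can be sketched with a toy agent whose action depends on evolving context rather than a single predetermined trigger. The weighting, thresholds and signals here are hypothetical stand-ins for a learned model:

```python
# Illustrative sketch (hypothetical weights and signals): an agent whose
# decision depends on recent price context plus an off-chain sentiment
# signal. The same latest price can yield different actions under
# different histories, which is what makes such agents hard to audit.

def agent_decide(price_history: list[float], sentiment: float) -> str:
    """Score recent momentum plus a sentiment signal and pick an action."""
    if len(price_history) < 2:
        return "hold"
    momentum = price_history[-1] / price_history[0] - 1.0
    score = 0.7 * momentum + 0.3 * sentiment  # learned-style weighting
    if score > 0.02:
        return "buy"
    if score < -0.02:
        return "sell"
    return "hold"

# Same latest price (103), different contexts, different actions:
agent_decide([100, 101, 103], sentiment=0.1)   # rising context
agent_decide([110, 104, 103], sentiment=-0.2)  # falling context
```

Unlike the deterministic contract rule, there is no single line of logic an auditor can point to and verify; the outcome emerges from the interaction of inputs, history and weights.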

Thus far, early iterations of AI-powered trading bots in decentralized protocols have signaled the shift to DeFAI. For instance, users or decentralized autonomous organizations (DAOs) could implement a bot to scan for specific market patterns and execute trades in seconds. As innovative as this may sound, most of these bots run on Web2 infrastructure, reintroducing a centralized point of failure into Web3.

DeFAI creates new attack surfaces

The industry should not get caught up in the excitement of incorporating AI into decentralized protocols when this shift can create new attack surfaces that it’s not prepared for. Bad actors could exploit AI agents through model manipulation, data poisoning or adversarial input attacks. 

This is exemplified by an AI agent trained to identify arbitrage opportunities between DEXs. 


Threat actors could tamper with its input data, making the agent execute unprofitable trades or even drain funds from a liquidity pool. Moreover, a compromised agent could mislead an entire protocol into believing false information or serve as a starting point for larger attacks. 
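A minimal sketch shows how a poisoned input can flip such an agent's decision. The function, spread threshold and prices below are hypothetical:

```python
# Illustrative sketch (hypothetical names): an agent that decides whether
# to arbitrage between two DEXs based on reported prices. If an attacker
# can poison one price feed, the agent willingly executes a losing trade.

def should_arbitrage(price_dex_a: float, price_dex_b: float,
                     min_spread: float = 0.01) -> bool:
    """Trade if the relative spread between venues exceeds min_spread."""
    spread = abs(price_dex_a - price_dex_b) / min(price_dex_a, price_dex_b)
    return spread > min_spread

# Honest feeds: prices nearly agree, so no trade.
honest = should_arbitrage(100.0, 100.2)    # 0.2% spread -> no trade

# Poisoned feed: attacker inflates DEX B's reported price, so the agent
# buys on A and tries to sell on B at a price that does not really exist.
poisoned = should_arbitrage(100.0, 110.0)  # 10% spread -> trade
```

The agent's logic is unchanged in both cases; only the data it trusts has been corrupted, which is what makes input-level attacks so hard to catch with code audits alone.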

These risks are compounded by the fact that most AI agents are currently black boxes. Even their developers may not be able to fully explain how the agents they create reach their decisions.

These features are the opposite of Web3’s ethos, which was built on transparency and verifiability. 

Security is a shared responsibility

With these risks in mind, some may voice concerns about the implications of DeFAI, or even call for a pause on its development altogether. DeFAI is, however, likely to continue to evolve and see greater adoption. What is needed is to adapt the industry's approach to security accordingly. Ecosystems involving DeFAI will likely require a shared security model, in which developers, users and third-party auditors determine the best means of maintaining security and mitigating risks.

AI agents must be treated like any other piece of onchain infrastructure: with skepticism and scrutiny. This entails rigorously auditing their code logic, simulating worst-case scenarios and even using red-team exercises to expose attack vectors before malicious actors can exploit them. Moreover, the industry must develop standards for transparency, such as open-source models or documentation. 
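One way to make the red-team idea concrete is to fuzz an agent's decision function with adversarial inputs and assert that safety invariants hold. The agent, its clamping logic and the 10% invariant below are all hypothetical:

```python
# Hedged sketch of a red-team-style stress test: feed an agent's decision
# function extreme and pathological inputs and check safety invariants.
# (The agent and the invariant are illustrative, not a real protocol's.)
import random

def agent_position_size(signal: float, balance: float) -> float:
    """Toy agent: size a trade proportionally to a bounded signal."""
    clamped = max(-1.0, min(1.0, signal))  # defensive input clamping
    return clamped * 0.1 * balance         # never risk more than 10%

def red_team(trials: int = 1000) -> None:
    rng = random.Random(42)  # fixed seed so failures are reproducible
    balance = 1_000.0
    for _ in range(trials):
        # Adversarial inputs: huge, infinite, signed-zero and tiny signals.
        signal = rng.choice(
            [rng.uniform(-1e9, 1e9), float("inf"), -0.0, 1e-12]
        )
        size = agent_position_size(signal, balance)
        # Safety invariant: position never exceeds 10% of balance.
        assert abs(size) <= 0.1 * balance, f"invariant violated for {signal}"

red_team()
```

Tests like this do not prove an agent safe, but they surface the worst-case behaviors that a code review of the "happy path" would miss.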

Regardless of how the industry views this shift, DeFAI introduces new questions about trust in decentralized systems. When AI agents can autonomously hold assets, interact with smart contracts and vote on governance proposals, trust is no longer just about verifying logic; it’s about verifying intent. This calls for exploring how users can ensure that an agent’s objectives align with their short-term and long-term goals. 

Toward secure, transparent intelligence

The path forward should be one of cross-disciplinary solutions. Cryptographic techniques like zero-knowledge proofs could help verify the integrity of AI actions, and onchain attestation frameworks could help trace the origins of decisions. Finally, AI-assisted audit tools could evaluate agents as comprehensively as developers currently review smart contract code. 

The reality remains, however, that the industry is not yet there. For now, rigorous auditing, transparency and stress testing remain the best defense. Users considering participating in DeFAI protocols should verify that the protocols embrace these principles in the AI logic that drives them. 

Securing the future of AI innovation

DeFAI is not inherently unsafe but differs from most of the current Web3 infrastructure. The speed of its adoption risks outpacing the security frameworks the industry currently relies on. As the crypto industry continues to learn — often the hard way — innovation without security is a recipe for disaster. 

Given that AI agents will soon be able to act on users’ behalf, hold their assets and shape protocols, the industry must confront the fact that every line of AI logic is still code, and every line of code can be exploited. 

If the adoption of DeFAI is to take place without compromising safety, it must be designed with security and transparency at its core. Anything less invites the very outcomes decentralization was meant to prevent. 


This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
