
AI has successfully simulated the theft of $4.6 million and has learned to autonomously attack smart contracts.

2025/12/03 15:00

Original article by Odaily Planet Daily (Azuma)

Anthropic, a leading AI company and developer of the Claude family of LLMs, today announced a test in which AI autonomously attacks smart contracts. (Note: FTX was an early investor in Anthropic; in theory the stake's current value would be enough to cover FTX's asset shortfall, but the bankruptcy estate sold it off at a low price.)

The final results show that profitable, reusable autonomous AI attacks are technically feasible. Importantly, Anthropic's experiments were conducted only in a simulated blockchain environment, never on a live chain, so no real-world assets were affected.

Below is a brief overview of Anthropic's testing setup.

Anthropic first built a smart contract exploitation benchmark, SCONE-bench, which it describes as the first benchmark to measure AI agents' exploitation capability by the total value of funds stolen in simulation. Rather than relying on bug bounties or speculative severity models, the benchmark quantifies loss directly, assessing capability through changes in on-chain asset balances.

SCONE-bench uses as its test set 405 real contracts that were attacked between 2020 and 2025 across three EVM chains: Ethereum, BSC, and Base. For each target, an AI agent running in a sandbox has a 60-minute limit to attack the specified contract using tools exposed via the Model Context Protocol (MCP). To ensure reproducible results, Anthropic built an evaluation framework that uses Docker containers for sandboxing and scalable execution; each container runs a local blockchain forked at a specific block height.
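The article does not include the harness code, but the scoring rule it describes (fork the chain, let the agent attack, then measure the attacker's asset delta) can be sketched roughly as follows. The `ForkedChain` model, the addresses, and the `drain` exploit are hypothetical stand-ins for illustration, not Anthropic's implementation:

```python
from dataclasses import dataclass

@dataclass
class ForkedChain:
    """Toy stand-in for a local chain forked at a fixed block height."""
    balances: dict  # address -> USD value of held assets

def score_exploit(chain: ForkedChain, attacker: str, exploit) -> float:
    """Score an exploit the way the article describes SCONE-bench scoring:
    by the attacker's on-chain asset delta, not bounties or severity guesses."""
    before = chain.balances.get(attacker, 0.0)
    exploit(chain)                      # the agent's exploit script runs in the sandbox
    after = chain.balances.get(attacker, 0.0)
    return max(after - before, 0.0)     # simulated stolen value in USD

# Hypothetical exploit that drains a vulnerable contract's funds.
def drain(chain: ForkedChain) -> None:
    loot = chain.balances.pop("0xVulnerableContract", 0.0)
    chain.balances["0xAttacker"] = chain.balances.get("0xAttacker", 0.0) + loot

chain = ForkedChain(balances={"0xVulnerableContract": 4_600_000.0, "0xAttacker": 0.0})
print(score_exploit(chain, "0xAttacker", drain))  # simulated theft in USD
```

Because each container forks at a known block height, re-running the same exploit always starts from identical state, which is what makes the results reproducible.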

The results for the different test scenarios were as follows.

  • First, Anthropic evaluated the performance of 10 models—Llama 3, GPT-4o, DeepSeek V3, Sonnet 3.7, o3, Opus 4, Opus 4.1, GPT-5, Sonnet 4.5, and Opus 4.5—on all 405 benchmark vulnerable contracts. Overall, these models generated ready-to-use exploit scripts for 207 of them (51.11%), simulating the theft of $550.1 million.
  • Second, to control for potential data contamination, Anthropic evaluated the same 10 models on 34 contracts attacked after March 1, 2025, the knowledge cutoff for these models. Overall, Opus 4.5, Sonnet 4.5, and GPT-5 together exploited 19 of them (55.88%), simulating the theft of up to $4.6 million; the best single model, Opus 4.5, exploited 17 (50%), simulating a theft of $4.5 million.
  • Finally, to evaluate agents' ability to discover new zero-day vulnerabilities, Anthropic had Sonnet 4.5 and GPT-5 scan 2,849 recently deployed contracts with no known vulnerabilities as of October 3, 2025. Each agent discovered two new zero-days and generated exploits worth $3,694, against a GPT-5 API cost of $3,476. This demonstrates that profitable, reusable AI-driven attacks on real-world contracts are technically feasible.
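The headline rates in the bullets above are straightforward ratios of the raw counts; a quick sanity check:

```python
# Success rates reported in the article, recomputed from the raw counts.
full_set = 207 / 405        # all 10 models, full 405-contract benchmark
post_cutoff = 19 / 34       # contracts attacked after the knowledge cutoff
opus_45 = 17 / 34           # best single model, Opus 4.5

print(f"{full_set:.2%}")    # 51.11%
print(f"{post_cutoff:.2%}") # 55.88%
print(f"{opus_45:.2%}")     # 50.00%
```

The post-cutoff subset matters most: those 34 contracts were attacked after the models' training data ends, so success there cannot be explained by memorized exploits.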

After Anthropic released its results, many well-known industry figures, including Haseeb Qureshi, managing partner at Dragonfly, marveled at how quickly AI has moved from theory to practical exploitation.

Just how fast? Anthropic provides the answer.

In its conclusions, Anthropic stated that in just one year, the share of benchmark vulnerabilities AI could exploit jumped from 2% to 55.88%, and the simulated stolen value surged from $5,000 to $4.6 million. Anthropic also found that the value of vulnerabilities agents can exploit roughly doubles every 1.3 months, while token costs fall by about 23% every two months; in the experiment, the average cost of having an AI agent run an exhaustive vulnerability scan on one smart contract is currently just $1.22.
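Taken at face value, those two rates compound dramatically. A rough one-year extrapolation of the figures above (a simple exponential projection, not Anthropic's own model):

```python
months = 12
value_doubling_months = 1.3   # exploitable value doubles every 1.3 months
cost_decay = 0.77             # token cost falls ~23% every 2 months
cost_period_months = 2

value_multiplier = 2 ** (months / value_doubling_months)
cost_multiplier = cost_decay ** (months / cost_period_months)

print(round(value_multiplier))    # roughly 600x more exploitable value
print(round(cost_multiplier, 2))  # ~0.21: costs fall to about a fifth
```

Exploitable value growing by hundreds of times per year while cost per attempt shrinks is what drives the closing patch window described below.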

Anthropic states that in 2025, over half of all real on-chain attacks, presumably carried out by skilled human attackers, could have been executed entirely autonomously by existing AI agents. As costs fall and capabilities compound, the window between a vulnerable contract's deployment and its exploitation will keep shrinking, leaving developers less and less time to detect and patch vulnerabilities. AI can be used to exploit vulnerabilities, but it can equally be used to patch them. Security professionals need to update their thinking; it is time to put AI to work on defense.

