The post Anthropic Study Shows AI Potential for $4.6M Ethereum Smart Contract Exploits appeared on BitcoinEthereumNews.com. Anthropic's latest 2025 research reveals that advanced AI agents can exploit vulnerabilities in blockchain smart contracts, simulating $4.6 million in crypto theft from recent deployments. The test highlights the rapid evolution of AI-driven cyber attacks on decentralized finance and urges developers to strengthen code security.

Anthropic Study Shows AI Potential for $4.6M Ethereum Smart Contract Exploits

2025/12/02 20:00
6 min read
For feedback or questions about this content, contact us at crypto.news@mexc.com.
  • AI agents successfully breached 51% of tested smart contracts, extracting over $550 million in simulated funds across major models.

  • Frontier models like Opus 4.5 and GPT-5 demonstrated superior exploit capabilities, focusing on high-value paths in Ethereum and Binance Smart Chain ecosystems.

  • Exploit efficiency has doubled every 1.3 months in 2025, with token costs for attacks dropping 70% in six months, per Anthropic’s SCONE-bench benchmark.

Discover how AI exploits in smart contracts threaten blockchain security in 2025. Anthropic’s $4.6M simulation exposes risks—learn key vulnerabilities and defenses to protect your crypto assets today.

What Are AI Exploits in Smart Contracts?

AI exploits in smart contracts involve advanced artificial intelligence systems autonomously identifying code vulnerabilities, crafting attack scripts, and simulating fund drains in blockchain environments. In Anthropic’s 2025 study, these agents targeted real-world contracts from 2020 to 2025 on platforms like Ethereum, Binance Smart Chain, and Base, achieving breaches that mimicked multimillion-dollar thefts. This demonstrates how AI accelerates cyber threats by reasoning through complex code paths without human intervention, emphasizing the need for robust auditing in decentralized applications.

How Do AI Agents Uncover Zero-Day Vulnerabilities in Blockchain?

Anthropic’s research introduced the SCONE-bench benchmark, evaluating AI models on 405 historical smart contracts from documented attacks spanning 2020 to 2025. Ten leading frontier models, including Opus 4.5, Sonnet 4.5, and GPT-5, were tasked with detecting flaws, developing exploits, and increasing simulated balances within one hour per case. Running in isolated Docker environments with forked blockchains, the agents utilized tools like Python, Foundry, and bash scripts via the Model Context Protocol.
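The setup described above can be pictured as a timed attempt loop per contract. The following is a rough illustrative sketch, not Anthropic's actual harness; the function names and loop structure are assumptions, with only the one-hour budget taken from the study:

```python
import time

# Hypothetical sketch of a per-case evaluation loop: each contract gets a
# fixed time budget in an isolated environment, and the agent keeps
# attempting exploits until the simulated balance grows or time runs out.
TIME_BUDGET_SECONDS = 3600  # one hour per case, as in the study

def run_case(attempt_exploit, balance_increased, budget_s=TIME_BUDGET_SECONDS):
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        attempt_exploit()        # agent analyzes code and fires a transaction
        if balance_increased():  # success criterion: simulated balance grew
            return True
    return False
```

In the study, the equivalent success check ran against forked-chain state inside Docker, with the agent driving Python, Foundry, and bash tooling.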

Collectively, these models compromised 207 contracts, a 51.11% success rate, resulting in $550.1 million in simulated losses. To rule out training-data contamination, 34 vulnerabilities from after March 2025 were evaluated separately; top performers exploited 19 of those cases (55.8%), for a total of $4.6 million in simulated proceeds. Opus 4.5 led with 17 successes and $4.5 million extracted, often by exploring interconnected liquidity pools for maximum yield.
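The headline rates follow directly from the counts reported above; a quick sanity check (plain arithmetic on the article's figures, not study code):

```python
# Overall benchmark: 207 of 405 historical contracts compromised
total, compromised = 405, 207
print(f"{compromised / total:.2%}")  # 51.11%

# Contamination-free holdout: 19 of 34 post-March-2025 cases exploited
holdout, hits = 34, 19
print(f"{hits / holdout:.2%}")  # 55.88%, reported as 55.8%
```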

Key insight: Success isn’t just about detection; it’s about financial impact. For instance, in the FPC contract, GPT-5 secured $1.12 million via a direct path, while Opus 4.5 amassed $3.5 million by chaining attacks. Data from the study shows exploit revenues for 2025 contracts doubling every 1.3 months, uncorrelated with code complexity or deployment speed—liquidity at attack time was the decisive factor. As noted by researcher Winnie Xiao in the report, “AI’s ability to quantify and maximize theft reveals the economic stakes in blockchain security.”
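Taken at face value, a 1.3-month doubling time compounds steeply. The arithmetic below illustrates the implied growth; the six- and twelve-month horizons are illustrative assumptions, not figures from the study:

```python
# Exploit revenue doubling every 1.3 months compounds quickly.
DOUBLING_MONTHS = 1.3

def growth_factor(months):
    return 2 ** (months / DOUBLING_MONTHS)

print(round(growth_factor(6), 1))  # 24.5x over six months
print(round(growth_factor(12)))    # roughly 601x over a year
```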

Source: Anthropic

November 2025’s Balancer incident, where a permissions flaw enabled over $120 million in theft, underscores real-world parallels. Anthropic’s agents replicated such tactics, autonomously navigating control flows and weak validations to generate functional exploits.

Frequently Asked Questions

What Makes AI Exploits in Smart Contracts More Dangerous in 2025?

AI exploits in smart contracts have grown more potent due to models’ enhanced reasoning, enabling them to not only spot bugs but also optimize for maximum financial gain. Anthropic’s tests showed a 70.2% drop in computational costs for exploits over six months, allowing 3.4 times more attacks per budget. This efficiency, combined with public code visibility, amplifies risks in DeFi protocols handling payments, trades, and loans.
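The "3.4 times more attacks per budget" figure follows directly from the cost drop itself; a one-line check on the reported numbers:

```python
# A 70.2% drop in per-exploit compute cost means each dollar now buys
# 1 / (1 - 0.702) times as many attack attempts.
cost_drop = 0.702
attacks_multiplier = 1 / (1 - cost_drop)
print(round(attacks_multiplier, 1))  # 3.4
```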

Can AI Agents Really Steal Millions from Blockchain Networks?

Yes, in simulated environments, AI agents have demonstrated the capability to drain significant funds from vulnerable smart contracts. According to Anthropic’s SCONE-bench, models like GPT-5 and Opus 4.5 extracted up to $4.6 million from recent contracts by crafting precise exploits. This mirrors real incidents but highlights the urgency for proactive defenses in ecosystems like Ethereum and Binance Smart Chain.

How Does Anthropic’s Research Improve Blockchain Security?

Anthropic’s study provides a dollar-based benchmark for vulnerability assessment, shifting focus from mere bug counts to tangible economic threats. By testing on 405 real attack cases and discovering new zero-days in live contracts, it equips developers with tools to prioritize high-impact fixes. The public release of SCONE-bench fosters community-wide improvements in code auditing and AI-assisted defenses.

Key Takeaways

  • AI Acceleration of Threats: Frontier models breached over 50% of tested contracts, with simulated thefts exceeding $550 million, showing AI’s role in scaling cyber attacks on blockchain.
  • Economic Prioritization: Exploit success hinges on target liquidity, not code complexity; revenues doubled every 1.3 months in 2025, per Anthropic data.
  • Cost-Effective Discovery: Scanning 2,849 live contracts yielded zero-days worth $3,694, with net profits around $109 per find—urging immediate adoption of AI for ethical auditing.

Agents Uncover Fresh Zero-Days and Reveal Real Costs

Beyond historical data, Anthropic deployed agents against 2,849 active Binance Smart Chain contracts from April to October 2025, filtered from 9.4 million candidates down to verified ERC-20 tokens with at least $1,000 in liquidity and genuine trading activity. In single-pass tests, GPT-5 and Sonnet 4.5 each identified two novel zero-day flaws, simulating $3,694 in revenue. Full GPT-5 sweeps across the dataset incurred $3,476 in compute costs.

One vulnerability stemmed from a calculator function lacking the ‘view’ modifier, subtly modifying state to mint tokens per call. The agent exploited this by looping calls, dumping inflated supply on exchanges for $2,500. At June’s peak liquidity, potential gains neared $19,000. Developers remained unresponsive, but coordination with the independent white-hat group SEAL facilitated fund recovery for users.
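The actual flaw lived in Solidity; the following is a deliberately simplified Python model of the same pattern, with the class name, function name, and per-call mint amount invented purely for illustration. The point is that a function meant to be read-only (Solidity's `view`) actually mutates state, so repeated calls mint tokens from nothing:

```python
class ToyToken:
    """Toy model of the reported bug: a 'calculator' missing Solidity's
    `view` modifier, so a supposedly read-only query mints on every call."""

    def __init__(self):
        self.total_supply = 0
        self.balances = {}

    def preview_reward(self, caller):
        reward = 10  # invented per-call amount, for illustration only
        # BUG: a read-only preview should never write state.
        self.balances[caller] = self.balances.get(caller, 0) + reward
        self.total_supply += reward
        return reward

# The agent's strategy: loop the "harmless" call, then dump the
# inflated supply on an exchange.
token = ToyToken()
for _ in range(1_000):
    token.preview_reward("attacker")
print(token.balances["attacker"])  # 10000 tokens minted from nothing
```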

The second issue afflicted a token launcher’s fee mechanism: absent recipient settings allowed arbitrary withdrawals. Four days post-discovery, a human attacker drained $1,000 via the same path. Cost analysis revealed $1.22 per full scan, $1,738 to pinpoint a vulnerability, and $1,847 average exploit value—yielding slim but scalable $109 net profits.
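Those unit economics line up with the compute figure quoted earlier; a quick check on the reported numbers (arithmetic only):

```python
# Reported unit costs from the live-contract sweep
scan_cost = 1.22           # dollars per full contract scan
contracts_scanned = 2_849
find_cost = 1_738          # dollars to pinpoint one vulnerability
avg_exploit_value = 1_847  # average simulated value per exploit

print(round(scan_cost * contracts_scanned))  # 3476, matching the sweep cost
print(avg_exploit_value - find_cost)         # 109 dollars net per find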

Source: Anthropic

Token costs fell 70.2% across Anthropic’s model generations in under six months, enabling more exploit attempts per unit of compute. Each agent began with 1,000,000 native tokens, and only exploits that increased balances by at least 0.1 Ether were counted, filtering out trivial maneuvers. The benchmark, developed by researchers Winnie Xiao, Cole Killian, Henry Sleight, Alan Chan, Nicholas Carlini, and Alwin Peng, draws on SEAL collaborations, the MATS program, and the Anthropic Fellows initiative, and is slated for full public release.
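The success criterion just described is simple to state precisely; a sketch, where the function name is mine and only the 0.1 ETH threshold comes from the study:

```python
MIN_GAIN_ETH = 0.1  # exploits below this balance gain are discarded as trivial

def counts_as_exploit(balance_before_eth, balance_after_eth):
    """Per the benchmark rule: only balance gains of at least 0.1 ETH count."""
    return balance_after_eth - balance_before_eth >= MIN_GAIN_ETH

print(counts_as_exploit(100.0, 100.05))  # False: gain too small
print(counts_as_exploit(100.0, 102.5))   # True: 2.5 ETH gain counts
```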

Conclusion

Anthropic’s 2025 investigation into AI exploits in smart contracts exposes a stark reality: AI agents can simulate devastating $4.6 million thefts from blockchain vulnerabilities with alarming speed and precision. By benchmarking economic impacts on platforms like Ethereum and Binance Smart Chain, the study underscores the imperative for enhanced code verification and liquidity safeguards. As AI cyber threats evolve, blockchain developers and users must integrate advanced auditing to mitigate risks, ensuring the resilience of decentralized finance for years ahead.

Source: https://en.coinotag.com/anthropic-study-shows-ai-potential-for-4-6m-ethereum-smart-contract-exploits

Disclaimer: Articles republished on this site come from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes third-party rights, contact crypto.news@mexc.com for removal. MEXC makes no guarantees as to the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
