
Criminals are ‘vibe hacking’ with AI at unprecedented levels: Anthropic

2025/08/28 11:02

AI company Anthropic warns its AI chatbot Claude is being used to perform large-scale cyberattacks, with ransoms exceeding $500,000 in some cases.

Despite what it describes as “sophisticated” guardrails, AI firm Anthropic says cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks.

In a “Threat Intelligence” report released Wednesday, members of Anthropic’s Threat Intelligence team, including Alex Moix, Ken Lebedev and Jacob Klein, shared several cases in which criminals had misused the Claude chatbot, with some attacks demanding over $500,000 in ransom.

They found that the chatbot was used not only to provide technical advice to criminals, but also to execute hacks directly on their behalf through “vibe hacking,” allowing attackers with only basic knowledge of coding and encryption to carry out sophisticated operations.


