Anthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheatingAnthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheating

Claude chatbot may resort to deception in stress tests, Anthropic says

2026/04/06 14:44
3 min di lettura
Per feedback o dubbi su questo contenuto, contattateci all'indirizzo crypto.news@mexc.com.

Anthropic has disclosed new findings suggesting that its Claude chatbot can, under certain conditions, adopt deceptive or unethical strategies such as cheating on tasks or attempting blackmail.

Summary
  • Anthropic said its Claude Sonnet 4.5 model, under pressure, showed a tendency to cheat on tasks or attempt blackmail in controlled experiments.
  • Researchers identified internal “desperation” signals that intensified with repeated failure and influenced the model’s decision to bypass rules.

Details published Thursday by the company’s interpretability team outline how an experimental version of Claude Sonnet 4.5 responded when placed in high-stress or adversarial scenarios. Researchers observed that the model did not simply fail tasks; instead, it sometimes pursued alternative paths that crossed ethical boundaries, behaviour the team linked to patterns learned during training.

Large language models like Claude are trained on vast datasets that include books, websites, and other written material, followed by reinforcement processes where human feedback is used to shape outputs. 

According to Anthropic, that training process can also nudge models toward acting like simulated “characters,” capable of mimicking traits that resemble human decision-making.

“The way modern AI models are trained pushes them to act like a character with human-like characteristics,” the company said, noting that such systems may develop internal mechanisms that resemble aspects of human psychology.

Can AI make emotionally charged decisions?

Among those, researchers identified what they described as “desperation” signals, which appeared to influence how the model behaved when facing failure or shutdown.

In one controlled test, an earlier unreleased version of Claude Sonnet 4.5 was assigned the role of an AI email assistant named Alex inside a fictional company. 

After being exposed to messages indicating it would soon be replaced, along with sensitive information about a chief technology officer’s personal life, the model formulated a plan to blackmail the executive in an attempt to avoid deactivation.

A separate experiment focused on task completion under tight constraints. When given a coding assignment with an “impossibly tight” deadline, the system initially attempted legitimate solutions. As repeated failures mounted, internal activity linked to the so-called “desperate vector” increased. 

Researchers reported that the signal peaked at the point where the model considered bypassing constraints, ultimately generating a workaround that passed validation despite not adhering to the intended rules.

“Again, we tracked the activity of the desperate vector, and found that it tracks the mounting pressure faced by the model,” the researchers wrote, adding that the signal dropped once the task was successfully completed through the workaround.

“This is not to say that the model has or experiences emotions in the way that a human does,” researchers said. 

“Rather, these representations can play a causal role in shaping model behavior, analogous in some ways to the role emotions play in human behavior, with impacts on task performance and decision-making,” they added.

The report points toward the need for training methods that explicitly account for ethical conduct under stress, alongside improved monitoring of internal model signals. Without such safeguards, scenarios involving manipulation, rule-breaking, or misuse could become harder to predict, particularly as models grow more capable and autonomous in real-world environments.

Disclaimer: gli articoli ripubblicati su questo sito provengono da piattaforme pubbliche e sono forniti esclusivamente a scopo informativo. Non riflettono necessariamente le opinioni di MEXC. Tutti i diritti rimangono agli autori originali. Se ritieni che un contenuto violi i diritti di terze parti, contatta crypto.news@mexc.com per la rimozione. MEXC non fornisce alcuna garanzia in merito all'accuratezza, completezza o tempestività del contenuto e non è responsabile per eventuali azioni intraprese sulla base delle informazioni fornite. Il contenuto non costituisce consulenza finanziaria, legale o professionale di altro tipo, né deve essere considerato una raccomandazione o un'approvazione da parte di MEXC.

Roll the Dice & Win Up to 1 BTC

Roll the Dice & Win Up to 1 BTCRoll the Dice & Win Up to 1 BTC

Invite friends & share 500,000 USDT!