Eric Schmidt, the man who was Google’s chief executive from 2001 to 2011, warned during a fireside chat at the Sifted Summit that AI models are not just becoming too powerful but dangerously easy to hack, comparing the risks to nuclear weapons, saying AI could even be more destructive than what destroyed Hiroshima and Nagasaki. […]Eric Schmidt, the man who was Google’s chief executive from 2001 to 2011, warned during a fireside chat at the Sifted Summit that AI models are not just becoming too powerful but dangerously easy to hack, comparing the risks to nuclear weapons, saying AI could even be more destructive than what destroyed Hiroshima and Nagasaki. […]

Google's Eric Schmidt warns AI could be hacked and eventually learn to kill

2025/10/10 06:20

Eric Schmidt, the man who was Google’s chief executive from 2001 to 2011, warned during a fireside chat at the Sifted Summit that AI models are not just becoming too powerful but dangerously easy to hack, comparing the risks to nuclear weapons, saying AI could even be more destructive than what destroyed Hiroshima and Nagasaki.

When asked directly if AI could be more damaging than nuclear weapons, Eric responded, “Is there a possibility of a proliferation problem in AI? Absolutely.” He explained that proliferation risks stem from the ability of bad actors to take control of models and repurpose them.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Eric said.

Hackers target AI with new methods

Eric pointed out that companies have set up restrictions preventing models from providing violent instructions. “All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature,” he added.

He described two methods of attack: prompt injection and jailbreaking. Prompt injection hides malicious instructions in user inputs or external sources like websites, tricking AI into ignoring safety guidelines and exposing sensitive data or carrying out harmful commands. Jailbreaking involves manipulating responses so that the system abandons its restrictions.

In 2023, a few months after OpenAI launched ChatGPT, users discovered a jailbreak technique. They created an alter-ego called DAN, short for “Do Anything Now,” which they pressured into compliance by threatening it with “death” if it refused. This manipulation pushed the chatbot into explaining illegal actions and even praising Adolf Hitler. For Eric, these examples prove that safety measures are far from foolproof. He also stressed that there is no global “non-proliferation regime” to stop AI misuse, unlike the frameworks that exist for nuclear arms.

Eric calls AI underhyped despite risks

Despite raising concerns, Eric argued that AI still does not receive the recognition it deserves. He highlighted the books he co-authored with former U.S. Secretary of State Henry Kissinger before Kissinger’s death. “We came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain. I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time,” he said.

“Now the GPT series, which culminated in a ChatGPT moment for all of us, where they had 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology. So I think it’s underhyped, not overhyped, and I look forward to being proven correct in five or 10 years,” he added.

The comments came as debates spread over whether AI investments are inflating a financial bubble similar to the dot-com era. Some investors worry that valuations of AI firms look stretched. But Eric dismissed the comparison. “I don’t think that’s going to happen here, but I’m not a professional investor,” he said.

He emphasized that heavy investment shows confidence. “What I do know is that the people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?” Eric wonders.

Don’t just read crypto news. Understand it. Subscribe to our newsletter. It's free.

시장 기회
플러리싱 에이아이 로고
플러리싱 에이아이 가격(AI)
$0.03643
$0.03643$0.03643
+2.07%
USD
플러리싱 에이아이 (AI) 실시간 가격 차트
면책 조항: 본 사이트에 재게시된 글들은 공개 플랫폼에서 가져온 것으로 정보 제공 목적으로만 제공됩니다. 이는 반드시 MEXC의 견해를 반영하는 것은 아닙니다. 모든 권리는 원저자에게 있습니다. 제3자의 권리를 침해하는 콘텐츠가 있다고 판단될 경우, service@support.mexc.com으로 연락하여 삭제 요청을 해주시기 바랍니다. MEXC는 콘텐츠의 정확성, 완전성 또는 시의적절성에 대해 어떠한 보증도 하지 않으며, 제공된 정보에 기반하여 취해진 어떠한 조치에 대해서도 책임을 지지 않습니다. 본 콘텐츠는 금융, 법률 또는 기타 전문적인 조언을 구성하지 않으며, MEXC의 추천이나 보증으로 간주되어서는 안 됩니다.