
Can ‘AI safety’ be used as a global dominance tool?

2026/03/13 23:50
4 min read

Vitalik Buterin has shared concerns regarding the increasingly controversial uses of the theoretical concept of “AI safety” by companies and governments. 

Buterin explained on the social media platform X that leading companies in the AI space, like Anthropic, cannot unilaterally dictate which safety measures are suitable, as that leads to a system where the rules are crafted by the strongest.


For example, Anthropic recently drew praise for refusing to let the Department of War (DoW) or other government entities use its Claude models for mass surveillance or fully autonomous weaponry.

However, the company also walked back its pause-on-risk safety pledge, which had committed it to unconditionally halt training and deployment of any AI model whose capabilities outpaced the company’s ability to prove it safe, until its safety measures caught up.

Vitalik pointed out that Anthropic’s earlier criticism of competitors for learning from Claude’s outputs drew sharp backlash, particularly in China, from critics who noted that Anthropic itself trained Claude on the vast public knowledge of the internet.

Anthropic’s stated objection to open-source competitors is that they lack the necessary safety guardrails and therefore pose risks. But, Buterin asks, why does Anthropic get to decide which safety measures are suitable?

Buterin stated that Anthropic’s actions suggest a system where “rules are crafted by the strongest.”

He expressed a fear that if AI safety becomes indistinguishable from an “our company/our country deserves to run the world” mentality, it will create a more dangerous world.

He argues that if safety regulations inevitably exempt national security organizations, the regulations will become fragile. This is especially relevant as recent news confirms that major AI labs are increasingly seeking multi-billion-dollar partnerships with defense contractors to provide secure AI environments for military use.

Is restricting AI dangerous?

Years ago, Vitalik became one of the Future of Life Institute’s (FLI’s) largest donors. In 2021, the creators of Shiba Inu (SHIB) gifted him a massive supply of the token; at the peak of the dog-coin bubble, its book value exceeded $1 billion. Vitalik scrambled to donate the funds before interest faded, sending roughly $500 million in SHIB to FLI.

At the time, FLI was focused on risks like bio-threats and nuclear war. However, it has since shifted toward aggressive political action and lobbying, often pushing for regulations that Vitalik finds worrying. Specifically, he disagrees with its focus on building guardrails into AI models to make them refuse “bad stuff.”

Vitalik views these restrictions as fragile solutions because they can be easily bypassed by jailbreaking or fine-tuning.

More importantly, he fears these strategies lead to a dark place where open-source AI is banned to maintain a good-guy monopoly.

Vitalik is instead advocating for a system called defensive accelerationism (d/acc). This philosophy suggests that the best way to handle dangerous technology is to build and open-source the shields first.

He recently allocated $40 million toward projects like secure hardware, biodefense, and cybersecurity to support his ideology.

Secure hardware aims to make computer chips resistant to compromise so they cannot be repurposed for mass spying. Biodefense involves developing advanced air filtering and passive PCR testing to detect and stop pandemics early. Cybersecurity investments will improve software verifiability so that AI-driven attacks cannot easily take down critical infrastructure.


