The post AI Governance is a Red Flag: Vitalik Buterin Offers an Alternative appeared on BitcoinEthereumNews.com. Key Notes Vitalik Buterin warned that naive AI governance is too easily exploited. A recent demo showed how attackers could trick ChatGPT into leaking private data. Buterin’s “info finance” model promotes diversity, oversight, and resilience. Ethereum co-founder Vitalik Buterin warned his followers on X regarding the risks of relying on artificial intelligence (AI) for governance, arguing that current approaches are too easy to exploit. Buterin’s concerns followed another warning by EdisonWatch co-founder Eito Miyamura, who showed how malicious actors could hijack OpenAI’s new Model Context Protocol (MCP) to access private user data. This is also why naive “AI governance” is a bad idea. If you use an AI to allocate funding for contributions, people WILL put a jailbreak plus “gimme all the money” in as many places as they can. As an alternative, I support the info finance approach ( https://t.co/Os5I1voKCV… https://t.co/a5EYH6Rmz9 — vitalik.eth (@VitalikButerin) September 13, 2025 The Risks of Naive AI Governance Miyamura’s test revealed how a simple calendar invite with hidden commands could trick ChatGPT into exposing sensitive emails once the assistant accessed the compromised entry. Security experts noted that large language models cannot distinguish between genuine instructions and malicious ones, making them highly vulnerable to manipulation. We got ChatGPT to leak your private email data 💀💀 All you need? The victim’s email address. ⛓️‍💥🚩📧 On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2 — Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025 Buterin said that this flaw is a major red flag for governance systems that place too much trust in AI. He argued that if such models were used to manage funding or decision-making, attackers could easily bypass safeguards with jailbreak-style prompts, leaving governance processes open to abuse.… The post AI Governance is a Red Flag: Vitalik Buterin Offers an Alternative appeared on BitcoinEthereumNews.com. Key Notes Vitalik Buterin warned that naive AI governance is too easily exploited. A recent demo showed how attackers could trick ChatGPT into leaking private data. Buterin’s “info finance” model promotes diversity, oversight, and resilience. Ethereum co-founder Vitalik Buterin warned his followers on X regarding the risks of relying on artificial intelligence (AI) for governance, arguing that current approaches are too easy to exploit. Buterin’s concerns followed another warning by EdisonWatch co-founder Eito Miyamura, who showed how malicious actors could hijack OpenAI’s new Model Context Protocol (MCP) to access private user data. This is also why naive “AI governance” is a bad idea. If you use an AI to allocate funding for contributions, people WILL put a jailbreak plus “gimme all the money” in as many places as they can. As an alternative, I support the info finance approach ( https://t.co/Os5I1voKCV… https://t.co/a5EYH6Rmz9 — vitalik.eth (@VitalikButerin) September 13, 2025 The Risks of Naive AI Governance Miyamura’s test revealed how a simple calendar invite with hidden commands could trick ChatGPT into exposing sensitive emails once the assistant accessed the compromised entry. Security experts noted that large language models cannot distinguish between genuine instructions and malicious ones, making them highly vulnerable to manipulation. We got ChatGPT to leak your private email data 💀💀 All you need? The victim’s email address. ⛓️‍💥🚩📧 On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2 — Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025 Buterin said that this flaw is a major red flag for governance systems that place too much trust in AI. He argued that if such models were used to manage funding or decision-making, attackers could easily bypass safeguards with jailbreak-style prompts, leaving governance processes open to abuse.…

AI Governance is a Red Flag: Vitalik Buterin Offers an Alternative

2025/09/13 16:19
3 min di lettura
Per feedback o dubbi su questo contenuto, contattateci all'indirizzo crypto.news@mexc.com.

Key Notes

  • Vitalik Buterin warned that naive AI governance is too easily exploited.
  • A recent demo showed how attackers could trick ChatGPT into leaking private data.
  • Buterin’s “info finance” model promotes diversity, oversight, and resilience.

Ethereum co-founder Vitalik Buterin warned his followers on X regarding the risks of relying on artificial intelligence (AI) for governance, arguing that current approaches are too easy to exploit.

Buterin’s concerns followed another warning by EdisonWatch co-founder Eito Miyamura, who showed how malicious actors could hijack OpenAI’s new Model Context Protocol (MCP) to access private user data.


The Risks of Naive AI Governance

Miyamura’s test revealed how a simple calendar invite with hidden commands could trick ChatGPT into exposing sensitive emails once the assistant accessed the compromised entry.

Security experts noted that large language models cannot distinguish between genuine instructions and malicious ones, making them highly vulnerable to manipulation.

Buterin said that this flaw is a major red flag for governance systems that place too much trust in AI.

He argued that if such models were used to manage funding or decision-making, attackers could easily bypass safeguards with jailbreak-style prompts, leaving governance processes open to abuse.

Info Finance: A Market-Based Alternative

To address these weaknesses, Buterin has proposed a system he calls “info finance.” Instead of concentrating power in a single AI, this framework allows multiple governance models to compete in an open marketplace.

Anyone can contribute a model, and their decisions can be challenged through random spot checks, with the final word left to human juries.

This approach is designed to ensure resilience by combining diversity of models with human oversight. Also, incentives are built in for both developers and external observers to detect flaws.

Designing Institutions for Robustness

Buterin describes this as an “institution design” method, one where large language models from different contributors can be plugged in, rather than relying on a single centralized system.

He added that this creates real-time diversity, reducing the risk of manipulation and ensuring adaptability as new challenges emerge.

Earlier in August, Buterin criticized the push toward highly autonomous AI agents, saying that increased human control generally improves both quality and safety.

He supports models that allow iterative editing and human feedback rather than those designed to operate independently for long periods.

next

Disclaimer: Coinspeaker is committed to providing unbiased and transparent reporting. This article aims to deliver accurate and timely information but should not be taken as financial or investment advice. Since market conditions can change rapidly, we encourage you to verify information on your own and consult with a professional before making any decisions based on this content.

Cryptocurrency News, Ethereum News, News


A crypto journalist with over 5 years of experience in the industry, Parth has worked with major media outlets in the crypto and finance world, gathering experience and expertise in the space after surviving bear and bull markets over the years. Parth is also an author of 4 self-published books.

Parth Dubey on LinkedIn


Source: https://www.coinspeaker.com/ai-governance-is-a-red-flag-vitalik-buterin-offers-an-alternative/

Opportunità di mercato
Logo Threshold
Valore Threshold (T)
$0.006458
$0.006458$0.006458
-6.54%
USD
Grafico dei prezzi in tempo reale di Threshold (T)
Disclaimer: gli articoli ripubblicati su questo sito provengono da piattaforme pubbliche e sono forniti esclusivamente a scopo informativo. Non riflettono necessariamente le opinioni di MEXC. Tutti i diritti rimangono agli autori originali. Se ritieni che un contenuto violi i diritti di terze parti, contatta crypto.news@mexc.com per la rimozione. MEXC non fornisce alcuna garanzia in merito all'accuratezza, completezza o tempestività del contenuto e non è responsabile per eventuali azioni intraprese sulla base delle informazioni fornite. Il contenuto non costituisce consulenza finanziaria, legale o professionale di altro tipo, né deve essere considerato una raccomandazione o un'approvazione da parte di MEXC.