
Over 200 leaders, experts demand global ‘red lines’ for AI use

More than 200 prominent politicians, public figures, and scientists released a letter urgently calling for binding international “red lines” to prevent dangerous artificial intelligence (AI) use. The letter was released to coincide with the 80th session of the United Nations General Assembly (UNGA).

The illustrious list of signees included ten Nobel Prize winners, eight former heads of state and ministers, and several leading AI researchers. They were joined by over 70 organizations worldwide, including Taiwan AI Labs, the Foundation for European Progressive Studies, AI Governance and Safety Canada, and the Beijing Academy of Artificial Intelligence.

“AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers,” read the letter. “We urgently call for international red lines to prevent unacceptable AI risks.”

Among the concerned figures putting their name to the call for AI caution was Nobel Peace Prize laureate Maria Ressa, who announced the letter in her opening speech at the UN General Assembly’s High-Level Week on Monday.

She warned that “without AI safeguards, we may soon face epistemic chaos, engineered pandemics, and systematic human rights violation.”

Ressa added that “history teaches us that when confronted with irreversible, borderless threats, cooperation is the only rational way to pursue national interests.”

The brief letter, published on a dedicated site called ‘red-lines.ai’, raised fears that AI could soon “far surpass human capabilities” and, in so doing, escalate risks such as widespread disinformation and manipulation of individuals. This, it claimed, could lead to national and international security concerns, mass unemployment, and systematic human rights violations.

“Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world,” the letter warned. “Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years.”

In order to meet this challenge, the various public figures and organizations who signed the letter called on governments to act decisively, “before the window for meaningful intervention closes.”

Specifically, they suggested that preventing these “unacceptable” risks requires an international agreement on clear and verifiable red lines that build upon and enforce existing global frameworks and voluntary corporate commitments.

“We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026,” said the letter.

This not-too-distant date was chosen because, according to the letter, the pace of AI development means that risks once seen as speculative are already emerging.

“Waiting longer could mean less room, both technically and politically, for effective intervention, while the likelihood of cross-border harm increases sharply,” said the signees. “That is why 2026 must be the year the world acts.”

Csaba Kőrösi, former President of the UN General Assembly and one of the notable signatories to the letter, argued that “humanity in its long history has never met intelligence higher than ours. Within a few years, we will. But we are far from being prepared for it in terms of regulations, safeguards, and governance.”

This sentiment was echoed by Ahmet Üzümcü, former Director General of the Organization for the Prohibition of Chemical Weapons and another signee of the letter, who said, “It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.”

Former President of Ireland Mary Robinson and former President of Colombia Juan Manuel Santos also put their names to the call. In addition to these international leaders were Nobel Prize recipients in chemistry, economics, peace and physics, as well as popular and award-winning authors such as Stephen Fry and Yuval Noah Harari.

“For thousands of years, humans have learned—sometimes the hard way—that powerful technologies can have dangerous as well as beneficial consequences,” said Harari, author of the 2011 book ‘Sapiens: A Brief History of Humankind,’ which spent 182 weeks on The New York Times best-seller list. “With AI, we may not get a chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas by itself, and escape our control.”

He added that “humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”

As well as being timed for the opening of the latest UN General Assembly, the letter’s release fell on the same day that OpenAI and Nvidia (NASDAQ: NVDA) announced a “landmark strategic partnership” covering the deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from the chipmaker to help power OpenAI’s next generation of AI infrastructure.

This deal between two of the world’s largest players in the AI space served to underscore the urgency of the AI red line letter.

Possible red lines

The website for the letter also provided a few examples of what these hypothetical red lines might look like, suggesting that they could focus either on AI behaviors (what AI systems can do) or on AI uses (how humans and organizations are allowed to use such systems).

The site emphasized that the campaign did not endorse any specific red lines, but it provided several examples related to the areas of most concern. These included prohibiting: the delegation of nuclear launch authority, or critical command-and-control decisions, to AI systems; the deployment and use of weapon systems that kill a human without meaningful human control and accountability; the use of AI systems for social scoring and mass surveillance; and the uncontrolled release of offensive cyber agents capable of disrupting critical infrastructure.

In terms of the feasibility of any of these controls, the site noted that certain red lines on AI behaviors are already being operationalized in the ‘Safety and Security’ frameworks of AI companies, such as Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, and DeepMind’s Frontier Safety Framework.


A realistic goal

To further demonstrate that the letter’s goals are reasonable, the site gave a few real-world examples from history showing that “international cooperation on high-stakes risks is entirely achievable.”

Two such cases were the Treaty on the Non-Proliferation of Nuclear Weapons (1970) and the Biological Weapons Convention (1975), which were negotiated and ratified at the height of the Cold War, “proving that cooperation is possible despite mutual distrust and hostility.”

More recently, it also pointed to the 2025 ‘High Seas Treaty’, which “provided a comprehensive set of regulations for high seas conservation and serves as a sign of optimism for international diplomacy.”


If controlled, AI can be a force for good

The concerns raised by the public figures, along with calls for increased rules and protections, came the same day that the UN’s climate chief, Simon Stiell, gave an interview to U.K. broadsheet The Guardian, in which he said governments must step in to regulate AI technology.

Stiell argued that if governments and authorities control AI, it could prove a “gamechanger” when it comes to combating the climate crisis.

“AI is not a ready-made solution, and it carries risks. But it can also be a gamechanger,” the UN climate chief told The Guardian. “Done properly, AI releases human capacity, not replaces it. Most important is its power to drive real-world outcomes: managing microgrids, mapping climate risk, guiding resilient planning.”

Stiell’s comments demonstrate that there is a desire from current international leaders—at least at the UN—to see appropriate laws, regulation and controls for AI, as much as to utilize the technology’s potential for positive change.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.


Watch: Demonstrating the potential of blockchain’s fusion with AI


Source: https://coingeek.com/over-200-leaders-experts-demand-global-red-lines-for-ai-use/

