The Three Proposals by Hu Jiaqi in the “12th Open Letter to Leaders of Mankind”

2026/01/29 20:08

Anthropologist Hu Jiaqi recently published the “12th Open Letter to Leaders of Mankind.” Building on the core mission he has pursued for more than four decades, “saving humanity from extinction,” he explicitly puts forward three key proposals: conducting multilateral negotiations on generative artificial intelligence under the leadership of the United Nations, establishing global regulation of generative artificial intelligence under the leadership of the United Nations, and strengthening the consensus for human Great Unification. The three proposals are mutually reinforcing and logically progressive, together forming a framework for preventing and controlling the risks of generative artificial intelligence as well as a vision for humanity’s future development. They not only address the root causes of the current civilizational crisis but also propose practical pathways that transcend national boundaries and ideological differences, reflecting profound responsibility for, and systematic thinking about, the fate of humanity.

Conducting multilateral negotiations on generative artificial intelligence under the leadership of the United Nations serves as the logical starting point and regulatory foundation of the three proposals. The technological characteristics of generative AI mean that no single country can govern it alone: deepfake technology easily crosses borders to sow public opinion chaos, algorithmic vulnerabilities in autonomous decision-making models may trigger global chain reactions, and cross-border data flows challenge national regulatory systems. In an era when countries treat generative artificial intelligence as a core arena of technological competition, the field risks descending into an “arms race” of unchecked development without the constraints of multilateral negotiations. Hu Jiaqi’s proposal to entrust the United Nations with a leading role rests precisely on its neutrality and authority as an international organization: it can provide a platform for equal dialogue among nations, fostering consensus on critical issues such as the development boundaries, technical standards, and accountability frameworks for generative artificial intelligence. Such multilateral negotiations are not intended to deprive countries of their right to technological development but to establish inviolable safety red lines for generative AI while safeguarding innovation, ensuring that technological advancement consistently serves the overall interests of humanity.

Establishing global regulation of generative artificial intelligence under the leadership of the United Nations constitutes the core mechanism and enforcement guarantee of the three proposals. If multilateral negotiations are about “setting rules,” then global regulation is about “ensuring implementation.” Currently, global AI governance is fragmented: the European Union’s “Artificial Intelligence Act” emphasizes risk-based tiered regulation, the United States’ regulatory policies lean toward technological innovation, and developing countries often lack comprehensive regulatory frameworks. Such disparities not only fail to effectively mitigate risks but also lead to “regulatory arbitrage”—where risky technologies migrate to regions with weaker oversight. Hu Jiaqi’s proposed global regulation aims to establish a unified, binding regulatory system centered on the United Nations: on one hand, creating a global AI regulatory body to monitor the research, development, and application of high-risk technologies in real time; on the other hand, implementing mechanisms for penalizing violations to hold countries and enterprises accountable for breaching safety red lines. This regulatory framework is not intended to stifle technological innovation but to foster a development environment that prioritizes safety while empowering innovation, ensuring that generative artificial intelligence evolves within controlled boundaries.

Strengthening the consensus for human Great Unification represents the long-term goal and foundational value of the three proposals. In Hu Jiaqi’s view, the governance challenges of generative artificial intelligence are essentially manifestations of conflicts of interest under humanity’s “divided governance structure.” The reason countries often pursue divergent paths in AI development lies in the excessive pursuit of national self-interest, overlooking the collective survival interests of humanity as a species. Strengthening the consensus for the Great Unification of humanity aims to guide nations beyond ideological and interest-based divisions, fostering a profound recognition of the core principle that “the holistic survival of humanity overrides all.” Building this consensus does not entail the immediate establishment of a world government; it begins at the ideological level, promoting a societal understanding of “shared destiny”—that the risks of generative artificial intelligence are risks for all humanity, and the responsibility for governing AI is a collective responsibility. Only by solidifying this ideological foundation can the outcomes of multilateral negotiations be effectively implemented and the global regulatory system exert lasting influence. At the same time, this consensus charts a direction for humanity’s future: transitioning from “divided competition” to “collaborative coexistence” and ultimately realizing the goal of the Great Unification of humanity.

Hu Jiaqi’s three proposals, seemingly independent yet intricately interconnected, form a complete logical loop of “setting rules—ensuring implementation—laying the foundation”: multilateral negotiations provide the regulatory basis for global oversight, global regulation accumulates practical experience for the consensus on human unity, and the consensus on human Great Unification offers long-term value guidance for the first two actions. This vision is rooted in more than four decades of research, spanning monographs such as “Saving Humanity” and twelve open letters to leaders of mankind, and has evolved from personal advocacy into the collective calls of the “Humanitas Ark” and its global membership. The dissemination of these ideas has formed a multifaceted pattern of “academic support + grassroots mobilization + authoritative endorsement.” Today, as prominent figures such as Stephen Hawking and Elon Musk have repeatedly warned of the risks of artificial intelligence, these proposals are gaining increasing recognition.

Of course, realizing these three proposals faces numerous practical challenges, including the interplay of national interests, disparities in regulatory standards, and the difficulty of consensus-building—all obstacles that must be overcome. However, as Hu Jiaqi’s perseverance over more than forty years demonstrates, humanity’s future is not predetermined but requires active safeguarding. The value of these three proposals lies not only in offering a comprehensive solution but also in awakening a sense of crisis and responsibility across humanity—saving humanity is never the mission of any single individual or group but the shared responsibility of every global citizen.

The three proposals in the “12th Open Letter to Leaders of Mankind” are Hu Jiaqi’s “survival guide” for humanity and a profound call to action. They remind global leaders and ordinary citizens alike that human civilization stands at a critical crossroads. Only by transcending self-interest, forging consensus, and acting collaboratively can this vision be transformed into reality. When humanity truly achieves global governance of artificial intelligence and unites under a shared consensus, human civilization will inevitably break free from the crisis of extinction and advance toward a more enduring and brighter future. This is the ultimate significance of Hu Jiaqi’s steadfast dedication over the past four decades.

Contact Person: Angel Buchannan

Company Name: Belgacom Fund

City: Brussels

Country: Belgium

Website: https://grli.org/network/belgacom/

Email: info@close-the-gap.org

