
Z.ai Releases GLM-4.7 Designed for Real-World Development Environments, Cementing Itself as “China’s OpenAI”


BEIJING, Dec. 27, 2025 /PRNewswire/ — On December 22, Z.ai released GLM-4.7, the latest iteration of its GLM large language model family. Designed to handle multi-step tasks in production, GLM-4.7 targets development environments that involve lengthy task cycles, frequent tool use, and higher demands for stability and consistency.

Built on GLM-4.6 with a Focus on Complex Development

GLM-4.7 is a step forward over GLM-4.6, with improved capabilities for developers. It features robust support for coding workflows, complex reasoning, and agentic-style execution, giving the model greater consistency in long, multi-step tasks and more stable behavior when interacting with external tools. For developers, this makes GLM-4.7 a more reliable tool for everyday production work.

The improvements extend beyond technical performance. GLM-4.7 also produces natural and engaging output for conversational, writing, and role-playing scenarios, evolving GLM towards a coherent open-source system. 

Designed for Real Development Workflows

Expectations for model quality have become a central focus for developers. In addition to following prompts or plans, a model needs to call the right tools and remain consistent across long, multi-step tasks. As task cycles lengthen, even minor errors can have far-reaching impacts, driving up debugging costs and stretching delivery timelines. GLM-4.7 was trained and evaluated with these real-world constraints in mind.

In multi-language programming and terminal-based agent environments, the model shows greater stability across extended workflows. It already supports “think-then-act” execution patterns within widely used coding frameworks such as Claude Code, Cline, Roo Code, TRAE and Kilo Code, aligning more closely with how developers approach complex tasks in practice.

Z.ai evaluated GLM-4.7 on 100 real programming tasks in a Claude Code-based development environment, covering frontend, backend and instruction-following scenarios. Compared with GLM-4.6, the new model delivers clear gains in task completion rates and behavioral consistency. This reduces the need for repeated prompt adjustments and allows developers to focus more directly on delivery. Due to its excellent results, GLM-4.7 has been selected as the default model for the GLM Coding Plan.

Reliable Performance Across Tool Use and Coding Benchmarks

Across a range of code generation and tool use benchmarks, GLM-4.7 delivers competitive overall performance. On BrowseComp, a benchmark focused on web-based tasks, the model scores 67.5. On τ²-Bench, which evaluates interactive tool use, GLM-4.7 achieves a score of 87.4, the highest reported result among publicly available open-source models to date.

In major programming benchmarks including SWE-bench Verified, LiveCodeBench v6, and Terminal Bench 2.0, GLM-4.7 performs at or above the level of Claude Sonnet 4.5, while showing clear improvements over GLM-4.6 across multiple dimensions.

On Code Arena, a large-scale blind evaluation platform with more than one million participants, GLM-4.7 ranks first among open-source models and holds the top position among models developed in China.

More Predictable and Controllable Reasoning

GLM-4.7 introduces more fine-grained control over how the model reasons through long-running and complex tasks. As artificial intelligence systems integrate into production workflows, such capabilities have become an increasing focus for developers. GLM-4.7 is able to maintain consistency in its reasoning across multiple interactions, while also adjusting the depth of reasoning according to task complexity. This makes its behavior within agentic systems more predictable over time. Additionally, Z.ai is actively exploring new ways to deploy AI at scale as it develops and refines the GLM series.

Improvements in Front-end Generation and General Capabilities

Beyond functional correctness, GLM-4.7 shows a noticeably more mature understanding of visual structure and established front-end design conventions. In tasks such as generating web pages or presentation materials, the model tends to produce layouts with more consistent spacing, clearer hierarchy, and more coherent styling, reducing the need for manual revisions downstream.

At the same time, improvements in conversational quality and writing style have broadened the model’s range of use cases. These changes make GLM-4.7 more suitable for creative and interactive applications.

Ecosystem Integration and Open Access

GLM-4.7 is available via the BigModel.cn API and is fully integrated into the Z.ai full-stack development environment. Developers and partners across the global ecosystem have already incorporated the GLM Coding Plan into their tools, including platforms such as TRAE, Cerebras, YouWare, Vercel, OpenRouter and CodeBuddy. Adoption across developer tools, infrastructure providers and application platforms suggests that GLM-4.7 is moving into wider engineering and product use.
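For readers who want to try the model over the API, the sketch below shows one plausible way to construct a request. It assumes the BigModel.cn endpoint follows the OpenAI-compatible chat completions format that earlier GLM releases exposed; the endpoint URL, the `glm-4.7` model identifier, and the payload shape are assumptions drawn from that convention, not details confirmed by this release.

```python
import json
import urllib.request

# Assumed endpoint; earlier GLM models were served from a path of this
# shape on BigModel.cn. Verify against the official API documentation.
API_URL = "https://open.bigmodel.cn/api/paas/v4/chat/completions"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat request.

    The "glm-4.7" model name is an assumption based on the release's
    naming; check the provider's model list before use.
    """
    payload = {
        "model": "glm-4.7",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Sending the request requires a valid key, e.g.:
#   with urllib.request.urlopen(build_request("Hello", key)) as resp:
#       print(json.load(resp))
req = build_request("Write a Python function that reverses a string.", "YOUR_API_KEY")
```

Using only the standard library keeps the sketch self-contained; in practice, an OpenAI-compatible SDK pointed at the same base URL would be the more idiomatic choice.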

Z.ai to Become the “World’s First Large-Model Public Company”

Z.ai has announced that it aims to become the world’s first publicly listed large-model company by listing on the Stock Exchange of Hong Kong. This planned IPO marks the first time capital markets will welcome a listed company whose core business is the independent development of AGI foundation models.

In 2022, 2023, and 2024, Z.ai respectively earned 57.4 million RMB (~8.2 million USD), 124.5 million RMB (~17.7 million USD), and 312.4 million RMB (~44.5 million USD) in revenue. Between 2022 and 2024, the company’s compound annual revenue growth rate (CAGR) reached 130%. Revenue for the first half of 2025 was 190 million RMB (~27 million USD), marking three consecutive years of doubling revenue. During the reporting period, the company’s large-model-related business was its key growth driver.

GLM-4.7 Availability

Default Model for Coding Plan: https://z.ai/subscribe

Try it now: https://chat.z.ai/

Weights: https://huggingface.co/zai-org/GLM-4.7

Technical blog: https://z.ai/blog/glm-4.7

About Z.ai

Founded in 2019, Z.ai originated from the commercialization of technological achievements at Tsinghua University. Its team pioneered large-model research in China. Leveraging its original GLM (General Language Model) pre-training architecture, Z.ai has built a full-stack model portfolio covering language, code, multimodality, and intelligent agents. Its models are compatible with more than 40 domestically produced chips, making it one of the few Chinese companies whose technical roadmap remains in step with global top-tier standards.

View original content to download multimedia: https://www.prnewswire.com/news-releases/zai-releases-glm-4-7-designed-for-real-world-development-environments-cementing-itself-as-chinas-openai-302649821.html

SOURCE Z.ai
