PANews reported on September 12 that Alibaba's Tongyi Qianwen has released its next-generation foundation model architecture, Qwen3-Next, and open-sourced the Qwen3-Next-80B-A3B series of models built on it. Compared with the Qwen3 MoE architecture, Qwen3-Next introduces four core improvements: a hybrid attention mechanism, a highly sparse MoE structure, a series of optimizations for more stable training, and a multi-token prediction mechanism that improves inference efficiency.

Based on this architecture, Alibaba trained the Qwen3-Next-80B-A3B-Base model, which has 80 billion total parameters but activates only 3 billion per token. The Base model matches or slightly exceeds the dense Qwen3-32B model in performance, while its training cost (GPU hours) is less than one-tenth that of Qwen3-32B. For contexts longer than 32K tokens, its inference throughput is more than ten times that of Qwen3-32B, giving it exceptional cost-effectiveness in both training and inference.
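The 80-billion-total / 3-billion-active split is the signature of a highly sparse MoE: a router sends each token to only a handful of experts, so the vast majority of weights sit idle on any given forward pass. The PyTorch sketch below shows generic top-k expert routing to illustrate the idea; it is not Qwen3-Next's actual implementation, and the layer sizes, expert count, and top_k value are illustrative assumptions.

```python
# Minimal sketch of highly sparse top-k MoE routing (illustrative only;
# not Qwen3-Next's actual implementation). All sizes are made up for the demo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, num_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        # Each token picks its top_k experts; every other expert never runs,
        # so the active parameter count is a small fraction of the total.
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize the chosen experts' gates
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():
                mask = idx[:, k] == e          # tokens routed to expert e in slot k
                out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(8, 512)
print(SparseMoE()(tokens).shape)  # torch.Size([8, 512]); 2 of 64 experts ran per token
```

With 64 experts and top_k = 2, only about 1/32 of the expert weights participate in each token's computation; the same mechanism, at a much larger scale, is how a model with 80 billion parameters can cost only roughly 3 billion per token at training and inference time.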


