zai-org/GLM-5
GLM-5 is Zhipu AI's February 2026 flagship: a 744B-parameter sparse MoE (40B active) with interleaved (deep) thinking, fusing DeepSeek Sparse Attention and Multi-Token Prediction for frontier reasoning over a 200K-token context window.
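A quick back-of-the-envelope check of the sparsity described above — what fraction of the 744B total parameters is active for any given token:

```python
# Sparse-MoE routing activates only a subset of experts per token.
total_params = 744e9   # total parameter count
active_params = 40e9   # parameters active per token

fraction = active_params / total_params
print(f"{fraction:.1%} of parameters active per token")  # ≈ 5.4%
```

This roughly 5% activation ratio is what lets a 744B model serve tokens at a compute cost closer to a dense 40B model.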
Pricing
Pay-per-use, no commitments
Technical Specifications
Model Architecture & Performance
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.7 | Controls randomness; lower values make output more deterministic. |
| max_tokens | number | 4096 | Maximum number of tokens to generate. |
| top_p | number | 1 | Nucleus sampling threshold; only tokens within the top cumulative probability mass are considered. |
| enable_thinking | boolean | false | Toggle chain-of-thought reasoning mode. Set temperature=1.0 when enabled. |
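The parameters above can be assembled into a request body as sketched below. The payload shape and model identifier follow common chat-completion conventions and are illustrative, not taken from this page; the defaults and the temperature=1.0 rule for thinking mode come from the table.

```python
import json

def build_request(messages, *, stream=True, temperature=0.7,
                  max_tokens=4096, top_p=1.0, enable_thinking=False):
    """Assemble a chat-completion payload using the defaults from the
    parameter table. Field layout is an assumption, not a documented schema."""
    if enable_thinking:
        # The table recommends temperature=1.0 when thinking mode is on.
        temperature = 1.0
    return {
        "model": "zai-org/GLM-5",
        "messages": messages,
        "stream": stream,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "enable_thinking": enable_thinking,
    }

payload = build_request([{"role": "user", "content": "Hello"}],
                        enable_thinking=True)
print(json.dumps(payload, indent=2))
```

Centralizing the defaults in one builder keeps the thinking-mode temperature override from being forgotten at individual call sites.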
Explore the full request and response schema in our external API documentation
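Since `stream` defaults to true, a client must reassemble text from incremental chunks. A minimal sketch, assuming an OpenAI-style server-sent-events wire format (`data: {...}` lines ending with `data: [DONE]`) — the actual stream schema is not documented on this page:

```python
import json

def parse_sse_chunks(lines):
    """Extract text deltas from OpenAI-style 'data: {...}' SSE lines.
    The wire format here is an assumption, not a documented schema."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alives, etc.
        body = line[len("data: "):]
        if body.strip() == "[DONE]":
            break
        delta = json.loads(body)["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

mock_stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(parse_sse_chunks(mock_stream))  # Hello
```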
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| 744B MoE with 40B active parameters delivers frontier-level quality with efficient routing | Full checkpoint requires ~1.65TB of GPU memory at 200K context, limiting on-prem deployments |
| Interleaved/deep thinking keeps intermediate reasoning while exposing a toggle to control verbosity | Interleaved thinking increases latency and token usage when enabled |
| 200K-token window supports persistent context across large codebases or knowledge stores | Higher power and networking demand than slimmer GLM-4.x releases |
| Trained on 28.5T tokens with upgraded tool streaming and multi-agent orchestration support | |
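The ~1.65TB memory figure above can be sanity-checked with rough arithmetic, assuming BF16 weights (2 bytes/parameter) and attributing the remainder to KV cache and activations at 200K context — the precision and breakdown are assumptions, not stated on this page:

```python
params = 744e9
weight_bytes = params * 2            # BF16: 2 bytes per parameter
weights_tb = weight_bytes / 1e12     # ≈ 1.49 TB of raw weights

total_tb = 1.65                      # figure quoted in the table
kv_and_overhead_tb = total_tb - weights_tb
print(f"weights ≈ {weights_tb:.2f} TB, leaving ≈ {kv_and_overhead_tb:.2f} TB "
      "for KV cache/overhead at 200K context")
```

Quantizing the checkpoint (e.g., to FP8 at 1 byte/parameter) would roughly halve the weight footprint, which is why lower-precision serving is the usual route for on-prem deployments of models this size.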
Use cases
Recommended applications for this model
Build with zai-org/GLM-5 faster
Get deployment recipes, benchmark alerts, and GPU pricing updates for zai-org/GLM-5 and other chat models straight from the Qubrid team.
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."
AI Agents Team
Agent Systems & Orchestration
