Qwen/Qwen3-Coder-30B-A3B-Instruct
Qwen3-Coder-30B-A3B-Instruct is a sparse Mixture-of-Experts (MoE) model with roughly 30.5B total parameters (about 3.3B activated per token) across 48 layers, supporting an extremely long native context of 262,144 tokens, extendable to around 1M in some deployments.
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.7 | Controls randomness; higher values produce more diverse but less deterministic output. |
| max_tokens | number | 65536 | Maximum tokens to generate in the response (suitable for long-form code or large refactors). |
| top_p | number | 0.8 | Nucleus sampling — controls token sampling diversity. |
Explore the full request and response schema in our external API documentation
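As a sketch of how the parameters above fit together, the snippet below builds a chat-completion request body using the table's defaults. The endpoint URL and the `QUBRID_API_URL` / `QUBRID_API_KEY` variable names are illustrative assumptions; substitute the real values from the API documentation.

```shell
#!/bin/sh
# Build a request body using the defaults from the parameter table above.
# NOTE: endpoint URL and env var names below are illustrative assumptions.
PAYLOAD=$(cat <<'EOF'
{
  "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
  "messages": [
    {"role": "user", "content": "Write a function that reverses a linked list."}
  ],
  "stream": true,
  "temperature": 0.7,
  "max_tokens": 65536,
  "top_p": 0.8
}
EOF
)
echo "$PAYLOAD"

# To send it (check the API reference for the actual endpoint path):
# curl -sS "$QUBRID_API_URL/v1/chat/completions" \
#   -H "Authorization: Bearer $QUBRID_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

With `"stream": true` the server returns incremental chunks rather than one final JSON object, so a real client should consume the response line by line.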
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Massive context window handles large codebases or long instructions | Full context requires substantial GPU VRAM; may cause OOM at 256K+ tokens |
| Sparse-MoE architecture gives efficient inference vs. dense 30B models | Sparse MoE output can vary depending on expert routing |
| Strong code generation, tool calling, and agent-style workflows | Quantization (e.g. FP8) is often needed for cost/performance but may reduce quality |
| Open-source / Apache 2.0 licensed (in many distributions) | |
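The VRAM consideration can be made concrete with back-of-the-envelope KV-cache arithmetic. The layer count (48) and context length (262,144) come from the spec above; the 4 KV heads and 128 head dimension are assumptions based on common grouped-query-attention configurations for this model family, so treat the result as a rough estimate, not a published figure.

```shell
#!/bin/sh
# Rough bf16 KV-cache size for one sequence at the full native context.
# LAYERS and CTX come from the model spec above; KV_HEADS and HEAD_DIM
# are ASSUMED values typical of GQA configs in this model family.
LAYERS=48
KV_HEADS=4
HEAD_DIM=128
CTX=262144
BYTES=2          # bf16 = 2 bytes per value
# Factor of 2 covers both keys and values.
KV_BYTES=$((2 * LAYERS * KV_HEADS * HEAD_DIM * CTX * BYTES))
KV_GIB=$((KV_BYTES / 1073741824))
echo "KV cache at ${CTX} tokens: ${KV_GIB} GiB"
```

Under these assumptions a single sequence at full context consumes on the order of 24 GiB of KV cache on top of the model weights, which is why shorter contexts or quantized KV caches are common in practice.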
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid's medical OCR and research parsing cut our document extraction time in half. We now have traceable pipelines and reproducible outputs that meet our compliance requirements."
Clinical AI Team
Research & Clinical Intelligence
