Qwen3-Coder 30B A3B

Qwen3-Coder-30B-A3B-Instruct is a sparse Mixture-of-Experts (MoE) model with roughly 30.5B total parameters (about 3.3B activated per token) across 48 layers, and it supports extremely long context: 262,144 tokens natively, extendable to 1M in some deployments.

api_example.sh

curl -X POST "https://platform.qubrid.com/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": "Write a Python function to calculate the Fibonacci sequence"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'
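
For comparison, here is the same call from Python using the requests library. This is a minimal sketch: it assumes the endpoint returns an OpenAI-style response body (choices[0].message.content), so verify the exact schema against the API documentation.

chat_example.py

import os
import requests

# Same request as the curl example above, sent from Python.
resp = requests.post(
    "https://platform.qubrid.com/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['QUBRID_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
        "messages": [
            {
                "role": "user",
                "content": "Write a Python function to calculate the Fibonacci sequence",
            }
        ],
        "temperature": 0.7,
        "max_tokens": 500,
    },
    timeout=120,
)
resp.raise_for_status()
# Assumes an OpenAI-style response shape; confirm in the API docs.
print(resp.json()["choices"][0]["message"]["content"])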

Technical Specifications

Model Architecture & Performance

Model Size 30.5B Params (3.3B active)
Context Length 262K Tokens
Quantization fp16
Tokens/Second 389
License Apache-2.0
Release Date 2025-07-31
Developers QwenLM / Alibaba Cloud

Pricing

Pay-per-use, no commitments

Input Tokens $0.00079/1K Tokens
Output Tokens $0.00079/1K Tokens
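
Since input and output tokens are billed at the same rate, per-request cost is simple arithmetic. A quick sketch (the token counts are illustrative, not measured):

cost_estimate.py

# Input and output are both billed at $0.00079 per 1K tokens.
RATE_PER_1K_TOKENS = 0.00079

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the pay-per-use rate."""
    return (input_tokens + output_tokens) / 1_000 * RATE_PER_1K_TOKENS

# Example: a 200K-token codebase prompt plus a 4K-token completion.
print(f"${estimate_cost(200_000, 4_000):.4f}")  # -> $0.1612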

API Reference

Complete parameter documentation

Parameter    Type     Default  Description
stream       boolean  true     Enable streaming responses for real-time output.
temperature  number   0.7      Controls randomness; higher values produce more diverse but less deterministic output.
max_tokens   number   65536    Maximum tokens to generate in the response (suitable for long-form code or large refactors).
top_p        number   0.8      Nucleus sampling; controls token sampling diversity.

Explore the full request and response schema in our external API documentation
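
To illustrate the parameters above working together, here is a streaming sketch. It assumes the endpoint emits OpenAI-style server-sent events ("data: {json}" lines terminated by "data: [DONE]"); confirm the exact framing in the API documentation.

stream_example.py

import json
import os

import requests

payload = {
    "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [
        {"role": "user", "content": "Explain Python generators in two sentences"}
    ],
    "stream": True,      # streaming is the default, per the table above
    "temperature": 0.7,
    "top_p": 0.8,
    "max_tokens": 1024,  # far below the 65536 default; enough for a short answer
}

with requests.post(
    "https://platform.qubrid.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['QUBRID_API_KEY']}"},
    json=payload,
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # Assumes OpenAI-style SSE chunks: b"data: {json}" per line.
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        delta = json.loads(data)["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
print()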

Performance

Strengths & considerations

Strengths

Massive context window for large codebases or long instructions
Sparse MoE architecture enables efficient inference vs dense 30B models
Strong code generation, tool-calling, and agent-style workflows
Open-source with Apache 2.0 licensing in many distributions

Considerations

Requires strong GPU and VRAM for full context; risk of OOM with very long inputs (see the pre-flight check sketched below)
Sparse MoE routing can lead to output variance
Quantization may be required for cost and performance but can reduce quality
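
A practical way to avoid OOM and over-length errors is a pre-flight size check before sending a request. The sketch below uses a crude characters-per-token heuristic rather than the model's actual tokenizer, so treat its numbers as approximate:

context_check.py

# Rough pre-flight check against the 262,144-token context window.
# The 4-characters-per-token ratio is a heuristic, not Qwen's tokenizer;
# use the real tokenizer when you need precise counts.
MAX_CONTEXT_TOKENS = 262_144
CHARS_PER_TOKEN = 4  # rough average for English text and code

def fits_in_context(prompt: str, reserved_output_tokens: int = 4_096) -> bool:
    """Return True if the prompt plus reserved output likely fits the window."""
    approx_prompt_tokens = len(prompt) / CHARS_PER_TOKEN
    return approx_prompt_tokens + reserved_output_tokens <= MAX_CONTEXT_TOKENS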

Enterprise
Platform Integration

Docker Support

Official Docker images for containerized deployments

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."

Enterprise AI Team

Document Intelligence Platform