Qwen/Qwen3-Coder-30B-A3B-Instruct

Qwen3-Coder-30B-A3B-Instruct is a sparse Mixture-of-Experts (MoE) model with roughly 30.5B total parameters (3.3B activated per token) across 48 layers, supporting extremely long context: 262,144 tokens natively, extendable to 1M in some deployments.

Free Trial Credit: $1.00 (no credit card required)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
  "messages": [
    {
      "role": "user",
      "content": "Write a Python function to calculate fibonacci sequence"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 65536,
  "stream": true,
  "top_p": 0.8
}'
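The same request can be sketched in Python using only the standard library; the endpoint, model name, and parameters are taken from the curl example above, while the `build_request` helper is illustrative rather than part of any official SDK.

```python
import json

# Endpoint and payload mirror the curl example above.
API_URL = "https://platform.qubrid.com/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> tuple[dict, bytes]:
    """Return the headers and JSON body for a chat-completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "max_tokens": 65536,
        "stream": True,
        "top_p": 0.8,
    }
    return headers, json.dumps(body).encode("utf-8")

# To actually send it (requires a valid key and network access):
# import os, urllib.request
# headers, data = build_request("Write a fibonacci function",
#                               os.environ["QUBRID_API_KEY"])
# req = urllib.request.Request(API_URL, data=data, headers=headers)
# with urllib.request.urlopen(req) as resp:
#     ...
```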

Technical Specifications

Model Architecture & Performance

Variant instruct
Model Size 30.5B total params (3.3B active)
Context Length 262,144 tokens (native)
Quantization fp16
Tokens/Second 389
Architecture Mixture-of-Experts (MoE) Transformer, 48 layers, GQA attention, 128 experts (8 active per forward pass)
Precision bfloat16 / FP8 (quantized variants available)
License Apache-2.0
Release Date 2025-07-31
Developers QwenLM / Alibaba Cloud
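A quick back-of-the-envelope check of the MoE figures quoted above: 8 of 128 experts are active per forward pass, yet the active-parameter share is higher than 8/128 because attention, embeddings, and shared layers run regardless of expert routing. The numbers below come from this page; the interpretation is an illustrative sketch.

```python
# Activation ratios from the figures on this page.
total_params_b = 30.5   # total parameters, billions
active_params_b = 3.3   # active per token, billions
experts_total = 128
experts_active = 8

expert_activation_ratio = experts_active / experts_total   # 6.25% of experts
param_activation_ratio = active_params_b / total_params_b  # ~10.8% of params

# The gap between the two ratios is the always-active (non-expert) share:
# attention, embeddings, and routing layers fire on every token.
print(f"{expert_activation_ratio:.2%} of experts active per token")
print(f"{param_activation_ratio:.2%} of parameters active per token")
```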

Pricing

Pay-per-use, no commitments

Input Tokens $0.79/1M Tokens
Output Tokens $0.79/1M Tokens
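With both directions priced at $0.79 per million tokens, per-request cost is simple arithmetic; the helper below is a sketch using the rates from the pricing table above (`estimate_cost` is not an official API).

```python
# Rates from the pricing table above, USD per 1M tokens.
INPUT_PER_M = 0.79
OUTPUT_PER_M = 0.79

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed pay-per-use rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 50K-token codebase prompt with a 5K-token response:
print(f"${estimate_cost(50_000, 5_000):.4f}")
```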

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.7 Controls randomness; higher values produce more diverse but less deterministic output.
max_tokens number 65536 Maximum tokens to generate in the response (suitable for long-form code or large refactors).
top_p number 0.8 Nucleus sampling — controls token sampling diversity.

Explore the full request and response schema in our external API documentation
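With `stream: true`, responses typically arrive as server-sent events. The parser below assumes OpenAI-style `data: {json}` chunks carrying `choices[0].delta.content` and a `data: [DONE]` terminator; verify the exact chunk shape against the external API documentation before relying on it.

```python
import json

def extract_stream_text(raw_sse: str) -> str:
    """Concatenate delta text from an OpenAI-style SSE stream.

    Assumes 'data: {json}' lines with choices[0].delta.content,
    terminated by 'data: [DONE]' -- check the API docs for the real shape.
    """
    out = []
    for line in raw_sse.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alives, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        out.append(delta.get("content", ""))
    return "".join(out)

sample = (
    'data: {"choices":[{"delta":{"content":"def fib"}}]}\n'
    'data: {"choices":[{"delta":{"content":"(n):"}}]}\n'
    "data: [DONE]\n"
)
print(extract_stream_text(sample))  # def fib(n):
```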

Performance

Strengths & considerations

Strengths
Massive context window handles large codebases or long instructions
Sparse-MoE architecture gives efficient inference vs. dense 30B models
Strong at code generation, tool calling, and agent-style workflows
Open source, Apache-2.0 licensed (in many distributions)

Considerations
Requires substantial GPU VRAM for full context; may hit OOM at 256K+ context
Sparse-MoE output can vary with expert routing
Quantization (FP8 etc.) is often needed for cost/performance but may reduce quality
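The VRAM consideration is dominated by the KV cache at long context. The estimate below uses the 48 layers stated above plus hypothetical GQA figures (4 KV heads, head dimension 128, fp16 cache) chosen for illustration; check the model config for the real values.

```python
# Rough KV-cache memory estimate for long-context serving.
# layers=48 comes from the spec above; kv_heads, head_dim, and
# bytes_per (fp16) are illustrative assumptions, not confirmed specs.
def kv_cache_gib(seq_len: int, layers: int = 48, kv_heads: int = 4,
                 head_dim: int = 128, bytes_per: int = 2) -> float:
    # factor of 2 covers both keys and values
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per / 2**30

print(f"{kv_cache_gib(262_144):.1f} GiB KV cache at native 262K context")
```

Under these assumptions a single full-context sequence needs 24 GiB of cache on top of the weights, which is why OOM at 256K+ context is a real concern without quantization or cache offloading.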

Use cases

Recommended applications for this model

Repository-scale code generation / refactoring
Large codebase analysis
Multi-file transformations
Complex code-generation / writing
Tool-calling / script / automation generation
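Tool-calling workflows like those above typically use an OpenAI-style `tools` schema in the request body; the sketch below assumes that schema is accepted here (verify against the API documentation), and the `run_tests` tool definition is entirely hypothetical.

```python
import json

# Hypothetical tool definition in the OpenAI-style function-calling
# schema; the tool name and parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool name
        "description": "Run the project's test suite and return results.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string",
                         "description": "Test file or directory to run."},
            },
            "required": ["path"],
        },
    },
}]

request_body = {
    "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [{"role": "user", "content": "Run the unit tests in tests/"}],
    "tools": tools,
}
print(json.dumps(request_body, indent=2)[:120])
```

The model would then respond with a `tool_calls` entry naming the function and its JSON arguments, which the caller executes and feeds back as a `tool` message.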

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java


Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
