
Qwen/Qwen3-Max

Qwen3-Max is Alibaba Cloud's most powerful closed-source model in the Qwen3 series, built on a 235B-parameter Mixture-of-Experts architecture with 22B parameters active per forward pass. It delivers frontier-level performance in complex reasoning, multilingual tasks, long-context understanding, and advanced coding, rivaling GPT-4o and Claude Sonnet on major benchmarks. The model is API-only; its weights are not publicly released.

Alibaba Cloud | Chat | 128K Tokens
Get API Key
Deposit $5 to get started. Unlock API access and start running inference right away. See how many million tokens $5 gets you.

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3-Max",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 4096,
  "stream": true,
  "top_p": 1
}'
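With "stream": true, responses of this kind typically arrive as OpenAI-style server-sent events: a series of `data: {...}` lines terminated by `data: [DONE]`. A minimal parsing sketch, assuming that event shape (the sample events below are illustrative, not real API output, and a production client should use a real JSON parser):

```shell
# Sketch: turn an OpenAI-style SSE stream into plain text.
parse_stream() {
  # keep only the payload after "data: ", stop at the [DONE] sentinel
  sed -n 's/^data: //p' | while IFS= read -r line; do
    [ "$line" = "[DONE]" ] && break
    # crude extraction of the "content" delta field
    content=$(printf '%s\n' "$line" | sed -n 's/.*"content":"\([^"]*\)".*/\1/p')
    printf '%s' "$content"
  done
  printf '\n'
}

# Simulated stream (illustrative events only):
printf 'data: {"choices":[{"delta":{"content":"Hello"}}]}\ndata: {"choices":[{"delta":{"content":" world"}}]}\ndata: [DONE]\n' | parse_stream
# → Hello world
```

In practice you would pipe the curl command above directly into `parse_stream`.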

Technical Specifications

Model Architecture & Performance

Variant Instruct
Model Size 235B params (22B active)
Context Length 128K Tokens
Quantization Proprietary
Tokens/sec 80
Architecture Sparse Mixture-of-Experts (MoE) Transformer: 235B total / 22B active parameters per token
Precision Proprietary (served via DashScope inference infrastructure)
License Proprietary (Alibaba Cloud DashScope API only)
Release Date April 2025
Developers Alibaba Cloud (QwenLM)

Pricing

Pay-per-use, no commitments

Input Tokens $1.20/1M Tokens
Output Tokens $6.00/1M Tokens
Cached Input Tokens $0.24/1M Tokens
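At these rates, request cost is a linear function of token counts; for example, $5 buys roughly 4.1M uncached input tokens at $1.20/M. A quick estimator sketch, assuming the per-million rates listed above:

```shell
# Sketch: estimate request cost in USD from the listed Qwen3-Max rates.
# Rates assumed from the pricing table: $1.20/M input, $6.00/M output,
# $0.24/M cached input.
estimate_cost() {
  # args: input_tokens output_tokens cached_input_tokens
  awk -v in_t="$1" -v out_t="$2" -v cached_t="$3" 'BEGIN {
    cost = in_t / 1e6 * 1.20 + out_t / 1e6 * 6.00 + cached_t / 1e6 * 0.24
    printf "%.4f\n", cost
  }'
}

# e.g. 100K input tokens + 4K output tokens, no cache:
estimate_cost 100000 4000 0
# → 0.1440
```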

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.7 Controls creativity and randomness. Higher values produce more diverse output.
max_tokens number 4096 Maximum number of tokens the model can generate.
top_p number 1 Nucleus sampling: restricts choices to the smallest set of tokens whose cumulative probability exceeds top_p. Lower values produce more predictable output.
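When stream is set to false, the response presumably arrives as a single JSON object in the OpenAI chat/completions shape (an assumption based on the sample request above, not confirmed by this page). A crude sketch of pulling the reply text out with sed; a real client should use a proper JSON parser or one of the SDKs:

```shell
# Sketch: extract the assistant reply from a non-streamed response.
# The sample JSON below is illustrative, not real API output.
extract_reply() {
  sed -n 's/.*"content": *"\([^"]*\)".*/\1/p'
}

sample='{"choices":[{"message":{"role":"assistant","content":"Qubits encode information in quantum states."}}]}'
printf '%s\n' "$sample" | extract_reply
# → Qubits encode information in quantum states.
```

Note this regex-based extraction breaks on replies containing escaped quotes, which is why the SDKs are the better option for anything beyond a quick shell test.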

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths
235B MoE architecture delivering frontier-level intelligence
Up to 128K context window
Strong multilingual performance (29+ languages)
Excellent instruction following and structured output
Hybrid thinking mode for complex reasoning tasks
Competitive with GPT-4o and Claude Sonnet on key benchmarks

Considerations
Closed-source: no self-hosting or weight access
Higher latency than smaller Qwen models
Requires DashScope API access
Higher cost per token than open-source alternatives

Use cases

Recommended applications for this model

Complex multi-step reasoning
Advanced coding and debugging
Research and analytical writing
Long-document summarization
Multilingual chat and translation
Enterprise chatbots and assistants

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid scaled our personalized outreach from hundreds to tens of thousands of prospects. AI-driven research and content generation doubled our campaign velocity without sacrificing quality."

Demand Generation Team

Marketing & Sales Operations