Qwen3.5 122B A10B API
Released February 24, 2026 | 256K-token context (up to 1M) | 122B parameters (10B active)
The Qwen3.5 122B A10B API enables advanced multimodal reasoning (text, image, and video), enterprise-grade document understanding and OCR, complex agentic workflows with function calling, long-horizon planning and analysis across its 256K context, GUI automation (ScreenSpot Pro: 70.4 vs. Claude Sonnet 4.5: 36.2), scientific and research-grade problem solving, and RAG over massive document repositories.

Qwen3.5-122B-A10B is the most powerful open-source model in the Qwen3.5 Medium series. With 122B total parameters and 10B active per token across a 48-layer hybrid architecture, it delivers the strongest knowledge, vision, and function-calling performance in the medium class: 86.6% on GPQA Diamond (vs. GPT-5 mini's 82.8%), 72.2% on BFCL-V4 tool calling (vs. GPT-5 mini's 55.5%), 92.1% on OCRBench, and 83.9% on MMMU. Text, image, and video input are supported natively via early fusion. The model is well suited for multimodal assistants that combine image understanding with grounded text reasoning in real-time workflows.
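Since the BFCL-V4 score above measures tool calling, the client-side plumbing for function calling can be sketched as below. This is a minimal illustration, not Qubrid documentation: the `get_current_weather` tool, its schema, and the `dispatch_tool_call` helper are hypothetical names introduced here; only the `tools` list follows the standard OpenAI chat-completions convention that OpenAI-compatible endpoints generally accept.

```python
import json

# Hypothetical local tool -- any function the model should be able to call.
def get_current_weather(city: str, unit: str = "celsius") -> dict:
    # Stubbed data for illustration; a real tool would query a weather service.
    return {"city": city, "temperature": 21, "unit": unit}

# JSON-schema description of the tool, passed to the API via the standard
# OpenAI-style `tools` parameter of client.chat.completions.create(...).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}]

# Maps tool names the model emits back to local Python callables.
REGISTRY = {"get_current_weather": get_current_weather}

def dispatch_tool_call(tool_call: dict) -> dict:
    """Execute one tool call returned by the model.

    `tool_call` mirrors the OpenAI response shape:
    {"function": {"name": ..., "arguments": "<json string>"}}.
    """
    fn = REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

# Simulated model output, standing in for a real
# client.chat.completions.create(..., tools=TOOLS) response.
fake_call = {"function": {"name": "get_current_weather",
                          "arguments": '{"city": "Paris"}'}}
print(dispatch_tool_call(fake_call))
# → {'city': 'Paris', 'temperature': 21, 'unit': 'celsius'}
```

In a real loop, the result returned by `dispatch_tool_call` would be appended to `messages` as a `"tool"` role message and the conversation re-sent so the model can compose its final answer.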
from openai import OpenAI

# Initialize the OpenAI client with the Qubrid base URL
client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key="QUBRID_API_KEY",
)

stream = client.chat.completions.create(
    model="Qwen/Qwen3.5-122B-A10B",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is in this image? Describe the main elements."
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    }
                }
            ]
        }
    ],
    max_tokens=16384,
    temperature=1,
    top_p=0.95,
    stream=True,
    presence_penalty=1.5
)

# Print streamed tokens as they arrive
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\n")

Enterprise Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
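As an illustration of the Kubernetes path, a self-hosted deployment could start from a minimal manifest like the sketch below. Every value here is a hypothetical placeholder — the image name, port, and replica count are assumptions, not Qubrid's published configuration; consult the official manifests and Helm charts for real values.

```yaml
# Hypothetical manifest -- image name, port, and replicas are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qwen35-122b-a10b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qwen35-122b-a10b
  template:
    metadata:
      labels:
        app: qwen35-122b-a10b
    spec:
      containers:
        - name: model-server
          image: qubrid/qwen3.5-122b-a10b:latest   # placeholder image tag
          ports:
            - containerPort: 8000                  # placeholder API port
```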
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid's medical OCR and research parsing cut our document extraction time in half. We now have traceable pipelines and reproducible outputs that meet our compliance requirements."
Clinical AI Team
Research & Clinical Intelligence
