nvidia/NVIDIA-Nemotron-3-Super-120B-A12B
NVIDIA Nemotron-3-Super-120B-A12B is an open-weight LLM built for agentic reasoning and high-volume workloads. Using a hybrid LatentMoE architecture (Mamba-2 + MoE + Attention) with Multi-Token Prediction (MTP) and native NVFP4 pretraining on 25T tokens, it delivers up to 2.2x higher throughput than GPT-OSS-120B and 7.5x higher than Qwen3.5-122B. With a native 1M-token context window and configurable thinking mode, it is purpose-built for collaborative agents, long-context reasoning, and IT automation across 7 languages.
api_example.sh
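A minimal request sketch for the example above. The endpoint URL is a placeholder and the `chat_template_kwargs`/`enable_thinking` field is an assumption based on common OpenAI-compatible serving stacks; check your deployment's docs for the exact names.

```shell
#!/usr/bin/env sh
# Hypothetical endpoint -- replace with your deployment's base URL.
API_URL="https://api.example.com/v1/chat/completions"
MODEL="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B"

# Build the request body using the documented defaults;
# enable_thinking toggles the configurable reasoning mode (assumed field name).
PAYLOAD=$(cat <<EOF
{
  "model": "$MODEL",
  "messages": [{"role": "user", "content": "Summarize this incident log."}],
  "temperature": 1.0,
  "top_p": 0.95,
  "max_tokens": 16000,
  "stream": false,
  "chat_template_kwargs": {"enable_thinking": false}
}
EOF
)

# Send the request only when an API key is configured.
if [ -n "${API_KEY:-}" ]; then
  curl -s "$API_URL" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```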
Pricing
Pay-per-use, no commitments
Technical Specifications
Model Architecture & Performance
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 1 | Controls randomness in output. Recommended: 1.0 for all tasks. |
| max_tokens | number | 16000 | Maximum tokens to generate. |
| top_p | number | 0.95 | Controls nucleus sampling. Recommended: 0.95 for all tasks. |
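With `stream` enabled (the default), responses arrive as Server-Sent Events: one JSON chunk per `data:` line, ending with `data: [DONE]`. A sketch of reassembling the text, with the stream simulated by a literal since the exact chunk schema (OpenAI-style `delta.content`) is an assumption:

```shell
#!/usr/bin/env sh
# Simulated SSE stream; the delta schema below is assumed, not confirmed by the docs.
STREAM='data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" world"}}]}
data: [DONE]'

# Pull each chunk's content field and concatenate into the full reply.
TEXT=$(printf '%s\n' "$STREAM" \
  | sed -n 's/.*"content":"\([^"]*\)".*/\1/p' \
  | tr -d '\n')
echo "$TEXT"   # -> Hello world
```

In a real client the same pipeline would read from `curl -sN` instead of a variable.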
Resources
Learn, watch, and build faster
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| LatentMoE: 512 experts / 22 active per token at the same compute cost as a standard MoE | Requires a minimum of 2× H100-80GB GPUs for local deployment |
| 2.2x throughput vs GPT-OSS-120B; 7.5x vs Qwen3.5-122B | Thinking mode adds latency overhead; low-effort mode recommended for simple queries |
| 60.47% SWE-Bench Verified (OpenHands); 83.73% MMLU-Pro; 79.23% GPQA | Not optimized for vision or multimodal inputs |
| Native 1M-token context: 91.75% on RULER @ 1M vs GPT-OSS-120B's 22.30% | |
| MTP speculative decoding: 3.45 avg acceptance length (up to 3x wall-clock speedup) | |
| Configurable reasoning mode via enable_thinking=True/False in the chat template | |
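The 2× H100-80GB floor can be sanity-checked with back-of-envelope arithmetic: at NVFP4 precision a weight costs roughly half a byte, so the 120B parameters alone occupy about 60 GB, leaving a single 80 GB card little headroom for activations and a 1M-token KV cache. A quick check (assumptions: 4 bits per weight, overhead not counted):

```shell
#!/usr/bin/env sh
# Rough weight-memory estimate; ignores KV cache, activations, and runtime overhead.
PARAMS_BILLION=120
BITS_PER_WEIGHT=4
WEIGHT_GB=$(( PARAMS_BILLION * BITS_PER_WEIGHT / 8 ))  # 120e9 params * 0.5 bytes
echo "weights: ~${WEIGHT_GB} GB"   # -> weights: ~60 GB
```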
Use cases
Recommended applications for this model
Build with nvidia/NVIDIA-Nemotron-3-Super-120B-A12B faster
Get deployment recipes, benchmark alerts, and GPU pricing updates for nvidia/NVIDIA-Nemotron-3-Super-120B-A12B and other chat models straight from the Qubrid team.
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid helped us turn a collection of AI scripts into structured production workflows. We now have better reliability, visibility, and control over every run."
AI Infrastructure Team
Automation & Orchestration
