NVIDIA Nemotron 3 Super 120B A12B

NVIDIA Nemotron 3 Super 120B A12B API

Released March 11, 2026 · 256K tokens (up to 1M) context · 120B parameters (12B active)

Documentation

The NVIDIA Nemotron 3 Super 120B A12B API enables agentic workflows and multi-agent collaboration, long-context reasoning (up to 1M tokens), IT ticket automation and high-volume enterprise workloads, complex tool use and multi-step function calling, retrieval-augmented generation (RAG), and software engineering and cybersecurity triaging.

NVIDIA Nemotron-3-Super-120B-A12B is an open-weight LLM built for agentic reasoning and high-volume workloads. Using a hybrid LatentMoE architecture (Mamba-2 + MoE + Attention) with Multi-Token Prediction (MTP) and native NVFP4 pretraining on 25T tokens, it delivers up to 2.2x higher throughput than GPT-OSS-120B and 7.5x higher than Qwen3.5-122B. With a native 1M-token context window and a configurable thinking mode, it is purpose-built for collaborative agents, long-context reasoning, and IT automation across 7 languages. Standout strengths include the LatentMoE design (512 experts, 22 active per token, at the same compute cost as a standard MoE) and its throughput advantage (2.2x vs GPT-OSS-120B; 7.5x vs Qwen3.5-122B). It is optimized for production agent and assistant workloads where response quality, latency, and predictable operating cost all matter.

from openai import OpenAI

# Initialize the OpenAI client with the Qubrid base URL
client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key="QUBRID_API_KEY",
)

stream = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    max_tokens=16000,
    temperature=1,
    top_p=0.95,
    stream=True,
)

# Print streamed tokens as they arrive
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\n")
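The capabilities above include complex tool use and multi-step function calling, and the endpoint is OpenAI-compatible, so tool calling follows the standard `tools` / `tool_calls` request shape. A minimal sketch of the client-side plumbing, using the same model name and base URL as the example above; the `get_ticket_status` tool is purely illustrative, not part of the API:

```python
import json

# Tool schema in the OpenAI-compatible format; this list is passed via the
# `tools` parameter of client.chat.completions.create(...).
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_ticket_status",
            "description": "Look up the status of an IT ticket by its ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ticket_id": {
                        "type": "string",
                        "description": "Ticket ID, e.g. IT-1024",
                    }
                },
                "required": ["ticket_id"],
            },
        },
    }
]

# Illustrative tool implementation; a real one would query a ticketing system.
def get_ticket_status(ticket_id: str) -> dict:
    return {"ticket_id": ticket_id, "status": "in_progress"}

REGISTRY = {"get_ticket_status": get_ticket_status}

def dispatch(name: str, arguments_json: str) -> str:
    """Run the tool the model requested and return a JSON string to send
    back as a role "tool" message."""
    args = json.loads(arguments_json)
    return json.dumps(REGISTRY[name](**args))

# When the model responds with tool_calls, each call carries a function
# name and JSON-encoded arguments:
#   result = dispatch(call.function.name, call.function.arguments)
#   messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
print(dispatch("get_ticket_status", '{"ticket_id": "IT-1024"}'))
```

The loop for multi-step calling is the usual one: send the tool result back as a `role: "tool"` message and let the model decide whether to call another tool or answer.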

Serverless

API access

INPUT: $0.10 / 1M tokens
CACHED INPUT: $0.04 / 1M tokens
OUTPUT: $0.50 / 1M tokens
Deploy using API
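At these per-million-token rates, per-request cost is simple arithmetic. A small helper, with the rates copied from the serverless pricing above and hypothetical token counts in the example:

```python
# Serverless rates from the pricing above, in USD per 1M tokens
RATE_INPUT = 0.10
RATE_CACHED_INPUT = 0.04
RATE_OUTPUT = 0.50

def estimate_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the serverless rates."""
    return (
        input_tokens * RATE_INPUT
        + cached_input_tokens * RATE_CACHED_INPUT
        + output_tokens * RATE_OUTPUT
    ) / 1_000_000

# e.g. a long-context call: 800K fresh input, 200K cached input, 4K output
print(f"${estimate_cost(800_000, 200_000, 4_000):.4f}")  # → $0.0900
```

Cached-input pricing matters at long context: reusing an 800K-token prefix across requests cuts that portion of the bill by 60%.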

Dedicated

Cloud GPU VM

Price starts at $1.25 / GPU / hr
Deploy with GPU VM

Interactive

Playground

INPUT: $0.10 / 1M tokens
CACHED INPUT: $0.04 / 1M tokens
OUTPUT: $0.50 / 1M tokens
Chat in Playground

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid helped us turn a collection of AI scripts into structured production workflows. We now have better reliability, visibility, and control over every run."

AI Infrastructure Team

Automation & Orchestration