NVIDIA Nemotron 3 Nano 30B A3B BF16 API
Released December 15, 2025 | 262K-token context | 31.6B total / 3.2B active parameters
NVIDIA Nemotron 3 Nano 30B A3B BF16 API enables agentic AI systems and multi-agent orchestration; complex reasoning and problem-solving; code generation, debugging, and optimization; function calling and tool integration; long-document analysis and RAG applications; mathematical reasoning and STEM tasks; instruction following and task automation; enterprise chatbots with reasoning capabilities; financial analysis and decision support; and software development assistants.

Nemotron 3 Nano 30B-A3B is NVIDIA's flagship open reasoning model, built on a hybrid Mamba-Transformer Mixture-of-Experts (MoE) architecture. With 31.6B total parameters but only 3.2B active per forward pass, it delivers up to 3.3× higher throughput than comparable models while achieving state-of-the-art accuracy on reasoning, coding, and agentic benchmarks. The model supports up to 1M tokens of context and offers configurable reasoning depth with thinking-budget control. Standout strengths include the hybrid Mamba-2 + Transformer MoE design for efficiency and up to 3.3× faster inference than Qwen3-30B-A3B at better accuracy. It is optimized for production agent and assistant workloads where response quality, latency, and predictable operating cost all matter.
```python
from openai import OpenAI

# Initialize the OpenAI client with the Qubrid base URL
client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key="QUBRID_API_KEY",
)

stream = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    max_tokens=8192,
    temperature=0.3,
    top_p=1,
    stream=True,
)

# Print tokens as they arrive
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\n")
```

Enterprise Platform Integration
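Since the model advertises function calling, the same endpoint can be driven with the OpenAI SDK's standard tools interface. The sketch below assumes the endpoint implements OpenAI-style function calling; `get_stock_price` and its placeholder data are hypothetical stand-ins for your own business logic:

```python
import json


# Hypothetical local tool -- replace with real business logic.
def get_stock_price(symbol: str) -> str:
    prices = {"NVDA": 1234.56}  # placeholder data
    return json.dumps({"symbol": symbol, "price": prices.get(symbol)})


# Tool definition in the OpenAI function-calling (JSON Schema) format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Look up the latest price for a ticker symbol",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string"}},
            "required": ["symbol"],
        },
    },
}]


def run(prompt: str) -> str:
    from openai import OpenAI  # imported here so the tool code above is importable on its own

    client = OpenAI(base_url="https://platform.qubrid.com/v1", api_key="QUBRID_API_KEY")
    messages = [{"role": "user", "content": prompt}]

    # First round: the model decides whether to call the tool.
    first = client.chat.completions.create(
        model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
        messages=messages,
        tools=tools,
    )
    call = first.choices[0].message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Second round: feed the tool result back for the final answer.
    messages.append(first.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": get_stock_price(**args),
    })
    second = client.chat.completions.create(
        model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
        messages=messages,
        tools=tools,
    )
    return second.choices[0].message.content


if __name__ == "__main__":
    print(run("What is NVDA trading at right now?"))
```

Production code should also handle the case where the first response contains a plain answer and no `tool_calls`.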
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid's medical OCR and research parsing cut our document extraction time in half. We now have traceable pipelines and reproducible outputs that meet our compliance requirements."
Clinical AI Team
Research & Clinical Intelligence
