GPT-OSS 120B
Introducing gpt-oss-120b, OpenAI's flagship open-weight model in the gpt-oss series, built for advanced reasoning, large-scale agentic workloads, and enterprise-grade automation. With 117B total parameters in a Mixture-of-Experts (MoE) architecture, it activates roughly 5.1B parameters per token, delivering strong reasoning performance while keeping inference latency competitive. Designed for complex reasoning, multi-task agents, and long-horizon planning, gpt-oss-120b brings frontier-level capability to commercial and self-hosted deployments.
api_example.sh
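A minimal sketch of a request, assuming an OpenAI-compatible chat completions endpoint. The base URL, API key variable, and exact model identifier are placeholders; substitute the values from your deployment.

```sh
#!/usr/bin/env bash
# Minimal streaming chat completion request.
# https://api.example.com is a placeholder; use your deployment's endpoint.
curl https://api.example.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "stream": true,
    "temperature": 0.7,
    "max_tokens": 4096,
    "messages": [
      {"role": "user", "content": "Outline a rollout plan for a new API."}
    ]
  }'
```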
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.7 | Controls randomness. Higher values mean more creative but less predictable output. |
| max_tokens | number | 4096 | Maximum number of tokens to generate in the response. |
| top_p | number | 1 | Nucleus sampling: considers tokens with top_p probability mass. |
| effort | select | medium | Controls how much reasoning effort the model applies: low, medium, or high. |
| summary | select | concise | Controls the level of detail in the reasoning summary. |
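As a sketch of how these parameters combine in a single request body (again assuming an OpenAI-compatible endpoint; whether `effort` and `summary` sit at the top level or inside a nested reasoning object varies by provider, so confirm against the API reference linked below):

```sh
# Sampling plus reasoning controls in one request body. The placement
# of "effort" and "summary" is an assumption; check the API reference
# for the exact schema.
curl https://api.example.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "stream": false,
    "temperature": 0.7,
    "top_p": 1,
    "max_tokens": 4096,
    "effort": "high",
    "summary": "concise",
    "messages": [{"role": "user", "content": "Plan a three-step data migration."}]
  }'
```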
Explore the full request and response schema in our external API documentation
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| High-capacity MoE design for strong reasoning and generalization | Higher compute and memory requirements compared to smaller gpt-oss models |
| Optimized activation load for high throughput (~5.1B active parameters) | Latency may increase on single-GPU deployments |
| State-of-the-art performance under native MXFP4 quantization of the MoE weights | Fine-tuning recommended for highly specialized enterprise domains |
| Scales across multi-GPU clusters and distributed inference setups | |
| Up to 128K context window with efficient sparse attention | |
| Superior agentic and planning abilities for sequential decision tasks | |
| Built-in support for structured schema-based function calling (see the sketch after this table) | |
| Apache 2.0 license enabling commercial and derivative use | |
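The schema-based function calling noted above can be exercised by passing a `tools` array in the request. The example below uses the common OpenAI-style tool schema with a hypothetical `get_weather` function; the endpoint is again a placeholder.

```sh
# Structured function calling: the model receives a JSON-schema tool
# definition and may return a tool call instead of free text.
# Endpoint and "get_weather" are illustrative, not a published API.
curl https://api.example.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```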
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
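Qubrid's image names aren't listed here, but as an illustration, the publicly available vLLM server image can serve the open weights behind an OpenAI-compatible API. Note the MXFP4 checkpoint needs on the order of 80 GB of GPU memory.

```sh
# Illustrative self-hosted deployment using the public vLLM image,
# not Qubrid's official image. Requires NVIDIA GPUs with roughly
# 80 GB of total memory for the MXFP4 weights.
docker run --gpus all -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model openai/gpt-oss-120b
```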
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
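A sketch of the Helm flow; the repository URL, chart name, and values keys below are placeholders, not published Qubrid artifacts.

```sh
# Placeholder repo URL, chart name, and values keys; substitute the
# ones provided with your Qubrid account or internal registry.
helm repo add qubrid https://charts.example.com
helm repo update
helm install gpt-oss-120b qubrid/gpt-oss \
  --set model=openai/gpt-oss-120b \
  --set resources.gpu=2
```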
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid helped us turn a collection of AI scripts into structured production workflows. We now have better reliability, visibility, and control over every run."
AI Infrastructure Team
Automation & Orchestration
