Stable Diffusion
This model generates and edits images from text prompts using a Latent Diffusion framework. It leverages two fixed, pretrained text encoders — OpenCLIP-ViT/G and CLIP-ViT/L — to understand and translate textual descriptions into visual representations.
api_example.sh
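As a minimal sketch of a text-to-image request: the endpoint URL, model identifier, and API key below are placeholders (the real values are in the API reference), and the parameter names follow the table further down this page.

```shell
#!/usr/bin/env sh
# Build the JSON request body. Parameter names match the documented table;
# the endpoint and API key are placeholders, not real values.
PAYLOAD=$(cat <<'EOF'
{
  "prompt": "a lighthouse at dusk, oil painting",
  "negative_prompt": "blurry, low quality",
  "width": 1024,
  "height": 1024,
  "steps": 30,
  "cfg": 7.5,
  "seed": 50
}
EOF
)

# Uncomment to send the request (requires a valid API key and the real endpoint):
# curl -s -X POST "https://api.example.com/v1/images/generate" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"

echo "$PAYLOAD"
```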
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| width | number | 1024 | Image width in pixels |
| height | number | 1024 | Image height in pixels |
| steps | number | 30 | Number of denoising steps |
| cfg | number | 7.5 | Guidance scale: how closely the output follows the prompt |
| seed | number | 50 | Random seed for reproducibility |
| negative_prompt | string | (none) | What to exclude from the image |
Explore the full request and response schema in our external API documentation
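The parameters above are all optional except the prompt; a client can fall back to the documented defaults when a value is not supplied. The snippet below is an illustrative sketch of that behavior (the variable names are ours, not part of the API):

```shell
#!/usr/bin/env sh
# Fill in each generation parameter with its documented default
# when the caller has not set it. Values come from the parameter table.
WIDTH="${WIDTH:-1024}"
HEIGHT="${HEIGHT:-1024}"
STEPS="${STEPS:-30}"
CFG="${CFG:-7.5}"
SEED="${SEED:-50}"

echo "width=$WIDTH height=$HEIGHT steps=$STEPS cfg=$CFG seed=$SEED"
```

Fixing `seed` makes runs reproducible: the same prompt and parameters with the same seed yield the same image, while omitting or varying it produces different samples.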
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Excellent prompt following | Still requires prompt engineering |
| High-quality image generation | Very complex, high-resolution scenes may need a more capable model or more compute |
| Good text rendering in images | |
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid helped us turn a collection of AI scripts into structured production workflows. We now have better reliability, visibility, and control over every run."
AI Infrastructure Team
Automation & Orchestration
