deepseek-ai/deepseek-r1-distill-llama-70b
DeepSeek R1 Distill LLaMA 70B is a distilled variant of DeepSeek R1 built on the LLaMA 70B architecture, optimized for efficient, high-level reasoning and conversational intelligence. It delivers near frontier-level analytical performance while running on significantly smaller hardware than the full R1 model.
api_example.sh
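A minimal request sketch for this model. The endpoint path and environment-variable names below are illustrative assumptions, not documented values; the script only prints the request it would send (a dry run), so no API key is required.

```shell
#!/usr/bin/env bash
# Assumed endpoint layout (OpenAI-style chat completions); adjust to the
# platform's actual API base before use.
API_BASE="${QUBRID_API_BASE:-https://api.example.com/v1}"

# Build the JSON request body for the model.
PAYLOAD=$(cat <<'JSON'
{
  "model": "deepseek-ai/deepseek-r1-distill-llama-70b",
  "messages": [
    {"role": "user", "content": "Explain nucleus sampling in two sentences."}
  ],
  "stream": true,
  "temperature": 0.3,
  "max_tokens": 10000
}
JSON
)

# Dry run: show the request instead of sending it.
echo "POST ${API_BASE}/chat/completions"
echo "$PAYLOAD"
```

To actually send the request, pipe the payload into `curl` with your API key in an `Authorization` header.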
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.3 | Controls creativity and randomness. Higher values produce more diverse output. |
| max_tokens | number | 10000 | Defines the maximum number of tokens the model is allowed to generate. |
| top_p | number | 1 | Nucleus sampling: limits the token selection to a subset of top probability mass. |
| reasoning_effort | select | medium | Adjusts the depth of reasoning and problem-solving effort. Higher settings yield more thorough responses at the cost of latency. |
| reasoning_summary | select | auto | Controls the verbosity of reasoning explanations. 'Auto' lets the model decide the appropriate level, 'concise' provides brief summaries, and 'detailed' offers in-depth explanations. |
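The reasoning parameters from the table can be combined in a single request body. The sketch below writes an example to `request.json`; the file name and surrounding workflow are illustrative, while the parameter names and defaults come from the table above.

```shell
#!/usr/bin/env bash
# Write an example request body exercising the documented parameters,
# including reasoning_effort and reasoning_summary.
cat > request.json <<'JSON'
{
  "model": "deepseek-ai/deepseek-r1-distill-llama-70b",
  "messages": [
    {"role": "user", "content": "Prove that the square root of 2 is irrational."}
  ],
  "temperature": 0.3,
  "top_p": 1,
  "max_tokens": 10000,
  "reasoning_effort": "high",
  "reasoning_summary": "concise"
}
JSON

echo "Wrote $(wc -c < request.json) bytes to request.json"
```

Raising `reasoning_effort` trades latency for more thorough problem-solving, so reserve `high` for tasks that genuinely need multi-step reasoning.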
Explore the full request and response schema in our external API documentation
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Excellent reasoning and chain-of-thought capability | Slightly slower than smaller distilled models |
| Lower GPU memory requirement compared to the full model | Reasoning quality may vary in very complex tasks |
| Strong performance across technical and multilingual tasks | |
| Open-source and suitable for on-prem deployment | |
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."
Enterprise AI Team
Document Intelligence Platform
