deepseek-ai/deepseek-r1-distill-llama-70b

DeepSeek R1 Distill LLaMA 70B is optimized for efficient, high-level reasoning and conversational intelligence. It delivers near-frontier analytical performance while running on significantly smaller hardware than the full DeepSeek R1 model.

DeepSeek Chat 64k Tokens
Free trial credit: $1.00 (no credit card required)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "deepseek-ai/deepseek-r1-distill-llama-70b",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 0.3,
  "max_tokens": 10000,
  "stream": true,
  "top_p": 1
}'
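The same request can be issued from Python. Below is a minimal sketch using the `requests` library; the streaming chunk format is assumed to follow the common OpenAI-compatible `data: {...}` server-sent-events framing, which this page does not itself confirm:

```python
import json
import os


def build_payload(prompt: str) -> dict:
    """Request body mirroring the curl example above."""
    return {
        "model": "deepseek-ai/deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.3,
        "max_tokens": 10000,
        "stream": True,
        "top_p": 1,
    }


def stream_completion(prompt: str) -> None:
    import requests  # third-party: pip install requests

    resp = requests.post(
        "https://platform.qubrid.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['QUBRID_API_KEY']}"},
        json=build_payload(prompt),
        stream=True,
        timeout=120,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        # Assumed OpenAI-style SSE framing: each chunk arrives as "data: <json>"
        if line.startswith(b"data: ") and line != b"data: [DONE]":
            chunk = json.loads(line[6:])
            print(chunk["choices"][0]["delta"].get("content", ""), end="")
```

Set the `QUBRID_API_KEY` environment variable before calling `stream_completion`; check the external API documentation for the exact streaming schema.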

Technical Specifications

Model Architecture & Performance

Variant Instruct
Model Size 70B params
Context Length 64k Tokens
Quantization fp16
Tokens/Second 386
Architecture LLaMA-3.3-70B-Instruct (Distilled)
Precision fp16 optimized inference
License DeepSeek R1 License (MIT)
Release Date January 2025
Developers DeepSeek
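The Tokens/Second figure above gives a quick way to estimate decode time. This is a rough sketch only: it assumes 386 tok/s is sustained generation throughput and ignores prompt processing, queueing, and network overhead.

```python
# Throughput figure taken from the specification table above.
TOKENS_PER_SECOND = 386


def decode_time_seconds(output_tokens: int) -> float:
    """Estimated time to generate output_tokens at sustained throughput."""
    return output_tokens / TOKENS_PER_SECOND


# The curl example's max_tokens=10000 cap would take roughly 26 s to decode.
print(round(decode_time_seconds(10000), 1))  # ~25.9
```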

Pricing

Pay-per-use, no commitments

Input Tokens $1.20/1M Tokens
Output Tokens $1.80/1M Tokens
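A per-request cost works out directly from the two rates above; a small sketch of the arithmetic:

```python
# Rates from the pricing table above, converted to dollars per token.
PRICE_INPUT = 1.20 / 1_000_000
PRICE_OUTPUT = 1.80 / 1_000_000


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under pay-per-use pricing."""
    return input_tokens * PRICE_INPUT + output_tokens * PRICE_OUTPUT


# Example: 2,000 input tokens + 500 output tokens -> $0.0033
print(request_cost(2000, 500))
```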

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.3 Controls creativity and randomness. Higher values produce more diverse output.
max_tokens number 10000 Defines the maximum number of tokens the model is allowed to generate.
top_p number 1 Nucleus sampling: limits the token selection to a subset of top probability mass.
reasoning_effort select medium Adjusts the depth of reasoning and problem-solving effort. Higher settings yield more thorough responses at the cost of latency.
reasoning_summary select auto Controls the verbosity of reasoning explanations. 'Auto' lets the model decide the appropriate level, 'concise' provides brief summaries, and 'detailed' offers in-depth explanations.
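The top_p row above describes nucleus sampling. A minimal illustrative sketch of the filtering step (not the platform's implementation): tokens are sorted by probability, kept until their cumulative mass reaches top_p, and the survivors are renormalized.

```python
def top_p_filter(probs: dict, top_p: float) -> dict:
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize (nucleus sampling)."""
    kept, total = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    return {token: p / total for token, p in kept.items()}


# With top_p=0.8, the lowest-probability token "c" is cut and the rest renormalized.
print(top_p_filter({"a": 0.5, "b": 0.3, "c": 0.2}, 0.8))
```

At the default top_p=1 the whole distribution survives, which is why temperature is the primary knob in the curl example above.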

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths
Excellent reasoning and chain-of-thought capability
Lower GPU memory requirement compared to the full model
Strong performance across technical and multilingual tasks
Open-source and suitable for on-prem deployment

Considerations
Slightly slower than smaller distilled models
Reasoning quality may vary in very complex tasks

Use cases

Recommended applications for this model

Advanced reasoning and problem solving
Conversational AI
Technical and coding assistance
Long-form text generation
Math and logic tasks
Research and analysis

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java
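Because the endpoint in the examples above follows the OpenAI chat-completions shape, the widely used `openai` Python client may also work by pointing its `base_url` at the platform. This is an unverified assumption, not a documented integration; confirm against the external API documentation or the official Qubrid SDK.

```python
import os


def ask(prompt: str) -> str:
    from openai import OpenAI  # third-party: pip install openai

    # Assumption: the platform accepts OpenAI-compatible clients at /v1.
    client = OpenAI(
        base_url="https://platform.qubrid.com/v1",
        api_key=os.environ["QUBRID_API_KEY"],
    )
    resp = client.chat.completions.create(
        model="deepseek-ai/deepseek-r1-distill-llama-70b",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    return resp.choices[0].message.content
```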

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."

Enterprise AI Team

Document Intelligence Platform