google/gemini-2.5-flash

Gemini 2.5 Flash is a cost-efficient multimodal vision model designed for high-volume, low-latency tasks with strong long-context support.

Deposit $5 to get started: unlock API access and start running inference right away. See the Pricing section below for how far $5 goes at the listed per-token rates.
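As a quick sanity check on the deposit, the token math at the rates listed in the Pricing section works out as follows (the prices and the $5 figure are from this page; assuming the whole deposit goes to a single token type is purely illustrative):

```python
# Rates from the Pricing section of this page ($ per 1M tokens).
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.25

deposit = 5.00  # initial deposit in USD

# Upper bounds if the entire deposit were spent on one token type:
max_input_tokens = deposit / INPUT_PER_M * 1_000_000
max_output_tokens = deposit / OUTPUT_PER_M * 1_000_000

print(f"${deposit:.2f} buys up to {max_input_tokens / 1e6:.1f}M input tokens")
print(f"or up to {max_output_tokens / 1e6:.1f}M output tokens")
```

In practice a request mixes input and output tokens, so real usage lands between these two bounds.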

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "google/gemini-2.5-flash",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 8192,
  "temperature": 0.2,
  "stream": true,
  "top_p": 1
}'
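For reference, the same call can be assembled with only the Python standard library. The endpoint, headers, and body mirror the curl example above; the request object is built but not sent here, and the API key is assumed to live in the `QUBRID_API_KEY` environment variable:

```python
import json
import os
import urllib.request

# Endpoint and payload taken from the curl example above.
url = "https://platform.qubrid.com/v1/chat/completions"
payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What is in this image? Describe the main elements."},
            {"type": "image_url",
             "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
        ],
    }],
    "max_tokens": 8192,
    "temperature": 0.2,
    "stream": True,
    "top_p": 1,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('QUBRID_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# To execute for real: urllib.request.urlopen(req), then read the
# streamed response line by line (stream=True yields incremental chunks).
```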

Technical Specifications

Model Architecture & Performance

Variant: Gemini 2.5 Flash
Context Length: Up to 1M tokens
Architecture: Sparse Mixture-of-Experts (MoE) Transformer with native multimodal support
Developer: Google DeepMind

Pricing

Pay-per-use, no commitments

Input Tokens: $0.15 / 1M tokens
Output Tokens: $0.25 / 1M tokens
Cached Input Tokens: $0.04 / 1M tokens
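The per-token rates above translate into request costs as follows; the helper below is a small sketch, and the token counts in the example are made up for illustration:

```python
# $ per 1M tokens, from the pricing table above.
RATES = {"input": 0.15, "output": 0.25, "cached_input": 0.04}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * RATES["input"]
            + output_tokens * RATES["output"]
            + cached_input_tokens * RATES["cached_input"]) / 1_000_000

# e.g. a 100k-token prompt with a 10k-token answer:
cost = estimate_cost(100_000, 10_000)
print(f"${cost:.4f}")  # 0.015 + 0.0025 = $0.0175
```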

API Reference

Complete parameter documentation

stream (boolean, default: true): Enables streaming responses for real-time output.
temperature (number, default: 0.2): Controls randomness; higher values increase creativity.
max_tokens (number, default: 8192): Maximum number of tokens to generate in the response.
top_p (number, default: 1): Nucleus sampling; only tokens within the top_p probability mass are considered.
reasoning_effort (enum, default: medium): Adjusts the depth of reasoning and problem-solving effort (quality vs. latency).
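With `stream` enabled, the response arrives incrementally. Assuming the widely used OpenAI-style server-sent-events framing (`data:` lines carrying JSON chunks, terminated by `data: [DONE]`; this page does not document the exact wire format), a client can reassemble the text like this:

```python
import json

def collect_stream(lines):
    """Reassemble the full reply from SSE 'data:' lines.
    The OpenAI-style chunk shape is assumed, not confirmed by this page."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Example with two hypothetical chunks:
sample = [
    'data: {"choices":[{"delta":{"content":"A statue "}}]}',
    'data: {"choices":[{"delta":{"content":"on an island."}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # A statue on an island.
```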

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths
Large context window (up to 1M input tokens)
Flash-tier pricing and efficient inference
Supports function calling and structured outputs

Considerations
May underperform Pro variants on very complex reasoning

Use cases

Recommended applications for this model

Real-time visual analysis for dashboards and customer support
Fast structured extraction from images and documents
Vision-assisted workflows that prioritize latency and throughput
Agentic processing where large context is needed at lower cost

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid scaled our personalized outreach from hundreds to tens of thousands of prospects. AI-driven research and content generation doubled our campaign velocity without sacrificing quality."

Demand Generation Team

Marketing & Sales Operations