Manage GPUs, Deploy AI Packages, or Fine-Tune and Train AI Models Directly on On-Prem GPU Server Appliances

Simplified AI Systems & Datacenter Management: Empowering AI & IT Admins

By streamlining datacenter management, simplifying developer resource allocation, and offering comprehensive GPU control, the Qubrid AI On-Premise GPU Management & System Controller empowers IT administrators to optimize their infrastructure for maximum efficiency and productivity.

  • Single Pane of Glass: Effortlessly manage all GPU servers across your datacenters from a centralized console.
  • Automated Deployment: Deploy GPU clusters with a few clicks, streamlining your infrastructure setup.
  • Centralized Updates: Maintain consistent and up-to-date systems across your datacenter with a user-friendly interface.
  • Seamless Updates: Effortlessly update operating systems, GPU drivers, Python versions, and common packages across all servers.
  • Developer-Centric Resource Management:
      ◦ Flexible Container Provisioning: Provision tailored compute containers for individual developers, ensuring consistent IT standards and customization options.
      ◦ Resource Allocation Control: Allocate compute and GPU resources to developers based on their needs, eliminating resource contention and maximizing utilization.
  • Advanced GPU Management:
      ◦ Fine-Grained Control: Create GPU compute resources ranging from individual GPU fractions to clusters with multiple GPUs.
      ◦ Comprehensive Monitoring: Track GPU and system resource usage across multiple nodes, gaining valuable insights into performance and resource utilization.
      ◦ Unified GPU Management: Manage diverse GPU types (e.g., NVIDIA H100, A100, L40S) from a single console, simplifying operations and reducing complexity.

Fully integrated AI Factory Experience

Manage your interactions with popular AI models through intuitive user interfaces.

  • Fully integrated with the NVIDIA AI Enterprise catalog (NIM, CUDA, NeMo, etc.)
  • Library of published open-source AI models for tuning and inference, available on demand
  • Fine-tune AI models on your local GPU server or scale across thousands of GPUs

NVIDIA NIM Microservices Integrated

Part of NVIDIA AI Enterprise, NVIDIA NIM is a set of easy-to-use inference microservices that accelerate the deployment of foundation models on any cloud or data center while helping keep your data secure.

  • Fully integrated with the NVIDIA AI Enterprise catalog (NIM, CUDA, NeMo, etc.)
  • Note: NVIDIA NIM requires a separate Enterprise AI license – please contact us for more info.
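
Deployed NIM microservices expose an OpenAI-compatible chat-completions API over HTTP. As a rough illustration of what a client sends to such an endpoint, the sketch below builds the request payload; the host, port, and model name are placeholders, not values tied to any specific deployment.

```python
# Sketch: the request body a client would POST to a NIM microservice's
# OpenAI-compatible /v1/chat/completions endpoint. Model name and host
# below are illustrative placeholders.
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama3-8b-instruct", "Summarize our Q3 GPU usage.")
body = json.dumps(payload)
# In practice this body would be POSTed to the running microservice, e.g.:
#   curl http://<nim-host>:8000/v1/chat/completions \
#        -H "Content-Type: application/json" -d "$body"
print(body)
```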

Easily Deploy Hugging Face AI Models On Your GPU Appliance

The Qubrid AI Controller software lets you easily deploy AI models of your choice from the Hugging Face repository. Simply enter the AI model ID, select the number of GPUs, and deploy the model. You can then run inference on these models on any GPU node in your infrastructure. Qubrid AI offers you the choice of curated open-source AI models, the NVIDIA-optimized NIM catalog, or a selection from thousands of models on Hugging Face – all deployable and manageable from the same software.
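
The "enter a model ID, pick a GPU count, deploy" flow can be sketched as a small spec builder. Note that `DeploymentSpec` and `make_spec` are hypothetical names for illustration, not the actual Qubrid Controller API; only the Hugging Face `<org>/<name>` repo-ID convention comes from the platform itself.

```python
# Illustrative sketch of the model-ID + GPU-count deployment flow.
# DeploymentSpec and make_spec are hypothetical, not the real Controller API.
from dataclasses import dataclass

@dataclass
class DeploymentSpec:
    model_id: str  # Hugging Face repo ID, e.g. "mistralai/Mistral-7B-v0.1"
    num_gpus: int  # GPUs to allocate on the target node

def validate_model_id(model_id: str) -> bool:
    """Hugging Face repo IDs have the form '<org>/<name>' with non-empty parts."""
    parts = model_id.split("/")
    return len(parts) == 2 and all(parts)

def make_spec(model_id: str, num_gpus: int) -> DeploymentSpec:
    if not validate_model_id(model_id):
        raise ValueError(f"not a valid Hugging Face model ID: {model_id!r}")
    if num_gpus < 1:
        raise ValueError("at least one GPU is required")
    return DeploymentSpec(model_id, num_gpus)

spec = make_spec("mistralai/Mistral-7B-v0.1", 2)
print(spec)
```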

No-Code Fine-Tuning and RAG

Fine-tuning an AI model does not have to be hard. With our AI appliances, you don’t have to be a programmer or data scientist to fine-tune a model. The same goes for RAG – just upload your departmental data and hit a button to take advantage of near real-time RAG capabilities.

  • Simple no-code fine-tuning and RAG, with the ability for advanced coding in Jupyter notebooks
  • Name your fine-tuned models and save them as templates
  • Input multiple data types, such as PDF files and images, with the push of a button
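
Behind the RAG button, the core step is retrieval: the uploaded data is split into chunks, each chunk is scored against the user's query, and the best matches are supplied to the model as context. The sketch below illustrates that step with simple word-overlap scoring; production systems use learned embeddings and a vector store, so this is conceptual only.

```python
# Minimal sketch of the retrieval step in RAG: score each chunk against
# the query, return the top matches. Bag-of-words cosine similarity is
# used purely for illustration; real RAG uses learned embeddings.
from collections import Counter
import math

def score(query: str, chunk: str) -> float:
    """Cosine similarity over bag-of-words term counts."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    dot = sum(q[w] * c[w] for w in set(q) & set(c))
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in c.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

chunks = [
    "Vacation policy: employees accrue 1.5 days per month.",
    "GPU cluster maintenance window is Sunday 02:00-04:00 UTC.",
    "Expense reports are due by the 5th of each month.",
]
print(retrieve("when is the GPU maintenance window", chunks, k=1)[0])
```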

Accelerate Deep Learning & Machine Learning Applications

No more headaches managing AI packages. Even with factory-loaded packages, managing and updating them manually is hard. The AI Controller automates that for you.

  • One-touch deployment of complete AI/ML deep learning packages (PyTorch, TensorFlow, Keras, etc.)
  • Automated installs and updates for your discovery and research needs
  • Continuous addition of new open-source tools

Scalability and Flexibility through Hybrid Cloud

Adapt to changing demands and scale your data center infrastructure seamlessly. 

  • Utilize lightweight containers with GUI support, ensuring efficient resource utilization and an enhanced user experience.
  • Burst to the Qubrid AI Hub (PaaS cloud) as demand increases to provide additional GPU capacity to your applications and workloads.
  • Enjoy a consistent experience across private AI infrastructure and the PaaS AI Hub, with the same GPU management and AI Factory interfaces.

Easy Installation & Setup

Simple installation steps – get started in minutes.

Minimum System Requirements

Ensure your system meets the following requirements to install the Qubrid AI – DC Controller:

  • 64-bit kernel and CPU support for virtualization
  • At least 4 GB of RAM
  • Minimum 4 CPU cores
  • Ubuntu 22.04 LTS
  • Python 3.10 or later
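
As a rough pre-install sanity check, the requirements above can be verified with a short script. This is an illustrative sketch, not an official installer step; the RAM check is omitted because reading total memory is platform-specific, and the accepted architecture strings are an assumption.

```python
# Hedged sketch: check the minimum requirements listed above
# (64-bit CPU, >= 4 CPU cores, Python >= 3.10). Illustrative only.
import os
import platform
import sys

MIN_CORES = 4
MIN_PYTHON = (3, 10)

def meets_python(version: tuple, minimum: tuple = MIN_PYTHON) -> bool:
    """True if (major, minor) meets the minimum Python version."""
    return tuple(version[:2]) >= minimum

def meets_cores(cores: int, minimum: int = MIN_CORES) -> bool:
    return cores >= minimum

def check_system() -> list[str]:
    """Return a list of failed checks; an empty list means all checks pass."""
    failures = []
    # Assumed set of 64-bit architecture names reported by platform.machine().
    if platform.machine() not in ("x86_64", "amd64", "AMD64", "aarch64", "arm64"):
        failures.append("64-bit CPU required")
    if not meets_cores(os.cpu_count() or 0):
        failures.append(f"at least {MIN_CORES} CPU cores required")
    if not meets_python(tuple(sys.version_info[:2])):
        failures.append(f"Python {MIN_PYTHON[0]}.{MIN_PYTHON[1]}+ required")
    return failures

print(check_system() or "all checks passed")
```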