Pricing is per inference request. Get 500 requests for $5!
Whisper-small is a Transformer-based encoder-decoder model, also referred to as a sequence-to-sequence model. It was trained on 680,000 hours of weakly labelled multilingual audio data. The checkpoints were trained on either English-only or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the same language as the audio. For speech translation, the model predicts transcriptions in a different language from the audio.
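To make the two tasks concrete, here is a minimal Python sketch that runs Whisper-small for both transcription and translation using the Hugging Face transformers library. This is an illustrative example rather than Qubrid-specific code, and the file name sample.wav is a placeholder for your own audio.

from transformers import pipeline

# Load the multilingual Whisper-small checkpoint for speech recognition.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Speech recognition: transcribe the audio in its own language.
print(asr("sample.wav")["text"])

# Speech translation: the multilingual checkpoint can translate the audio
# into English by setting the generation task to "translate".
print(asr("sample.wav", generate_kwargs={"task": "translate"})["text"])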
Whisper-small on Qubrid AI Model Studio
We have simplified how you can use or fine-tune Whisper-small in our AI Model Studio, running on Qubrid’s AI Cloud. Powered by NVIDIA GPUs, it delivers performance with simplicity so you can build your Whisper-small applications quickly, without needing to set up or install anything. Log in now to run inference on or fine-tune this model – no programming needed.
Learn how to fine-tune Whisper-small on the Qubrid AI Platform
AI Model Author: OpenAI
This model is not owned or developed by Qubrid AI. It has been developed and built by a third party to that party’s requirements for this application and use case.