WhisperX provides fast automatic speech recognition with word-level timestamps and speaker diarization.
This is a BentoML example project that demonstrates how to build a speech recognition inference API server with WhisperX. See here for a full list of BentoML example projects.
Prerequisites
If you want to test the project locally, install FFmpeg on your system.
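As a minimal sketch, FFmpeg can usually be installed from your system's package manager; the exact package name may vary by platform.

# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y ffmpeg

# macOS (Homebrew)
brew install ffmpeg

# Verify the installation
ffmpeg -version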
We have defined a BentoML Service in service.py. Run bentoml serve in your project directory to start the Service.
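For a quick local test, you can start the server and then send an audio file to it with curl. The endpoint path (/transcribe) and form field name (audio_file) below are assumptions for illustration; check service.py or the Swagger UI served at http://localhost:3000 for the exact API exposed by this project.

# Start the Service; it listens on http://localhost:3000 by default
bentoml serve

# In another terminal, send an audio file for transcription.
# Endpoint path and field name are assumptions; see service.py
# or the Swagger UI at http://localhost:3000 for the actual API.
curl -X POST \
  -F "audio_file=@meeting.wav" \
  http://localhost:3000/transcribe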
Please note that you may need to request access to pyannote/segmentation-3.0 and pyannote/speaker-diarization-3.1 on Hugging Face, and provide your Hugging Face access token when running the Service.
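One way to supply the token for local runs is to export it as an environment variable before starting the server; the HF_TOKEN variable name here is an assumption, so adjust it to match how service.py actually reads the token.

# Assumption: the Service reads the token from the HF_TOKEN
# environment variable; adapt this to how service.py loads it.
export HF_TOKEN=<your_hugging_face_token>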
After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. If you don't have a BentoCloud account yet, sign up for one first.
Make sure you are logged in to BentoCloud and have set your Hugging Face access token in bentofile.yaml, then run the following command to deploy the project.
bentoml deploy .
Once the application is up and running on BentoCloud, you can access it via the exposed URL.
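As a sketch, you can call the deployment the same way as the local server, using the URL shown for it in the BentoCloud console; the /transcribe path and audio_file field remain assumptions about this example's API.

# Replace the placeholder with your deployment's URL from BentoCloud.
# Endpoint path and field name are assumptions, as in the local test.
curl -X POST \
  -F "audio_file=@meeting.wav" \
  https://<your-deployment-url>/transcribe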