<p align="center">
  <img src="githubbanner.png" alt="Kokoro TTS Banner">
</p>
# Kokoro TTS API
[Kokoro-82M model (pinned commit)](https://huggingface.co/hexgrad/Kokoro-82M/tree/a67f11354c3e38c58c3327498bc4bd1e57e71c50)
FastAPI wrapper for the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech model.
Dockerized with NVIDIA GPU support, a simple SQLite-backed request queue, and automatic chunking and stitching for long inputs and outputs.
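
The chunking/stitching step works roughly as sketched below. This is an illustrative simplification, not the repository's actual code: `generate_audio` is a placeholder for the model call, and the API performs this internally for each request.

```python
# Illustrative sketch of chunking/stitching (not the actual implementation):
# split long text at sentence boundaries, synthesize each chunk, concatenate audio.
import re
import numpy as np

def chunk_text(text: str, max_chars: int = 300) -> list[str]:
    """Greedily pack sentences into chunks under a character budget."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

def synthesize_long(text: str, generate_audio) -> np.ndarray:
    """Synthesize each chunk and stitch the waveforms into one array."""
    return np.concatenate([generate_audio(chunk) for chunk in chunk_text(text)])
```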
## Quick Start
```bash
# Start the API (will automatically clone source HF repo via git-lfs)
docker compose up --build
```
Test it out:
```bash
# From host terminal
python examples/test_tts.py "Hello world" --voice af_bella
```
## Performance Benchmarks
Benchmarking covered generation through the local API only (file transfer time excluded), across text lengths producing up to ~10 minutes of audio, recording processing time, token count, and output audio length. Tests were run on:
- Windows 11 Home w/ WSL2
- NVIDIA RTX 4060 Ti 16GB GPU, CUDA 12.1
- 11th Gen Intel i7-11700 @ 2.5GHz
- 64GB RAM
- Test text: randomized chunks from H.G. Wells' *The Time Machine*
<p align="center">
  <img src="examples/time_vs_output.png" width="40%" alt="Processing Time vs Output Length" style="border: 2px solid #333; padding: 10px; margin-right: 1%;">
  <img src="examples/time_vs_tokens.png" width="40%" alt="Processing Time vs Token Count" style="border: 2px solid #333; padding: 10px;">
</p>
- Average processing speed: ~3.4 seconds per minute of audio output
- Efficient token processing: ~0.01 seconds per token
- Scales well with longer texts, maintaining consistent performance
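
At ~3.4 seconds of processing per minute of output, the real-time factor is roughly 0.06: a full ~10 minute generation completes in about 34 seconds.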
## API Endpoints
```bash
GET /tts/voices # List available voices
POST /tts # Generate speech
GET /tts/{request_id} # Check generation status
GET /tts/file/{request_id} # Download audio file
```
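
A minimal Python client for the submit/poll/download flow might look like the sketch below. The JSON field names (`request_id`, `status`), the `"completed"` status value, and port 8000 are assumptions for illustration; `examples/test_tts.py` is the canonical client.

```python
# Sketch of the submit -> poll -> download flow. Field names ("request_id",
# "status", "completed") and port 8000 are assumptions; see examples/test_tts.py.
import time
import requests

BASE = "http://localhost:8000"

# Submit a generation request
resp = requests.post(f"{BASE}/tts", json={"text": "Hello world", "voice": "af_bella"})
resp.raise_for_status()
request_id = resp.json()["request_id"]

# Poll until generation finishes
while True:
    status = requests.get(f"{BASE}/tts/{request_id}").json()
    if status.get("status") == "completed":
        break
    time.sleep(1)

# Download the audio file
audio = requests.get(f"{BASE}/tts/file/{request_id}")
audio.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(audio.content)
```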
## Example Usage
List available voices:
```bash
python examples/test_tts.py
```
Generate speech:
```bash
# Default voice
python examples/test_tts.py "Your text here"
# Specific voice
python examples/test_tts.py --voice af_bella "Your text here"
# Get file path without downloading
python examples/test_tts.py --no-download "Your text here"
```
Generated files are saved in:
- With download: `examples/output/`
- Without download: `src/output/` (in API container)
## Requirements
- Docker
- NVIDIA GPU + CUDA
- nvidia-container-toolkit installed on host
## Model
This API uses the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) model from HuggingFace.
Visit the model page for more details about training, architecture, and capabilities. I have no affiliation with the model's authors; this wrapper was produced for ease of use and personal projects.
## License
This project is licensed under the Apache License 2.0 - see below for details:
- The Kokoro model weights are licensed under Apache 2.0 (see [model page](https://huggingface.co/hexgrad/Kokoro-82M))
- The FastAPI wrapper code in this repository is licensed under Apache 2.0 to match
- The inference code adapted from StyleTTS2 is MIT licensed
The full Apache 2.0 license text can be found at: https://www.apache.org/licenses/LICENSE-2.0