<p align="center">
  <img src="githubbanner.png" alt="Kokoro TTS Banner">
</p>
# Kokoro TTS API
[Kokoro-82M model (pinned commit)](https://huggingface.co/hexgrad/Kokoro-82M/tree/8228a351f87c8a6076502c1e3b7e72e821ebec9a)

FastAPI wrapper for the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech model, providing an OpenAI-compatible endpoint with:
- NVIDIA GPU acceleration
- automatic chunking/stitching for long texts
- very fast generation (~35-49x realtime factor)
## Quick Start

1. Install prerequisites:
   - Install [Docker Desktop](https://www.docker.com/products/docker-desktop/)
   - Install [Git](https://git-scm.com/downloads) (or download and extract the zip)

2. Clone and start the service:
```bash
# Clone repository
git clone https://github.com/remsky/Kokoro-FastAPI.git
cd Kokoro-FastAPI

# Start the API (will automatically clone the source HF repo via git-lfs)
docker compose up --build
```
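Once the container is up, you can confirm the API is responding before running the example scripts below. A quick check (a sketch assuming the default port 8880 and the `/audio/voices` endpoint shown later in this README):

```python
import requests

# Ask the running server for its voice list; a 200 response means the API is up.
response = requests.get("http://localhost:8880/audio/voices")
response.raise_for_status()
print("Available voices:", response.json()["voices"])
```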
Test all voices (from another terminal):
```bash
python examples/test_all_voices.py
```
Test OpenAI compatibility:
```bash
python examples/test_openai_tts.py
```
## OpenAI-Compatible API
List available voices:
```python
import requests

response = requests.get("http://localhost:8880/audio/voices")
voices = response.json()["voices"]
```
Generate speech:
```python
import requests

response = requests.post(
    "http://localhost:8880/audio/speech",
    json={
        "model": "kokoro",  # Not used but required for compatibility
        "input": "Hello world!",
        "voice": "af_bella",
        "response_format": "mp3",  # Supported: mp3, wav, opus, flac
        "speed": 1.0
    }
)

# Save audio
with open("output.mp3", "wb") as f:
    f.write(response.content)
```
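Longer inputs use the same request shape; the server splits the text at sentence boundaries and stitches the audio back together (see Features below). A sketch for a multi-paragraph input, assuming a hypothetical `chapter.txt` file and WAV output:

```python
import requests

# Hypothetical input file; any long text works, as chunking happens server-side.
with open("chapter.txt", "r", encoding="utf-8") as f:
    long_text = f.read()

response = requests.post(
    "http://localhost:8880/audio/speech",
    json={
        "model": "kokoro",
        "input": long_text,            # long inputs are split and stitched automatically
        "voice": "af_bella",
        "response_format": "wav",
        "speed": 1.0
    }
)
response.raise_for_status()

with open("chapter.wav", "wb") as f:
    f.write(response.content)
```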
Using OpenAI's Python library:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8880", api_key="not-needed")

response = client.audio.speech.create(
    model="kokoro",  # Not used but required for compatibility, also accepts library defaults
    voice="af_bella",
    input="Hello world!",
    response_format="mp3"
)

response.stream_to_file("output.mp3")
```
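Recent releases of the `openai` Python library deprecate `stream_to_file` on the plain response object; if you see that warning, the streaming-response variant (available in current 1.x releases) works against the same local server:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8880", api_key="not-needed")

# Stream the audio to disk instead of buffering the whole response in memory.
with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_bella",
    input="Hello world!",
    response_format="mp3"
) as response:
    response.stream_to_file("output.mp3")
```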
## Performance Benchmarks
Benchmarking was performed against the local API on text lengths ranging up to feature-length books (~1.5 hours of output audio), measuring processing time and realtime factor. Test setup:
- Windows 11 Home w/ WSL2
- NVIDIA 4060Ti 16GB GPU @ CUDA 12.1
- 11th Gen i7-11700 @ 2.5GHz
- 64GB RAM
- WAV native output
- H.G. Wells - The Time Machine (full text)
<p align="center">
  <img src="examples/benchmarks/processing_time.png" width="45%" alt="Processing Time" style="border: 2px solid #333; padding: 10px; margin-right: 1%;">
  <img src="examples/benchmarks/realtime_factor.png" width="45%" alt="Realtime Factor" style="border: 2px solid #333; padding: 10px;">
</p>
Key Performance Metrics:
- Realtime Factor: ranges between 35-49x (output audio length divided by generation time)
- Average Processing Rate: 137.67 tokens/second
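For reference, the realtime factor is simply the length of the audio produced divided by the wall-clock time spent generating it. With illustrative numbers (not taken from the benchmark runs above):

```python
# Realtime factor (RTF) = seconds of audio produced / seconds spent generating it.
audio_seconds = 3600.0       # one hour of output audio
generation_seconds = 90.0    # hypothetical wall-clock generation time
rtf = audio_seconds / generation_seconds
print(f"RTF: {rtf:.1f}x")    # 40.0x, within the 35-49x range reported above
```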
## Features
- OpenAI-compatible API endpoints
- GPU-accelerated inference
- Multiple audio formats: mp3, wav, opus, flac (aac & pcm not yet implemented)
- Natural Boundary Detection:
  - Automatically splits and stitches audio at sentence boundaries to reduce artifacts and maintain performance (see the sketch below)
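The boundary-detection idea, in rough form: split the input at sentence boundaries, pack sentences into reasonably sized chunks, synthesize each chunk, and concatenate the audio. A minimal sketch of the splitting step only (an illustration, not the repository's actual implementation; `max_chars` is an arbitrary illustrative limit):

```python
import re

def split_sentences(text: str, max_chars: int = 300) -> list[str]:
    """Split text at sentence boundaries, packing sentences into chunks of at most max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())  # naive split on ., !, ? + whitespace
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then be synthesized separately and the audio stitched together.
print(split_sentences("First sentence. Second one! A third, somewhat longer sentence?"))
```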
## Model

This API uses the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) model from HuggingFace.
Visit the model page for more details about training, architecture, and capabilities. I have no affiliation with their work; this wrapper was produced for ease of use and for personal projects.

## License

This project is licensed under the Apache License 2.0 - see below for details:
- The Kokoro model weights are licensed under Apache 2.0 (see the [model page](https://huggingface.co/hexgrad/Kokoro-82M))
- The FastAPI wrapper code in this repository is licensed under Apache 2.0 to match
- The inference code adapted from StyleTTS2 is MIT licensed

The full Apache 2.0 license text can be found at: https://www.apache.org/licenses/LICENSE-2.0
## Sample
<div align="center">
  https://user-images.githubusercontent.com/338912d2-90f3-41fb-bca0-5db7b4e02287.mp4
</div>