<p align="center">
  <img src="githubbanner.png" alt="Kokoro TTS Banner">
</p>

# Kokoro TTS API
[Kokoro-82M model](https://huggingface.co/hexgrad/Kokoro-82M/tree/c3b0d86e2a980e027ef71c28819ea02e351c2667) · [Kokoro-TTS-Zero space](https://huggingface.co/spaces/Remsky/Kokoro-TTS-Zero)

Dockerized FastAPI wrapper for the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech model
- OpenAI-compatible Speech endpoint, with voice combination functionality
- NVIDIA GPU accelerated inference (or CPU) option
- Very fast generation time (~30x realtime speed on a 4060Ti)
- Automatic chunking/stitching for long texts
- Streaming support w/ variable chunking to control latency
- Simple audio generation web UI utility
## Quick Start

The service can be accessed through either the API endpoints or the Gradio web interface.

1. Install prerequisites:
   - Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) + [Git](https://git-scm.com/downloads)
   - Clone and start the service:
     ```bash
     git clone https://github.com/remsky/Kokoro-FastAPI.git
     cd Kokoro-FastAPI
     docker compose up --build
     ```
2. Run locally as an OpenAI-Compatible Speech Endpoint:
   ```python
   from openai import OpenAI

   client = OpenAI(
       base_url="http://localhost:8880",
       api_key="not-needed"
   )

   response = client.audio.speech.create(
       model="kokoro",
       voice="af_bella",
       input="Hello world!",
       response_format="mp3"
   )
   response.stream_to_file("output.mp3")
   ```
or visit http://localhost:7860

<p align="center">
  <img src="ui/GradioScreenShot.png" width="80%" alt="Gradio Web UI Screenshot" style="border: 2px solid #333; padding: 10px;">
</p>

## Features

<details>
<summary>OpenAI-Compatible Speech Endpoint</summary>

```python
# Using OpenAI's Python library
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8880", api_key="not-needed")

response = client.audio.speech.create(
    model="kokoro",  # Not used but required for compatibility; library defaults also accepted
    voice="af_bella",
    input="Hello world!",
    response_format="mp3"
)

response.stream_to_file("output.mp3")
```
Or via requests:
```python
import requests

# List available voices
response = requests.get("http://localhost:8880/v1/audio/voices")
voices = response.json()["voices"]

# Generate audio
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "model": "kokoro",  # Not used but required for compatibility
        "input": "Hello world!",
        "voice": "af_bella",
        "response_format": "mp3",  # Supported: mp3, wav, opus, flac, aac, pcm
        "speed": 1.0
    }
)

# Save audio
with open("output.mp3", "wb") as f:
    f.write(response.content)
```
Quick tests (run from another terminal):
```bash
python examples/test_openai_tts.py # Test OpenAI Compatibility
python examples/test_all_voices.py # Test all available voices
```
</details>

<details>
<summary>Voice Combination</summary>

- Averages model weights of any existing voicepacks (conceptual sketch below)
- Saves generated voicepacks for future use

Combine voices and generate audio:
```python
import requests

# List available voices
response = requests.get("http://localhost:8880/v1/audio/voices")
voices = response.json()["voices"]

# Create combined voice (saves locally on server)
response = requests.post(
    "http://localhost:8880/v1/audio/voices/combine",
    json=[voices[0], voices[1]]
)
combined_voice = response.json()["voice"]

# Generate audio with the combined voice
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": combined_voice,
        "response_format": "mp3"
    }
)
```
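
Under the hood, combining voices amounts to averaging the voicepack tensors. A minimal sketch of the idea (illustrative only; the file paths and exact averaging logic here are assumptions, not the server's implementation):

```python
import torch

# Hypothetical voicepack paths; actual locations depend on your install
bella = torch.load("voices/af_bella.pt")
sarah = torch.load("voices/af_sarah.pt")

# Element-wise mean of the two style-embedding tensors
combined = torch.stack([bella, sarah]).mean(dim=0)

torch.save(combined, "voices/af_bella_af_sarah.pt")
```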
<p align="center">
  <img src="assets/voice_analysis.png" width="80%" alt="Voice Analysis Comparison" style="border: 2px solid #333; padding: 10px;">
</p>
</details>
<details>
<summary>Multiple Output Audio Formats</summary>

- mp3
- wav
- opus
- flac
- aac
- pcm
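
Mirroring the earlier client example, any of the formats above can be requested via `response_format`, for example Opus:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8880", api_key="not-needed")

response = client.audio.speech.create(
    model="kokoro",
    voice="af_bella",
    input="Hello world!",
    response_format="opus"  # any format from the list above
)
response.stream_to_file("output.opus")
```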
<p align="center">
  <img src="assets/format_comparison.png" width="80%" alt="Audio Format Comparison" style="border: 2px solid #333; padding: 10px;">
</p>
</details>
<details>
<summary>Gradio Web Utility</summary>

Access the interactive web UI at http://localhost:7860 after starting the service. Features include:
- Voice/format/speed selection
- Audio playback and download
- Text file or direct input

If you only want the API, comment out everything under and including the `gradio-ui` service in docker-compose.yml.

Currently, voices created via the API are accessible here, but voice combination/creation has not yet been added to the UI.
</details>
<details>
<summary>Streaming Support</summary>

```python
# OpenAI-compatible streaming
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8880", api_key="not-needed")

# Stream to file
with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_bella",
    input="Hello world!"
) as response:
    response.stream_to_file("output.mp3")

# Stream to speakers (requires PyAudio)
import pyaudio

player = pyaudio.PyAudio().open(
    format=pyaudio.paInt16,
    channels=1,
    rate=24000,
    output=True
)

with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_bella",
    response_format="pcm",
    input="Hello world!"
) as response:
    for chunk in response.iter_bytes(chunk_size=1024):
        player.write(chunk)
```
Or via requests:
```python
import requests

response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": "af_bella",
        "response_format": "pcm"
    },
    stream=True
)

with open("output.pcm", "wb") as f:
    for chunk in response.iter_content(chunk_size=1024):
        if chunk:
            # Process each chunk as it arrives (here: append the raw PCM bytes to a file)
            f.write(chunk)
```
<p align="center">
  <img src="assets/gpu_first_token_timeline_openai.png" width="45%" alt="GPU First Token Timeline" style="border: 2px solid #333; padding: 10px; margin-right: 1%;">
  <img src="assets/cpu_first_token_timeline_stream_openai.png" width="45%" alt="CPU First Token Timeline" style="border: 2px solid #333; padding: 10px;">
</p>

Key Streaming Metrics:
- First token latency @ chunk size
  - ~300ms (GPU) @ 400
  - ~3500ms (CPU) @ 200
- Adjustable chunking settings for real-time playback

*Note: Artifacts in intonation can increase with smaller chunks*
</details>

## Processing Details

<details>
<summary>Performance Benchmarks</summary>

Benchmarking was performed on generation via the local API using text lengths up to feature-length books (~1.5 hours output), measuring processing time and realtime factor. Tests were run on:
- Windows 11 Home w/ WSL2
- NVIDIA 4060Ti 16GB GPU @ CUDA 12.1
- 11th Gen i7-11700 @ 2.5GHz
- 64GB RAM
- WAV native output
- H.G. Wells - The Time Machine (full text)

<p align="center">
  <img src="assets/gpu_processing_time.png" width="45%" alt="Processing Time" style="border: 2px solid #333; padding: 10px; margin-right: 1%;">
  <img src="assets/gpu_realtime_factor.png" width="45%" alt="Realtime Factor" style="border: 2px solid #333; padding: 10px;">
</p>

Key Performance Metrics:
- Realtime Speed: Ranges between 25-50x (output audio length relative to generation time)
- Average Processing Rate: 137.67 tokens/second (cl100k_base)
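
For reference, these metrics reduce to two simple ratios. A minimal sketch of how they can be computed, assuming you time the request yourself and have the generated WAV on disk (this is not the project's benchmark script):

```python
import soundfile as sf
import tiktoken

def benchmark_metrics(text: str, wav_path: str, generation_seconds: float):
    # Realtime factor: seconds of audio produced per second of generation time
    audio, sample_rate = sf.read(wav_path)
    audio_seconds = len(audio) / sample_rate
    realtime_factor = audio_seconds / generation_seconds

    # Token throughput, counted with the cl100k_base encoding
    tokens = tiktoken.get_encoding("cl100k_base").encode(text)
    tokens_per_second = len(tokens) / generation_seconds

    return realtime_factor, tokens_per_second
```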
</details>

<details>
<summary>GPU vs. CPU</summary>

```bash
# GPU: Requires NVIDIA GPU with CUDA 12.1 support (~35x realtime speed)
docker compose up --build

# CPU: ONNX optimized inference (~2.4x realtime speed)
docker compose -f docker-compose.cpu.yml up --build
```
*Note: Overall speed may have decreased somewhat with the structural changes made to accommodate streaming. Looking into it.*
</details>

<details>
<summary>Natural Boundary Detection</summary>

- Automatically splits and stitches at sentence boundaries
- Helps reduce artifacts and enables long-form processing, since the base model is currently configured for only ~30 seconds of output (see the sketch below)
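
A minimal sketch of the idea (illustrative only; the regex and per-chunk length budget here are assumptions, not the server's actual implementation):

```python
import re

MAX_CHARS = 300  # assumed per-chunk budget, kept well under the ~30s model limit

def chunk_text(text: str) -> list[str]:
    # Split on sentence-ending punctuation followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())

    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when adding this sentence would exceed the budget
        if current and len(current) + len(sentence) + 1 > MAX_CHARS:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk is synthesized separately, then the audio segments are stitched together
```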
</details>

## Model and License

<details open>
<summary>Model</summary>

This API uses the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) model from HuggingFace.

Visit the model page for more details about training, architecture, and capabilities. I have no affiliation with the model's authors, and produced this wrapper for ease of use and personal projects.
</details>

<details>
<summary>License</summary>

This project is licensed under the Apache License 2.0 - see below for details:
- The Kokoro model weights are licensed under Apache 2.0 (see [model page](https://huggingface.co/hexgrad/Kokoro-82M))
- The FastAPI wrapper code in this repository is licensed under Apache 2.0 to match
- The inference code adapted from StyleTTS2 is MIT licensed

The full Apache 2.0 license text can be found at: https://www.apache.org/licenses/LICENSE-2.0
</details>