<p align="center">
<img src="githubbanner.png" alt="Kokoro TTS Banner">
</p>

# <sub><sub>_`FastKoko`_ </sub></sub>

[Kokoro-TTS-Zero Demo Space](https://huggingface.co/spaces/Remsky/Kokoro-TTS-Zero)
[hexgrad/kokoro](https://github.com/hexgrad/kokoro)
[hexgrad/misaki](https://github.com/hexgrad/misaki)
[Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M/commit/9901c2b79161b6e898b7ea857ae5298f47b8b0d6)

Dockerized FastAPI wrapper for the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech model
- Multi-language support (English, Japanese, Korean, Chinese, _Vietnamese soon_)
- OpenAI-compatible Speech endpoint with NVIDIA GPU-accelerated or CPU inference via PyTorch
- ONNX support coming soon; see v0.1.5 and earlier for legacy ONNX support in the interim
- Debug endpoints for monitoring system stats, plus an integrated web UI at localhost:8880/web
- Phoneme generation and phoneme-based audio generation
- Per-word timestamped caption generation
- Voice mixing with weighted combinations
### Integration Guides
[Kubernetes Setup](https://github.com/remsky/Kokoro-FastAPI/wiki/Setup-Kubernetes) [DigitalOcean](https://github.com/remsky/Kokoro-FastAPI/wiki/Integrations-DigitalOcean) [SillyTavern](https://github.com/remsky/Kokoro-FastAPI/wiki/Integrations-SillyTavern) [OpenWebUI](https://github.com/remsky/Kokoro-FastAPI/wiki/Integrations-OpenWebUi)
## Get Started

<details>
<summary>Quickest Start (docker run)</summary>

Pre-built images are available, with multi-arch (ARM) support and baked-in models.
Refer to core/config.py for a full list of settings that can be managed via environment variables.
```bash
# The `latest` tag can be used, though it may have some unexpected bonus features which impact stability.
# Named versions should be pinned for your regular usage. Feedback/testing is always welcome.

docker run -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-cpu:latest # CPU, or:
docker run --gpus all -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-gpu:latest # NVIDIA GPU
```
</details>

<details>
<summary>Quick Start (docker compose)</summary>
1. Install prerequisites, and start the service using Docker Compose (Full setup including UI):
- Install [Docker](https://www.docker.com/products/docker-desktop/)
- Clone the repository:
```bash
git clone https://github.com/remsky/Kokoro-FastAPI.git
cd Kokoro-FastAPI

cd docker/gpu # For GPU support
# or cd docker/cpu # For CPU support
docker compose up --build

# *Note for Apple Silicon (M1/M2) users:
# The current GPU build relies on CUDA, which is not supported on Apple Silicon.
# If you are on an M1/M2/M3 Mac, please use the `docker/cpu` setup.
# MPS (Apple's GPU acceleration) support is planned but not yet available.

# Models will auto-download, but if needed you can manually download:
python docker/scripts/download_model.py --output api/src/models/v1_0

# Or run directly via UV:
./start-gpu.sh # For GPU support
./start-cpu.sh # For CPU support
```
</details>

<details>
<summary>Direct Run (via uv)</summary>

1. Install prerequisites:
- Install [astral-uv](https://docs.astral.sh/uv/)
- Install [espeak-ng](https://github.com/espeak-ng/espeak-ng) on your system if you want it available as a fallback for unknown words/sounds. The upstream libraries may attempt to handle this, but results have varied.
- Clone the repository:
```bash
git clone https://github.com/remsky/Kokoro-FastAPI.git
cd Kokoro-FastAPI
```
Run the [model download script](https://github.com/remsky/Kokoro-FastAPI/blob/master/docker/scripts/download_model.py) if you haven't already:
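
For example (this is the same command used in the Docker quick start above; the output path matches the one used there):

```bash
python docker/scripts/download_model.py --output api/src/models/v1_0
```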

Start directly via UV (with hot-reload):

Linux and macOS
```bash
./start-cpu.sh
# or
./start-gpu.sh
```

Windows
```powershell
.\start-cpu.ps1
# or
.\start-gpu.ps1
```

</details>
<details open>
<summary>Up and Running?</summary>

Run locally as an OpenAI-Compatible Speech Endpoint:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8880/v1", api_key="not-needed"
)
with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_sky+af_bella",  # single or multiple voicepack combo
    input="Hello world!"
) as response:
    response.stream_to_file("output.mp3")
```
- The API will be available at http://localhost:8880
- API Documentation: http://localhost:8880/docs
- Web Interface: http://localhost:8880/web
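
As a quick sanity check that the server is up, you can list the available voices (a minimal example; the same endpoint appears in the Python snippets below):

```bash
curl http://localhost:8880/v1/audio/voices
```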
<div align="center" style="display: flex; justify-content: center; gap: 10px;">
<img src="assets/docs-screenshot.png" width="42%" alt="API Documentation" style="border: 2px solid #333; padding: 10px;">
<img src="assets/webui-screenshot.png" width="42%" alt="Web UI Screenshot" style="border: 2px solid #333; padding: 10px;">
</div>

</details>

## Features

<details>
<summary>OpenAI-Compatible Speech Endpoint</summary>

```python
# Using OpenAI's Python library
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8880/v1", api_key="not-needed")

response = client.audio.speech.create(
    model="kokoro",
    voice="af_bella+af_sky",  # see /api/src/core/openai_mappings.json to customize
    input="Hello world!",
    response_format="mp3"
)

response.stream_to_file("output.mp3")
```
Or via requests:

```python
import requests

response = requests.get("http://localhost:8880/v1/audio/voices")
voices = response.json()["voices"]

# Generate audio
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "model": "kokoro",
        "input": "Hello world!",
        "voice": "af_bella",
        "response_format": "mp3",  # Supported: mp3, wav, opus, flac
        "speed": 1.0
    }
)

# Save audio
with open("output.mp3", "wb") as f:
    f.write(response.content)
```

Quick tests (run from another terminal):
```bash
python examples/assorted_checks/test_openai/test_openai_tts.py # Test OpenAI Compatibility
python examples/assorted_checks/test_voices/test_all_voices.py # Test all available voices
```
</details>

<details>
<summary>Voice Combination</summary>

- Weighted voice combinations using ratios (e.g., "af_bella(2)+af_heart(1)" for 67%/33% mix)
- Ratios are automatically normalized to sum to 100%
- Available through any endpoint by adding weights in parentheses
- Saves generated voicepacks for future use

Combine voices and generate audio:
```python
import requests

response = requests.get("http://localhost:8880/v1/audio/voices")
voices = response.json()["voices"]

# Example 1: Simple voice combination (50%/50% mix)
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": "af_bella+af_sky",  # Equal weights
        "response_format": "mp3"
    }
)

# Example 2: Weighted voice combination (67%/33% mix)
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": "af_bella(2)+af_sky(1)",  # 2:1 ratio = 67%/33%
        "response_format": "mp3"
    }
)

# Example 3: Download combined voice as .pt file
response = requests.post(
    "http://localhost:8880/v1/audio/voices/combine",
    json="af_bella(2)+af_sky(1)"  # 2:1 ratio = 67%/33%
)

# Save the .pt file
with open("combined_voice.pt", "wb") as f:
    f.write(response.content)

# Use the downloaded voice file
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": "combined_voice",  # Use the saved voice file
        "response_format": "mp3"
    }
)
```

<p align="center">
<img src="assets/voice_analysis.png" width="80%" alt="Voice Analysis Comparison" style="border: 2px solid #333; padding: 10px;">
</p>
</details>

<details>
<summary>Multiple Output Audio Formats</summary>

- mp3
- wav
- opus
- flac
- m4a
- pcm
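
A minimal sketch of requesting one of these formats through the same speech endpoint used above (here `flac`; any of the listed formats should work the same way):

```python
import requests

# Request FLAC output instead of the default MP3
response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "model": "kokoro",
        "input": "Hello world!",
        "voice": "af_bella",
        "response_format": "flac",
    },
)
with open("output.flac", "wb") as f:
    f.write(response.content)
```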

<p align="center">
<img src="assets/format_comparison.png" width="80%" alt="Audio Format Comparison" style="border: 2px solid #333; padding: 10px;">
</p>
</details>

<details>
<summary>Streaming Support</summary>

```python
# OpenAI-compatible streaming
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8880/v1", api_key="not-needed")

# Stream to file
with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_bella",
    input="Hello world!"
) as response:
    response.stream_to_file("output.mp3")

# Stream to speakers (requires PyAudio)
import pyaudio

player = pyaudio.PyAudio().open(
    format=pyaudio.paInt16,
    channels=1,
    rate=24000,
    output=True
)

with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_bella",
    response_format="pcm",
    input="Hello world!"
) as response:
    for chunk in response.iter_bytes(chunk_size=1024):
        player.write(chunk)
```
Or via requests:
```python
import requests

response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": "af_bella",
        "response_format": "pcm"
    },
    stream=True
)
for chunk in response.iter_content(chunk_size=1024):
    if chunk:
        # Process streaming chunks
        pass
```
<p align="center">
<img src="assets/gpu_first_token_timeline_openai.png" width="45%" alt="GPU First Token Timeline" style="border: 2px solid #333; padding: 10px; margin-right: 1%;">
<img src="assets/cpu_first_token_timeline_stream_openai.png" width="45%" alt="CPU First Token Timeline" style="border: 2px solid #333; padding: 10px;">
</p>
Key Streaming Metrics:
- First token latency @ chunksize
  - ~300ms (GPU) @ 400
  - ~3500ms (CPU) @ 200 (older i7)
  - ~<1s (CPU) @ 200 (M3 Pro)
- Adjustable chunking settings for real-time playback

*Note: Artifacts in intonation can increase with smaller chunks*
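
For real-time playback without the OpenAI client, here is a hedged sketch that combines the raw `requests` stream above with the same PyAudio setup used in the client example (16-bit mono PCM at 24 kHz):

```python
import pyaudio
import requests

# Open an output stream matching the server's PCM format
player = pyaudio.PyAudio().open(
    format=pyaudio.paInt16, channels=1, rate=24000, output=True
)

response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={"input": "Hello world!", "voice": "af_bella", "response_format": "pcm"},
    stream=True,
)
for chunk in response.iter_content(chunk_size=1024):
    if chunk:
        player.write(chunk)  # play raw PCM chunks as they arrive
```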
</details>

## Processing Details

<details>
<summary>Performance Benchmarks</summary>

Benchmarking was performed on generation via the local API using text lengths up to feature-length books (~1.5 hours of output), measuring processing time and realtime factor. Tests were run on:
- Windows 11 Home w/ WSL2
- NVIDIA 4060Ti 16GB GPU @ CUDA 12.1
- 11th Gen i7-11700 @ 2.5GHz
- 64GB RAM
- WAV native output
- H.G. Wells - The Time Machine (full text)

<p align="center">
<img src="assets/gpu_processing_time.png" width="45%" alt="Processing Time" style="border: 2px solid #333; padding: 10px; margin-right: 1%;">
<img src="assets/gpu_realtime_factor.png" width="45%" alt="Realtime Factor" style="border: 2px solid #333; padding: 10px;">
</p>

Key Performance Metrics:
- Realtime Speed: Ranges between 35x-100x (output audio length relative to generation time)
- Average Processing Rate: 137.67 tokens/second (cl100k_base)

</details>

<details>
<summary>GPU Vs. CPU</summary>

```bash
# GPU: Requires NVIDIA GPU with CUDA 12.8 support (~35x-100x realtime speed)
cd docker/gpu
docker compose up --build

# CPU: PyTorch CPU inference
cd docker/cpu
docker compose up --build
```
*Note: Overall speed may have been reduced somewhat by the structural changes made to accommodate streaming; this is being looked into.*
</details>

<details>
<summary>Natural Boundary Detection</summary>

- Automatically splits and stitches at sentence boundaries
- Helps reduce artifacts and allows long-form processing, as the base model is currently only configured for approximately 30s of output

The model can process up to a 510 phonemized-token chunk at a time; however, this can often lead to 'rushed' speech or other artifacts. An additional layer of chunking is applied in the server that creates flexible chunks governed by `TARGET_MIN_TOKENS`, `TARGET_MAX_TOKENS`, and `ABSOLUTE_MAX_TOKENS`, which are configurable via environment variables and default to 175, 250, and 450 respectively.
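
A minimal configuration sketch for the docker compose setup (the variable names and defaults come from the description above; the service name is an assumption, so match it to the compose file you are using):

```yaml
services:
  kokoro-tts:  # hypothetical service name; adjust to your docker/gpu or docker/cpu compose file
    environment:
      - TARGET_MIN_TOKENS=175    # preferred minimum tokens per chunk (default 175)
      - TARGET_MAX_TOKENS=250    # preferred maximum tokens per chunk (default 250)
      - ABSOLUTE_MAX_TOKENS=450  # hard cap per chunk (default 450)
```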
</details>

<details>
<summary>Timestamped Captions & Phonemes</summary>

Generate audio with word-level timestamps without streaming:

```python
import requests
import base64
import json

response = requests.post(
    "http://localhost:8880/dev/captioned_speech",
    json={
        "model": "kokoro",
        "input": "Hello world!",
        "voice": "af_bella",
        "speed": 1.0,
        "response_format": "mp3",
        "stream": False,
    },
    stream=False
)

with open("output.mp3", "wb") as f:
    audio_json = json.loads(response.content)

    # Decode the base64 audio to bytes
    chunk_audio = base64.b64decode(audio_json["audio"].encode("utf-8"))

    # Write the decoded audio
    f.write(chunk_audio)

    # Print word-level timestamps
    print(audio_json["timestamps"])
```
Generate audio with word-level timestamps with streaming:
```python
import requests
import base64
import json

response = requests.post(
    "http://localhost:8880/dev/captioned_speech",
    json={
        "model": "kokoro",
        "input": "Hello world!",
        "voice": "af_bella",
        "speed": 1.0,
        "response_format": "mp3",
        "stream": True,
    },
    stream=True
)

f = open("output.mp3", "wb")
for chunk in response.iter_lines(decode_unicode=True):
    if chunk:
        chunk_json = json.loads(chunk)

        # Decode the base64 stream to bytes
        chunk_audio = base64.b64decode(chunk_json["audio"].encode("utf-8"))

        # Process streaming chunks
        f.write(chunk_audio)

        # Print word-level timestamps
        print(chunk_json["timestamps"])
```
</details>

<details>
<summary>Phoneme & Token Routes</summary>

Convert text to phonemes and/or generate audio directly from phonemes:
```python
import requests

def get_phonemes(text: str, language: str = "a"):
    """Get phonemes and tokens for input text"""
    response = requests.post(
        "http://localhost:8880/dev/phonemize",
        json={"text": text, "language": language}  # "a" for American English
    )
    response.raise_for_status()
    result = response.json()
    return result["phonemes"], result["tokens"]

def generate_audio_from_phonemes(phonemes: str, voice: str = "af_bella"):
    """Generate audio from phonemes"""
    response = requests.post(
        "http://localhost:8880/dev/generate_from_phonemes",
        json={"phonemes": phonemes, "voice": voice},
        headers={"Accept": "audio/wav"}
    )
    if response.status_code != 200:
        print(f"Error: {response.text}")
        return None
    return response.content

# Example usage
text = "Hello world!"
try:
    # Convert text to phonemes
    phonemes, tokens = get_phonemes(text)
    print(f"Phonemes: {phonemes}")  # e.g. ðɪs ɪz ˈoʊnli ɐ tˈɛst
    print(f"Tokens: {tokens}")      # Token IDs including start/end tokens

    # Generate and save audio
    if audio_bytes := generate_audio_from_phonemes(phonemes):
        with open("speech.wav", "wb") as f:
            f.write(audio_bytes)
        print(f"Generated {len(audio_bytes)} bytes of audio")
except Exception as e:
    print(f"Error: {e}")
```
See `examples/phoneme_examples/generate_phonemes.py` for a sample script.
</details>

<details>
<summary>Debug Endpoints</summary>

Monitor system state and resource usage with these endpoints:
- `/debug/threads` - Get thread information and stack traces
- `/debug/storage` - Monitor temp file and output directory usage
- `/debug/system` - Get system information (CPU, memory, GPU)
- `/debug/session_pools` - View ONNX session and CUDA stream status
Useful for debugging resource exhaustion or performance issues.
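
A minimal example of querying one of these routes (assuming the server is running on the default port):

```python
import requests

# Fetch current system information (CPU, memory, GPU) from the debug route
info = requests.get("http://localhost:8880/debug/system")
print(info.json())
```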
</details>

## Known Issues & Troubleshooting

<details>
<summary>Missing words & missing timestamps</summary>

The API automatically applies text normalization to input text, which may incorrectly remove or change some phrases. This can be disabled by adding `"normalization_options":{"normalize": false}` to your request JSON:
```python
import requests

response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "input": "Hello world!",
        "voice": "af_heart",
        "response_format": "pcm",
        "normalization_options": {
            "normalize": False
        }
    },
    stream=True
)
for chunk in response.iter_content(chunk_size=1024):
    if chunk:
        # Process streaming chunks
        pass
```
</details>

<details>
<summary>Versioning & Development</summary>

**Branching Strategy:**
* **`release` branch:** Contains the latest stable build, recommended for production use. Docker images tagged with specific versions (e.g., `v0.3.0`) are built from this branch.
* **`master` branch:** Used for active development. It may contain experimental features, ongoing changes, or fixes not yet in a stable release. Use this branch if you want the absolute latest code, but be aware it might be less stable. The `latest` Docker tag often points to builds from this branch.

Note: This is a *development*-focused project at its core.

If you run into trouble, you may need to roll back to an earlier release tag, or build from source, troubleshoot, and submit a PR.
Free and open source is a community effort, and there are only so many hours in a day. If you'd like to support the work, feel free to open a PR, buy me a coffee, or report any bugs or feature requests you find during use.

<a href="https://www.buymeacoffee.com/remsky" target="_blank">
<img
src="https://cdn.buymeacoffee.com/buttons/v2/default-violet.png"
alt="Buy Me A Coffee"
style="height: 30px !important;width: 110px !important;"
>
</a>
</details>

<details>
<summary>Linux GPU Permissions</summary>

Some Linux users may encounter GPU permission issues when running as non-root.
Nothing is guaranteed, but here are some common solutions; consider your security requirements carefully.

### Option 1: Container Groups (Likely the best option)
```yaml
services:
  kokoro-tts:
    # ... existing config ...
    group_add:
      - "video"
      - "render"
```
### Option 2: Host System Groups
```yaml
services:
  kokoro-tts:
    # ... existing config ...
    user: "${UID}:${GID}"
    group_add:
      - "video"
```
Note: May require adding host user to groups: `sudo usermod -aG docker,video $USER` and system restart.
### Option 3: Device Permissions (Use with caution)
```yaml
services:
  kokoro-tts:
    # ... existing config ...
    devices:
      - /dev/nvidia0:/dev/nvidia0
      - /dev/nvidiactl:/dev/nvidiactl
      - /dev/nvidia-uvm:/dev/nvidia-uvm
```
⚠️ Warning: Reduces system security. Use only in development environments.
Prerequisites: NVIDIA GPU, drivers, and container toolkit must be properly configured.
Visit [NVIDIA Container Toolkit installation](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) for more detailed information.
</details>

## Model and License

<details open>
<summary>Model</summary>

This API uses the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) model from HuggingFace.
Visit the model page for more details about training, architecture, and capabilities. I have no affiliation with any of their work, and produced this wrapper for ease of use and personal projects.
2025-01-01 21:50:00 -07:00
< / details >
< details >
2025-01-02 15:36:53 -07:00
< summary > License< / summary >
2024-12-30 04:53:29 -07:00
This project is licensed under the Apache License 2.0 - see below for details:
- The Kokoro model weights are licensed under Apache 2.0 (see [model page](https://huggingface.co/hexgrad/Kokoro-82M))
- The FastAPI wrapper code in this repository is licensed under Apache 2.0 to match
- The inference code adapted from StyleTTS2 is MIT licensed

The full Apache 2.0 license text can be found at: https://www.apache.org/licenses/LICENSE-2.0
</details>