Mirror of https://github.com/remsky/Kokoro-FastAPI.git, synced 2025-04-13 09:39:17 +00:00
Merge branch 'master' of https://github.com/remsky/Kokoro-FastAPI
This commit is contained in commit e1dc8e5abc.
1 changed file with 50 additions and 2 deletions.
README.md (52 lines changed)
````diff
@@ -5,7 +5,7 @@
 # Kokoro TTS API
 []()
-[](https://huggingface.co/hexgrad/Kokoro-82M/tree/c3b0d86e2a980e027ef71c28819ea02e351c2667) [](https://huggingface.co/spaces/Remsky/Kokoro-TTS-Zero)
+[](https://huggingface.co/hexgrad/Kokoro-82M/tree/c3b0d86e2a980e027ef71c28819ea02e351c2667) [](https://huggingface.co/spaces/Remsky/Kokoro-TTS-Zero) [](https://www.buymeacoffee.com/remsky)
 
 Dockerized FastAPI wrapper for [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech model
 
 - OpenAI-compatible Speech endpoint, with inline voice combination functionality
````
````diff
@@ -29,7 +29,8 @@ The service can be accessed through either the API endpoints or the Gradio web i
 ```bash
 git clone https://github.com/remsky/Kokoro-FastAPI.git
 cd Kokoro-FastAPI
-docker compose up --build
+docker compose up --build # for GPU
+#docker compose -f docker-compose.cpu.yml up --build # for CPU
 ```
 2. Run locally as an OpenAI-Compatible Speech Endpoint
 ```python
````
````diff
@@ -317,6 +318,53 @@ with open("speech.wav", "wb") as f:
 See `examples/phoneme_examples/generate_phonemes.py` for a sample script.
 </details>
+
+## Known Issues
+
+<details>
+<summary>Linux GPU Permissions</summary>
+
+Some Linux users may encounter GPU permission issues when running as non-root.
+Can't guarantee anything, but here are some common solutions; consider your security requirements carefully.
+
+### Option 1: Container Groups (Likely the best option)
+```yaml
+services:
+  kokoro-tts:
+    # ... existing config ...
+    group_add:
+      - "video"
+      - "render"
+```
+
+### Option 2: Host System Groups
+```yaml
+services:
+  kokoro-tts:
+    # ... existing config ...
+    user: "${UID}:${GID}"
+    group_add:
+      - "video"
+```
+Note: This may require adding the host user to the relevant groups (`sudo usermod -aG docker,video $USER`) and restarting the system.
+
+### Option 3: Device Permissions (Use with caution)
+```yaml
+services:
+  kokoro-tts:
+    # ... existing config ...
+    devices:
+      - /dev/nvidia0:/dev/nvidia0
+      - /dev/nvidiactl:/dev/nvidiactl
+      - /dev/nvidia-uvm:/dev/nvidia-uvm
+```
+⚠️ Warning: Reduces system security. Use only in development environments.
+
+Prerequisites: NVIDIA GPU, drivers, and the NVIDIA Container Toolkit must be properly configured.
+
+Visit the [NVIDIA Container Toolkit installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) for more detailed information.
+
+</details>
+
+## Model and License
+
+<details open>
````
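A note on the `group_add` entries in the Known Issues hunk: Compose accepts numeric GIDs as well as group names, which helps when the container image does not define `video` or `render` in its own `/etc/group`. A minimal, hypothetical helper (not part of this repo) to look up host GIDs with Python's Unix-only `grp` module:

```python
import grp

def resolve_gids(names):
    """Map each group name to its numeric GID on this host, or None if absent."""
    out = {}
    for name in names:
        try:
            out[name] = grp.getgrnam(name).gr_gid
        except KeyError:
            out[name] = None  # group not defined on this host
    return out

gids = resolve_gids(["video", "render"])
# Any numeric values found could be used in docker-compose.yml,
# e.g. group_add: ["44"] instead of group_add: ["video"]
```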