diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 3e19a99..5f9dc4f 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -29,4 +29,4 @@ jobs:
       - name: Run Tests
         run: |
-          uv run pytest api/tests/ ui/tests/ --asyncio-mode=auto --cov=api --cov=ui --cov-report=term-missing
+          uv run pytest api/tests/ --asyncio-mode=auto --cov=api --cov-report=term-missing
diff --git a/README.md b/README.md
index fddea39..fab5be9 100644
--- a/README.md
+++ b/README.md
@@ -25,13 +25,28 @@ The service can be accessed through either the API endpoints or the Gradio web i
 1. Install prerequisites:
     - Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) + [Git](https://git-scm.com/downloads)
-    - Clone and start the service:
+    - Clone the repository:
         ```bash
         git clone https://github.com/remsky/Kokoro-FastAPI.git
         cd Kokoro-FastAPI
+        ```
+
+2. Start the service:
+
+    a. Using Docker Compose (recommended for full setup including UI):
+        ```bash
         docker compose up --build # for GPU
         #docker compose -f docker-compose.cpu.yml up --build # for CPU
         ```
+
+    b. Running the API alone using Docker:
+        ```bash
+        # For CPU version
+        docker run -p 8880:8880 kokoro-fastapi-cpu
+
+        # For GPU version (requires NVIDIA Container Toolkit)
+        docker run --gpus all -p 8880:8880 kokoro-fastapi-gpu
+        ```
 2. Run locally as an OpenAI-Compatible Speech Endpoint
     ```python
     from openai import OpenAI
diff --git a/docker/cpu/docker-compose.yml b/docker/cpu/docker-compose.yml
index f94fbc3..beb2a8f 100644
--- a/docker/cpu/docker-compose.yml
+++ b/docker/cpu/docker-compose.yml
@@ -1,3 +1,4 @@
+name: kokoro-tts
 services:
   kokoro-tts:
     # image: ghcr.io/remsky/kokoro-fastapi-cpu:latest
diff --git a/docker/gpu/docker-compose.yml b/docker/gpu/docker-compose.yml
index 61852cd..c600537 100644
--- a/docker/gpu/docker-compose.yml
+++ b/docker/gpu/docker-compose.yml
@@ -1,3 +1,4 @@
+name: kokoro-tts
 services:
   kokoro-tts:
     build: