Features:
- WebSocket chatbot server
- Docker-friendly
- Sample web and CLI clients provided
- Pluggable LLM backends (currently OpenAI API with ChatGPT or GPT-4)
- Out-of-the-box support for ChatGPT plugins
- ChatGPT plugin auth methods supported: `none`, `user_http`, `service_http`
- Request validation for the plugins according to their OpenAPI specifications
- Prompt customization for fine-grained instructions
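The OpenAPI-based request validation mentioned above can be pictured roughly as follows. This is a simplified stand-in, not the project's actual validator, and the spec fragment is hypothetical:

```python
# Simplified stand-in for OpenAPI request validation: checks that required
# query parameters are present and match their declared primitive types.
# The spec fragment below is hypothetical, not taken from a real plugin.

OPENAPI_TYPES = {"string": str, "integer": int, "number": (int, float), "boolean": bool}

def validate_params(params: dict, spec_params: list) -> list:
    """Return a list of validation error messages (empty means valid)."""
    errors = []
    for p in spec_params:
        name = p["name"]
        if name not in params:
            if p.get("required", False):
                errors.append(f"missing required parameter: {name}")
            continue
        declared = p["schema"]["type"]
        if not isinstance(params[name], OPENAPI_TYPES[declared]):
            errors.append(f"parameter {name} should be {declared}")
    return errors

spec = [
    {"name": "q", "required": True, "schema": {"type": "string"}},
    {"name": "limit", "schema": {"type": "integer"}},
]
print(validate_params({"q": "tides"}, spec))  # valid: empty list
print(validate_params({"limit": "5"}, spec))  # missing "q", wrong type for "limit"
```

A real implementation would also cover path/header parameters, request bodies, and nested schemas; the point here is only the shape of the check.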
Running locally:
- Install Python 3.10, if not already installed.
- Clone this repository.
- Navigate to the cloned repository directory:
cd horace
- Create a new virtual environment:
python3 -m venv ./venv
- Activate the virtual environment:
source ./venv/bin/activate
- Install project requirements:
pip3 install -r requirements.txt
- Navigate to the `app` directory:
cd app
- Start the server:
OPENAI_API_KEY=openai-api-key python3 main.py
(replace `openai-api-key` with your OpenAI API key - get it here)
- Launch a client:
- For the CLI client, run this in another terminal window:
python3 horace-cli.py
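The repository ships its own CLI client; purely as an illustration of what talking to a WebSocket chatbot server involves, a minimal chat loop might look like the sketch below. The endpoint URL and the plain-text message format are assumptions, not the project's actual protocol.

```python
# Bare-bones sketch of a WebSocket chat loop, assuming the server listens on
# ws://localhost:8001 (the port published in the Docker instructions) and
# exchanges plain-text messages; horace's real protocol may differ.
import asyncio

SERVER_URL = "ws://localhost:8001"  # assumed endpoint

async def chat() -> None:
    # Imported lazily so the sketch can be read without the dependency installed.
    import websockets  # third-party: pip install websockets

    async with websockets.connect(SERVER_URL) as ws:
        while True:
            prompt = input("> ")
            if not prompt:
                break
            await ws.send(prompt)
            print(await ws.recv())

if __name__ == "__main__":
    asyncio.run(chat())
```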
Running with Docker:
- Clone this repository.
- Navigate to the cloned repository directory:
cd horace
- Build the Docker image:
docker build -t horace:latest .
- Start the server:
docker run --rm \
-e OPENAI_API_KEY=openai-api-key \
-p 8001:8001 \
--name horace \
horace:latest
- Launch a client:
- For the CLI client, run this in another terminal window:
docker exec -it horace python3 /app/horace-cli.py
- For the web client, open `client-demo/index.html` in your browser (double-click it in Explorer/Finder, etc.).
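For convenience, the `docker run` invocation above can also be expressed as a Compose file. This is a sketch mirroring the flags shown, not a file shipped with the repository:

```yaml
# Hypothetical docker-compose.yml equivalent of the docker run command above
services:
  horace:
    image: horace:latest
    build: .
    container_name: horace
    environment:
      - OPENAI_API_KEY=openai-api-key  # replace with your OpenAI API key
    ports:
      - "8001:8001"
```

With this in place, `docker compose up` builds and starts the server.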
Tests have not yet been updated since forking from GRACE; this remains to be fixed.