---
title: chat-ui
emoji: 🔥
colorFrom: purple
colorTo: purple
sdk: docker
pinned: false
license: apache-2.0
base_path: /chat
app_port: 3000
---
A chat interface using open-source models, e.g. OpenAssistant. It is a SvelteKit app and it powers the HuggingChat app on hf.co/chat.
To run the app locally, install the dependencies and start the dev server:

```bash
npm install
npm run dev
```
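The dev server is the standard SvelteKit/Vite one, so its flags can be passed through npm. For example, assuming the default setup, this opens the app in a new browser tab:

```bash
npm run dev -- --open
```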
The default configuration is in `.env`. Put custom config and secrets in `.env.local`; it will override the values in `.env`.

Check out `.env` to see what needs to be set. Basically, you need to create a `.env.local` with the following contents:
```env
MONGODB_URL=<url to mongo, for example a free MongoDB Atlas sandbox instance>
HF_ACCESS_TOKEN=<your HF access token from https://huggingface.co/settings/tokens>
```
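For illustration, a filled-in `.env.local` might look like the sketch below; the connection string follows the usual MongoDB Atlas `mongodb+srv://` format and the token is a placeholder, not a real credential:

```env
# Hypothetical values — substitute your own Atlas connection string and HF token
MONGODB_URL=mongodb+srv://chat-ui-user:<password>@cluster0.example.mongodb.net/chat-ui?retryWrites=true&w=majority
HF_ACCESS_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```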
To run the app in a Hugging Face Space instead, create a `DOTENV_LOCAL` secret in your Space with the following contents:
```env
MONGODB_URL=<url to mongo, for example a free MongoDB Atlas sandbox instance>
HF_ACCESS_TOKEN=<your HF access token from https://huggingface.co/settings/tokens>
```
Here, the contents in `<...>` are replaced by your MongoDB URL and your HF access token.
Both of the examples above use the HF Inference API or HF Endpoints API.

If you want to run the model locally, you first need to run this inference server locally: https://github.com/huggingface/text-generation-inference
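As a minimal sketch of what launching that server might look like with Docker (the image tag, GPU flags, and port mapping are assumptions; see the text-generation-inference repo for the authoritative instructions), mapping port 8080 on the host so it matches the endpoint configured below:

```bash
# Hypothetical example: serve a model locally with text-generation-inference on port 8080
docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id <model id to serve>
```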
Then add this to your `.env.local`:
```env
MODELS=`[{"name": "...", "endpoints": [{"url": "127.0.0.1:8080/generate_stream"}]}]`
```
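As a filled-in sketch (the model name below is purely illustrative; use whichever model your local server is actually serving), the entry could look like this, with the endpoint URL pointing at the text-generation-inference server started above:

```env
# Hypothetical example — "name" identifies the model, "url" is your local TGI endpoint
MODELS=`[{"name": "OpenAssistant/oasst-sft-1-pythia-12b", "endpoints": [{"url": "127.0.0.1:8080/generate_stream"}]}]`
```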
To create a production version of your app:
```bash
npm run build
```
You can preview the production build with `npm run preview`.
To deploy your app, you may need to install an [adapter](https://kit.svelte.dev/docs/adapters) for your target environment.
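For instance, a minimal sketch assuming the Node adapter (the choice of `@sveltejs/adapter-node` is an assumption; pick whichever adapter matches where you deploy):

```bash
npm i -D @sveltejs/adapter-node
```

```js
// svelte.config.js — minimal sketch assuming @sveltejs/adapter-node
import adapter from '@sveltejs/adapter-node';

/** @type {import('@sveltejs/kit').Config} */
const config = {
	kit: {
		adapter: adapter()
	}
};

export default config;
```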