The UI of chat applications is becoming increasingly complex, often encompassing an entire front-end project along with deployment solutions.
This repository builds the entire front-end UI in a single HTML file, taking a minimalist approach to creating a chatbot.
By simplifying the structure and key functions, developers can quickly set up and experiment with a functional chatbot, following a slimmed-down project design philosophy.
- Supports OpenAI-format requests, enabling compatibility with various backends such as HuggingFace Text Generation Inference (TGI), vLLM, etc.
- Automatically supports multiple response formats without additional configuration, including standard OpenAI response formats, Cloudflare AI response formats, and plain text responses
- Supports various backend endpoints through custom configurations, providing any project with a universal frontend chatbot
- Supports downloading chat history, interrupting the current generation, and repeating the previous generation to quickly test backend inference capabilities
- Supports MCP (Model Context Protocol) by acting as a renderer and communicating with the backend main process via IPC
- Supports queries with image inputs via multimodal vision models
- Supports toggling between original format and Markdown format display
- Supports internationalization and localization (i18n)
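The OpenAI-format request behind the first feature can be sketched as follows. The payload shape is the standard OpenAI chat-completions format that TGI and vLLM also accept; the endpoint URL and model name are placeholders, not values fixed by this project:

```python
import json

# Placeholder endpoint: swap in any OpenAI-compatible backend,
# e.g. a local vLLM or TGI server exposing /v1/chat/completions.
endpoint = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "Llama-3.3-70B-Instruct",  # model name as exposed by the backend
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,  # stream tokens back as they are generated
}

# The UI POSTs this JSON body to the configured endpoint.
body = json.dumps(payload)
```

Because every listed backend speaks this same request shape, only the endpoint in the configuration needs to change.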
Option 1: Chat with the AIQL demo
The demo uses Llama-3.3-70B-Instruct by default
Multimodal image upload is only supported for vision models
MCP tool calls require a desktop backend and an LLM that supports the OpenAI format; see Chat-MCP
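For image uploads with vision models, the image is embedded alongside the text in a single user message. The sketch below uses the standard OpenAI vision message shape; whether the UI encodes uploads exactly this way (as a base64 data URL) is an assumption, and the image bytes here are dummy placeholders:

```python
import base64

# Dummy bytes standing in for a real image file read from disk.
image_bytes = b"\x89PNG\r\n\x1a\n"
image_b64 = base64.b64encode(image_bytes).decode("ascii")

# OpenAI-style multimodal message: text and image parts in one user turn.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{image_b64}"},
        },
    ],
}
```

A backend that is not a vision model will reject or ignore the image part, which is why the upload button only applies to vision models.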
Option 2: Download Index and open it locally (recommended)
Option 3: Download Index and deploy it with Python
```shell
cd /path/to/your/directory
python3 -m http.server 8000
```
Then open your browser and go to http://localhost:8000
Option 4: Fork this repo and link it to Cloudflare Pages
Demo: https://www2.aiql.com
Option 5: Deploy your own Chatbot by Docker
```shell
docker run -p 8080:8080 -d aiql/chat-ui
```
Option 6: Deploy within Huggingface
Don't forget to add `app_port: 8080` in `README.md`
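On a Docker-based Hugging Face Space, `app_port` lives in the YAML front matter at the top of the Space's `README.md`. A minimal sketch (the title is a placeholder, not a required value):

```yaml
---
title: Chat UI          # placeholder Space title
sdk: docker             # the Space runs the Docker image
app_port: 8080          # must match the port the container exposes
---
```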
Option 7: Deploy within K8s
Demo: Chat-MCP
By default, the chatbot uses the same API format as OpenAI ChatGPT.
You can insert your OpenAI API key and change the endpoint in the configuration to use an API from any other vendor.
You can also download the config template from the example and insert your API key, then use it for quick configuration.
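Filling in a downloaded template can be scripted. The field names below (`endpoint`, `apiKey`, `model`) are hypothetical placeholders for illustration; the real field names come from the example template shipped with the project:

```python
import json

# Hypothetical config template -- actual keys are defined by the
# project's example template, not by this sketch.
template = {
    "endpoint": "https://api.openai.com/v1/chat/completions",
    "apiKey": "",
    "model": "gpt-4o",
}

# Insert your own key and point the endpoint at another vendor.
config = dict(template)
config["apiKey"] = "sk-your-key-here"
config["endpoint"] = "https://example-vendor.com/v1/chat/completions"

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

The resulting `config.json` can then be loaded in the interface for quick configuration.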
If you're experiencing issues opening the page and a simple refresh doesn't resolve them, take the following steps:
- Click the Refresh icon at the upper right of Interface Configuration
- Click the hidden button on the right side of the index page
- Click the Reset All Config icon
- Right-click the browser page, open the developer tools, and go to the Network section.
- Right-click the requests table and clear your browser's cache and cookies to ensure you have the latest version of the page.
- Additionally, inspect the browser's Network section to see which resources are failing to load due to your location. This will provide you with more specific information about the issue.
- Introduce the image as a sidecar container

  ```yaml
  spec:
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
          - name: chat-ui
            image: aiql/chat-ui
            ports:
              - containerPort: 8080
  ```
- Add a service

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: chat-ui-service
  spec:
    selector:
      app: my-app
    ports:
      - protocol: TCP
        port: 8080
        targetPort: 8080
    type: LoadBalancer
  ```
- You can access the port directly or add an ingress

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app-ingress
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$1
  spec:
    rules:
      - host: chat-ui.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: chat-ui-service
                  port:
                    number: 8080
  ```