Minimal guide to quickly set up a Large Language Model (LLM) interface similar to ChatGPT on your local machine.
These are the sites you won't have to visit because everything is going to work perfectly. But if it doesn't, go to the original repositories/websites (this is all taken from there):
Ollama provides an environment for running LLMs.
Open-WebUI is an open source ChatGPT-like web interface that can be used with LLMs like Llama.
This takes just a few minutes thanks to the power of open-source, docker and really clever people!
Note: Obviously, you should already have Docker installed.
You can get this done with a single command that installs just one container with both Ollama and Open WebUI:
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
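If you go this route, the web interface should be reachable at http://localhost:3000/ (the host side of the -p 3000:8080 mapping), and you can check that the container is up with, for example:
docker ps --filter name=open-webui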
But I like the separate containers better, and this is also for my own keeping, so 😉
docker pull ollama/ollama
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
- Go to: http://localhost:11434/
You should see a message saying, "Ollama is running."
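If you prefer checking from a terminal, the same thing can be verified with curl (assuming it is installed on your host):
curl http://localhost:11434/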
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- Go to: http://localhost:3000/ (the host port from the -p 3000:8080 mapping)
You will have to make an account (this is stored locally so you don't even really need to input a real email).
The Ollama container comes with no models. Run llama3 once so that it gets downloaded:
docker exec -it ollama ollama run llama3
Then just type /bye and go back to the browser.
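If you'd rather download the model without dropping into the interactive chat, pulling it directly should also work:
docker exec -it ollama ollama pull llama3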
That's no fun. If navigating to http://localhost:3000/ in your browser throws a network error, try recreating the container with host networking instead (it will then be reachable at http://localhost:8080/):
docker stop open-webui
docker rm open-webui
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
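If the interface still doesn't come up, the container logs are usually the quickest way to see what went wrong, for example:
docker logs open-webui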
After creating the containers with docker run, they are started automatically and run in the background. You can stop them at any time using docker stop and the name we passed with --name:
docker stop ollama
docker stop open-webui
And start them again with:
docker start ollama
docker start open-webui
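To check which containers exist and whether they are currently running, you can list them, for example with:
docker ps -a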
Know that you can open a terminal within the Ollama container and run Llama:
docker exec -it ollama ollama run llama3
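You can also list the models that have already been downloaded into the Ollama container:
docker exec -it ollama ollama list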
If your phone is on the same network as the computer running Open WebUI, you can access it from your phone's browser:
- Go to:
http://<OpenWebUI_ip_address>:8080 (use port 3000 instead if you kept the -p 3000:8080 port mapping)
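One way to find that IP address, assuming the host runs Linux, is:
hostname -I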
You can also disable authentication in the Open WebUI interface by adding the following flag to the docker run command for open-webui:
-e WEBUI_AUTH=False
Warning: with authentication disabled, your chats and the Open WebUI settings will be accessible to anyone who can reach your host.
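As a sketch, combining this flag with the host-network command from above would look something like:
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 -e WEBUI_AUTH=False --name open-webui --restart always ghcr.io/open-webui/open-webui:main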