Replies: 14 comments
-
What do you want to work on?
-
What is the difference between them?
-
I sort of want to do what the web version does.
-
The docs explain how to set up and start the backend on your local PC: create a venv, start the backend, then check the API documentation on your machine at localhost:8080/docs.
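The steps above might look roughly like this (the requirements file name and the `main:app` server entrypoint are assumptions for illustration; check the project's README for the exact commands):

```shell
# Create and activate a virtual environment
python3 -m venv .venv
. .venv/bin/activate

# Install the backend dependencies (file name assumed)
pip install -r requirements.txt

# Start the backend on port 8080 (entrypoint assumed)
uvicorn main:app --port 8080

# Then open http://localhost:8080/docs for the interactive API docs
```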
-
A small remark: unless you have a very powerful GPU, you will only be able to run the website locally, not the chat (i.e., inference).
-
"Backend" means working on the Open Assistant web backend.
-
How much VRAM do I need?
-
What is an LLM?
-
The chat itself.
-
How much VRAM do I need to run it?
-
For the 70B model you'll need 48 GB of VRAM. If you want to run models locally, I recommend https://gpt4all.io/
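As a rough back-of-the-envelope check (the 2 bytes/parameter and 0.5 bytes/parameter figures below are the standard fp16 and 4-bit quantization sizes, not numbers from this thread; real usage adds overhead for the KV cache and activations):

```shell
# Approximate weight memory for a 70B-parameter model
PARAMS_B=70                          # billions of parameters
echo "fp16:  $((PARAMS_B * 2)) GB"   # 2 bytes per parameter
echo "4-bit: $((PARAMS_B / 2)) GB"   # 0.5 bytes per parameter
```

So a 4-bit quantized 70B model fits in roughly 35 GB plus overhead, which is consistent with a 48 GB VRAM recommendation.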
-
Is it censored or uncensored?
-
Can it be just one file?
-
I have no idea what I should do to get the local version up and running after installing it on my PC.