On my server I can run a CPU LLM with llama.cpp, and with oobabooga I can run one or two more models on the GPUs, each exposing an OpenAI-compatible API on its own port,
e.g. :5000, :5001, :5002.
Would it be possible to add several API bots in the configuration? Today I use the ChatGPT bots with an API configuration and simply point them at a local API instead. The same setup can also work with GPT4All running locally.
More API bots 👍 (a rough client-side sketch follows below):
- BOT-A: http://Local-IP:Port/v1
- BOT-B: http://Local-IP:Port/v1
- BOT-C: http://Local-IP:Port/v1
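For illustration, here is a minimal client-side sketch of how such bot entries could be used. It assumes hypothetical local endpoints on ports 5000, 5001, and 5002 and placeholder model names; the `/v1/chat/completions` route is the OpenAI-compatible endpoint that llama.cpp's server, oobabooga's OpenAI extension, and GPT4All's local server generally expose, but details may differ per version.

```python
import requests

# Hypothetical mapping of bot names to local OpenAI-compatible base URLs.
# Hosts, ports, and model names are placeholders; adjust to your own setup.
BOTS = {
    "BOT-A": {"base_url": "http://127.0.0.1:5000/v1", "model": "cpu-llama-cpp-model"},
    "BOT-B": {"base_url": "http://127.0.0.1:5001/v1", "model": "gpu-model-1"},
    "BOT-C": {"base_url": "http://127.0.0.1:5002/v1", "model": "gpu-model-2"},
}

def ask(bot_name: str, prompt: str) -> str:
    """Send a chat completion request to the selected local bot."""
    bot = BOTS[bot_name]
    resp = requests.post(
        f"{bot['base_url']}/chat/completions",
        json={
            "model": bot["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
        # Local servers typically ignore the key, but the header keeps
        # OpenAI-style clients and proxies happy.
        headers={"Authorization": "Bearer not-needed"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("BOT-A", "Hello from the local API bot!"))
```

The point is simply that each bot in the configuration would only need its own base URL (and optionally a model name and API key field), so the same ChatGPT-style bot config could be reused for any number of local endpoints.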