Hello, I really like LocalAI. However, I ran into some confusion while using it. LocalAI is composed of different backends, and each backend has its own parameters and even its own usage. When I want to customize the settings further (for example, llama.cpp exposes a much richer set of parameters), how should I set that up? Also, exllama's debug messages don't seem to appear.
My idea is to understand how the parameters in the model config are translated into backend options, so that I can then set them according to the documentation of llama.cpp, vllm, and exllama.
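For context, here is my mental model so far, as a sketch based on the model config examples in the LocalAI docs (the model name, file name, and values below are placeholders, not a working setup): a YAML file selects a backend and passes it options.

```yaml
# Sketch of a LocalAI model config; field names follow the
# documented examples, values are placeholders.
name: gpt-3.5-turbo        # name the model is served under via the API
backend: llama             # which backend LocalAI should launch
context_size: 2048
threads: 4
parameters:
  model: ggml-model.bin    # model file inside the models directory
  temperature: 0.7
  top_k: 80
  top_p: 0.7
```

What I can't find is where in the code each of these keys is mapped onto the backend's own options (e.g. llama.cpp's flags), and which backend-specific parameters can be passed through at all.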
Therefore, I think I need a deeper understanding of LocalAI's structure, but I am a newbie. Can anyone suggest a suitable starting point?