Is it possible to run WrenAI alongside llama3 in a self-hosted environment? If yes, where can we find the documentation for this? I was unable to find anything relevant in this regard.
@hbanand Thanks for reaching out! Please check out this comment for further details: #277 (comment)

In brief, there might be issues if you use Ollama's models, since LLM output does not always conform to valid JSON. I've already added Ollama support in this branch: https://github.com/Canner/WrenAI/tree/feature/ai-service/add-ollama. You're welcome to check it out! I also think we should customize prompts depending on which model we choose; there is still room for improvement, and I'd like to hear your thoughts. A minimal sketch of calling a local llama3 model through Ollama is shown after this reply.

For the broken JSON output issue, I'm thinking of adding a component to help solve it: https://github.com/noamgat/lm-format-enforcer (see the second sketch below). I'd also like to hear your thoughts on this.

To try running Ollama models, please see the demo section of the README in wren-ai-service. Feel free to ask me if you run into issues, thank you :)
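To make the self-hosted setup concrete, here is a minimal sketch (not from the WrenAI codebase) of querying a local llama3 model through Ollama's REST API and asking it for JSON output. It assumes Ollama is serving on its default port and that `ollama pull llama3` has already been run; the prompt and the expected keys are illustrative placeholders.

```python
# Minimal sketch: ask a local llama3 model (served by Ollama) for JSON output.
# Assumes Ollama is listening on its default port (11434) and that
# `ollama pull llama3` has already been run. Prompt/keys are illustrative.
import json

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

payload = {
    "model": "llama3",
    "prompt": (
        "Return a JSON object with keys 'sql' and 'explanation' answering: "
        "how many orders were placed last month?"
    ),
    "format": "json",  # ask Ollama to constrain decoding to valid JSON
    "stream": False,   # one complete response instead of a token stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()

raw = resp.json()["response"]  # the model's generated text
try:
    print(json.loads(raw))
except json.JSONDecodeError:
    # Even with format="json", smaller models can still produce output that
    # fails to parse, which is exactly the failure mode discussed above.
    print("Model returned non-JSON output:", raw)
```

Even with Ollama's `format: "json"` option, conformance is not guaranteed for every model, which is why a decoding-time enforcer is attractive.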
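And here is a hedged sketch of the idea behind lm-format-enforcer: filtering the candidate tokens at each decoding step so the output is guaranteed to parse against a JSON schema. It follows the library's HuggingFace transformers integration; the model ID and schema are placeholder assumptions, and the API may have evolved, so check the repo's README for current usage.

```python
# Sketch of schema-constrained decoding with lm-format-enforcer, following
# its HuggingFace transformers integration. Model ID and schema are
# placeholders; see https://github.com/noamgat/lm-format-enforcer for
# the current API.
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import (
    build_transformers_prefix_allowed_tokens_fn,
)
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Illustrative schema for the shape we want the model's answer to take.
schema = {
    "type": "object",
    "properties": {
        "sql": {"type": "string"},
        "explanation": {"type": "string"},
    },
    "required": ["sql"],
}

parser = JsonSchemaParser(schema)
prefix_fn = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)

prompt = "Generate a SQL query counting last month's orders. Answer in JSON."
inputs = tokenizer(prompt, return_tensors="pt")

# prefix_allowed_tokens_fn masks out tokens that would break the schema at
# every decoding step, so the generated text is guaranteed to parse.
output_ids = model.generate(
    **inputs, max_new_tokens=200, prefix_allowed_tokens_fn=prefix_fn
)
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```

Note that this technique needs direct control over the decoding loop (e.g. a transformers, vLLM, or llama.cpp backend), so it would not plug into Ollama's HTTP API as-is.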