Pre-loaded configurations #882
Given the LLM landscape is rapidly changing, providing a good default set of options should help reduce decision fatigue when getting started.

Improve initialization flow during first run:
- Set Google, Anthropic chat models too. Previously only Offline, OpenAI chat models could be set during init
- Add multiple chat models for each LLM provider. Interactively set a comma-separated list of models for each provider
- Auto add default chat models for each provider in non-interactive mode if the {OPENAI,GEMINI,ANTHROPIC}_API_KEY env var is set
- Do not ask for max_tokens, tokenizer for offline models during initialization. Use better defaults inferred in code instead
- Explicitly set the default chat model to use. If unset, it implicitly defaults to the first chat model. Make it explicit to reduce this confusion

Resolves #882
## Improve
- Intelligently initialize a decent default set of chat model options
- Create non-interactive mode. Auto set default server configuration on first run via Docker

## Fix
- Make the RapidOCR dependency optional, as its flaky requirements were causing Docker build failures
- Set the default OpenAI text-to-image model correctly during initialization

## Details
Improve initialization flow during first run to remove the need to configure Khoj:
- Set Google, Anthropic chat models too. Previously only Offline, OpenAI chat models could be set during init
- Add multiple chat models for each LLM provider. Interactively set a comma-separated list of models for each provider
- Auto add default chat models for each provider in non-interactive mode if the `{OPENAI,GEMINI,ANTHROPIC}_API_KEY` env var is set
  - Used when the server is run via Docker, as user input cannot be processed to configure the server during first run
- Do not ask for `max_tokens`, `tokenizer` for offline models during initialization. Use better defaults inferred in code instead
- Explicitly set the default chat model to use. If unset, it implicitly defaults to the first chat model. Make it explicit to reduce this confusion

Resolves #882
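For illustration, here is a minimal sketch of how this env-var-driven, non-interactive initialization could look. The helper name, default model lists, and overall flow are assumptions for clarity, not Khoj's actual implementation:

```python
import os

# Illustrative default model lists; the real defaults live in Khoj's init code.
DEFAULT_CHAT_MODELS = {
    "openai": ["gpt-4o-mini", "gpt-4o"],
    "google": ["gemini-1.5-flash", "gemini-1.5-pro"],
    "anthropic": ["claude-3-5-sonnet-20240620", "claude-3-haiku-20240307"],
}

# Env var whose presence enables each provider, per the notes above.
PROVIDER_API_KEYS = {
    "openai": "OPENAI_API_KEY",
    "google": "GEMINI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def configure_chat_models(interactive: bool) -> list[tuple[str, str]]:
    """Collect (provider, model) pairs to register during first run."""
    configured: list[tuple[str, str]] = []
    for provider, env_var in PROVIDER_API_KEYS.items():
        if not os.getenv(env_var):
            continue  # Provider has no API key set, so skip it
        if interactive:
            # Interactively accept a comma-separated list of models.
            raw = input(f"Enter {provider} chat models (comma-separated): ")
            models = [m.strip() for m in raw.split(",") if m.strip()]
        else:
            # Non-interactive first run (e.g. via Docker): use defaults.
            models = DEFAULT_CHAT_MODELS[provider]
        configured.extend((provider, model) for model in models)
    # The caller would then explicitly mark one entry (e.g. the first) as the
    # default chat model, rather than leaving the choice implicit.
    return configured
```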
Hey @sm18lr88, thanks for opening this issue and pointing out a good area for improvement! I made some changes to address this:
We should add a decent set of default embedding models as well for folks to get started (e.g. include a decent default multi-lingual embedding model). This can be done as a follow-up. Do try out the newer Khoj Docker setup. Hopefully it goes better this time 🤞🏽
Not sure if it is easier now.
+1. Getting the same error.
Whoops, thanks for the notice! Just made the newly released code sandbox Docker image at ghcr.io/khoj-ai/terrarium public.
The setup for the locally hosted Docker version is still complicated (it used to be way simpler in the earlier version of Khoj), and the instructions are not very clear on how to load various models, embedding engines, etc.
Consider offering pre-loaded choices for LLMs and embedding models from a drop-down menu.
For example, a drop-down menu in the chat model settings where users can simply select the LLM provider and model, then enter their API key.
Likewise for embeddings: a simple drop-down list of the various models users can choose from, rather than having to type the model name in.
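To illustrate, a small hypothetical Django form along these lines could back such a menu (all field names and model choices here are made up for illustration, not Khoj's actual schema):

```python
from django import forms

# Illustrative pre-loaded choices; the actual provider and model lists would differ.
CHAT_PROVIDER_CHOICES = [
    ("openai", "OpenAI"),
    ("google", "Google"),
    ("anthropic", "Anthropic"),
    ("offline", "Offline"),
]

EMBEDDING_MODEL_CHOICES = [
    ("sentence-transformers/all-MiniLM-L6-v2", "all-MiniLM-L6-v2 (English)"),
    ("intfloat/multilingual-e5-small", "multilingual-e5-small (Multi-lingual)"),
]

class ModelSetupForm(forms.Form):
    # Drop-downs instead of free-text fields for provider and embedding model.
    provider = forms.ChoiceField(choices=CHAT_PROVIDER_CHOICES)
    embedding_model = forms.ChoiceField(choices=EMBEDDING_MODEL_CHOICES)
    # API key entered once for the selected provider.
    api_key = forms.CharField(widget=forms.PasswordInput, required=False)
```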
I keep trying this app now and then, but I only ever managed to get the initial version to work.
Thanks.