
fix: CLI conveniences (add-on to #674) #675

Merged: 5 commits from cli-conveniences into main, Dec 22, 2023

Conversation

@cpacker (Collaborator) commented Dec 22, 2023

Please describe the purpose of this pull request

Improving the workflow a bit more.

  • Only raise errors about openai or azure keys missing once the user selects openai/azure as an option
    • Do this for both the LLM endpoint step and the embeddings endpoint step
      • e.g. if the user selects localllm and no openai key is specified, don't throw an error until they try to use openai for embeddings
    • For openai, allow the user to provide the key in real-time (if missing)
  • When key-related errors happen in configure, they're raised as ValueErrors
    • Catch the ValueErrors and print them cleanly instead of dumping a full stacktrace
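The deferred-validation and clean-error pattern described above can be sketched as follows. This is a minimal illustration, not MemGPT's actual code: the function names and the return-code convention are hypothetical.

```python
import os
import sys

def get_openai_key() -> str:
    """Return the OpenAI key, raising ValueError only at the moment it is needed."""
    key = os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise ValueError(
            "Missing OPENAI_API_KEY (see https://platform.openai.com/api-keys)"
        )
    return key

def configure(provider: str) -> str:
    # The key check happens only after the user picks openai/azure;
    # selecting a local provider never triggers it.
    if provider in ("openai", "azure"):
        return get_openai_key()
    return ""  # local providers need no key

def main(provider: str) -> int:
    try:
        configure(provider)
    except ValueError as e:
        print(e, file=sys.stderr)  # clean one-line error instead of a traceback
        return 1
    return 0
```

The point of catching ValueError at the top level is that a missing key is a user-fixable configuration problem, so a one-line message is more useful than a stacktrace.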

Tested: example runs of memgpt configure

User does not have openai key set, and tries to do openai LLM

% memgpt configure
? Select LLM inference provider: openai
? Enter your OpenAI API key (starts with 'sk-', see https://platform.openai.com/api-keys): banana
? Override default endpoint: https://api.openai.com/v1
? Select default model (recommended: gpt-4): gpt-4
? Select embedding provider: openai
? Select default preset: memgpt_chat
? Select default persona: sam_pov
? Select default human: basic
? Select storage backend for archival data: local
Saving config to /Users/loaner/.memgpt/config

User does not have openai key set, tries to use local LLM + openai embeddings

 % memgpt configure
? Select LLM inference provider: local
? Select LLM backend (select 'openai' if you have an OpenAI compatible proxy): lmstudio
? Enter default endpoint: http://localhost:1234
? Select default model wrapper (recommended: chatml): chatml
? Select your model's context window (for Mistral 7B models, this is probably 8k / 8192): 8192
? Select embedding provider: openai
? Enter your OpenAI API key (starts with 'sk-', see https://platform.openai.com/api-keys): banana
? Select default preset: memgpt_chat
? Select default persona: sam_pov
? Select default human: basic
? Select storage backend for archival data: local
Saving config to /Users/loaner/.memgpt/config

User has no config, but openai is set in the env vars

% memgpt configure            
? Select LLM inference provider: openai
? Override default endpoint: https://api.openai.com/v1
? Select default model (recommended: gpt-4): gpt-4
? Select embedding provider: openai
? Select default preset: memgpt_chat
? Select default persona: sam_pov
? Select default human: basic
? Select storage backend for archival data: local
Saving config to /Users/loaner/.memgpt/config

User has no azure vars set, tries to use azure LLM (immediate error)

 % memgpt configure            
? Select LLM inference provider: azure
Missing environment variables for Azure (see https://memgpt.readme.io/docs/endpoints#azure-openai). Please set then run `memgpt configure` again.

User has no azure vars set, tries to use local LLM but azure embedding

 % memgpt configure
? Select LLM inference provider: local
? Select LLM backend (select 'openai' if you have an OpenAI compatible proxy): lmstudio
? Enter default endpoint: http://localhost:1234
? Select default model wrapper (recommended: chatml): chatml
? Select your model's context window (for Mistral 7B models, this is probably 8k / 8192): 8192
? Select embedding provider: azure
Missing environment variables for Azure (see https://memgpt.readme.io/docs/endpoints#azure-openai). Please set then run `memgpt configure` again.

User has only azure LLM vars set, tries to use azure LLM + embeddings (expected success, since in get_azure_credentials we default from LLM->embeddings)

% memgpt configure
? Select LLM inference provider: azure
? Select default model (recommended: gpt-4): gpt-4-1106-preview
? Select embedding provider: azure
? Select default preset: memgpt_chat
? Select default persona: sam_pov
? Select default human: basic
? Select storage backend for archival data: local
📖 Saving config to /Users/loaner/.memgpt/config
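The LLM-to-embeddings defaulting mentioned above can be sketched as follows. This is an illustrative assumption, not MemGPT's real `get_azure_credentials`; the `AZURE_OPENAI_*` variable names are placeholders.

```python
import os

def get_azure_credentials() -> dict:
    creds = {
        "azure_key": os.environ.get("AZURE_OPENAI_KEY"),
        "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
        "azure_embedding_endpoint": os.environ.get("AZURE_OPENAI_EMBEDDING_ENDPOINT"),
    }
    # If no embedding-specific endpoint is set, fall back to the LLM endpoint,
    # so a user who set only the LLM vars can still configure azure embeddings.
    if creds["azure_embedding_endpoint"] is None:
        creds["azure_embedding_endpoint"] = creds["azure_endpoint"]
    return creds
```

This explains the asymmetry in the two scenarios: LLM vars alone are enough for both steps, but embedding vars alone are not, because the fallback only runs in the LLM-to-embeddings direction.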

User has only azure embedding vars set, tries to use azure for both LLM + embeddings (immediate error)

% memgpt configure
? Select LLM inference provider: azure
Missing environment variables for Azure (see https://memgpt.readme.io/docs/endpoints#azure-openai). Please set then run `memgpt configure` again.

@cpacker cpacker requested a review from sarahwooders December 22, 2023 05:02
openai_api_key = questionary.text(
    "Enter your OpenAI API key (starts with 'sk-', see https://platform.openai.com/api-keys):"
).ask()
config.save()
@cpacker (Collaborator, Author) commented:

@sarahwooders is this the correct way to save the key back to the file in "real-time" if the user provides it here?
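For context, a minimal sketch of the pattern being asked about: take the key the user just entered, write it onto the config object, then persist. The `Config` class here is a stand-in for MemGPT's real config; only `save()` appears in the snippet above, and `openai_key` is an assumed field name.

```python
class Config:
    def __init__(self):
        self.openai_key = None
        self.saved = False

    def save(self):
        # MemGPT's save() writes ~/.memgpt/config; here we just record the call.
        self.saved = True

def prompt_and_save_key(config, ask):
    """`ask` stands in for questionary.text(...).ask; returns the updated config."""
    key = ask()
    if not key:
        raise ValueError("No OpenAI API key provided")
    # The value must be assigned on the config *before* save(), since save()
    # persists whatever is currently on the object.
    config.openai_key = key
    config.save()
    return config
```

This matches the commit note below ("forgot to override value in config before saving"): calling `config.save()` without first assigning the new key would persist the stale value.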

@cpacker cpacker merged commit 09c7fa7 into main Dec 22, 2023
6 checks passed
@cpacker cpacker deleted the cli-conveniences branch December 22, 2023 18:34
sarahwooders pushed a commit that referenced this pull request Dec 26, 2023
* for openai, check for key and if missing allow user to pass it, for azure, throw error if the key isn't present

* correct prior checking of azure to be more strict, added similar checks at the embedding endpoint config stage

* forgot to override value in config before saving

* clean up the valuerrors from missing keys so that no stacktrace gets printed, make success text green to match others
norton120 pushed a commit to norton120/MemGPT that referenced this pull request Feb 15, 2024
mattzh72 pushed a commit that referenced this pull request Oct 9, 2024