
move RuntimeError if no provider is given #1053

Open
amaloney wants to merge 1 commit into main
Conversation

amaloney (Collaborator) commented Feb 6, 2025

This moves the RuntimeError closer to where the provider is defined from the args object. I found that error message made a lot more sense than the one raised from the try-block when I tried to run lumen-ai on the command line with no provider set.

resolves: #1052
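
For context, a minimal sketch of the move being described, assuming the provider comes off an argparse-style args object and that LLM_PROVIDERS is the mapping of known providers (names are illustrative, not the exact lumen/command/ai.py code):

    provider = getattr(args, "provider", None)
    # Raise as soon as the provider is taken from the args object, instead of
    # later inside the try-block, so the failure points at the actual problem.
    if not provider or provider not in LLM_PROVIDERS:
        raise RuntimeError(
            f"Could not find LLM Provider {provider!r}, "
            f"valid providers include: {list(LLM_PROVIDERS)}."
        )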

amaloney requested a review from ahuang11 on Feb 6, 2025, 20:22
amaloney self-assigned this on Feb 6, 2025

codecov bot commented Feb 6, 2025

Codecov Report

Attention: Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.

Project coverage is 57.62%. Comparing base (c2404e1) to head (6d76945).

Files with missing lines   Patch %   Lines
lumen/command/ai.py        0.00%     2 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1053   +/-   ##
=======================================
  Coverage   57.62%   57.62%           
=======================================
  Files         109      109           
  Lines       14228    14228           
=======================================
  Hits         8199     8199           
  Misses       6029     6029           


ahuang11 (Contributor) commented Feb 7, 2025

Thanks for pointing that out.

The reason the check is below is detect_provider():

        if llm_model_url and provider and provider != "llama":
            raise ValueError(
                f"Cannot specify both --llm-model-url and --provider {provider!r}. "
                f"Use --llm-model-url to load a model from HuggingFace."
            )
        elif llm_model_url:
            provider = "llama"
        elif not provider:
            provider = LLMConfig.detect_provider()

I think you can just replace this with the more helpful message?

                f"Could not find LLM Provider {provider!r}, valid providers include: {list(LLM_PROVIDERS)}."
            )
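
Put together, that would keep the detect_provider() fallback and raise the clearer message only after detection has had a chance to run, roughly like this (a sketch, assuming LLMConfig.detect_provider() returns None when nothing can be detected):

        if llm_model_url and provider and provider != "llama":
            raise ValueError(
                f"Cannot specify both --llm-model-url and --provider {provider!r}. "
                f"Use --llm-model-url to load a model from HuggingFace."
            )
        elif llm_model_url:
            provider = "llama"
        elif not provider:
            # Fall back to auto-detection, e.g. from available API keys.
            provider = LLMConfig.detect_provider()

        # Only now raise, with the more helpful message, if nothing resolved.
        if not provider or provider not in LLM_PROVIDERS:
            raise RuntimeError(
                f"Could not find LLM Provider {provider!r}, "
                f"valid providers include: {list(LLM_PROVIDERS)}."
            )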
