
need to re-attempt backoff and yaml imports if the first import attempt fails #820

Merged
merged 3 commits into BerriAI:main
Nov 15, 2023

Conversation

kfsone commented Nov 15, 2023

proxy_server has a try-import-except-pip-import block, but there were a couple of imports missing in the second block.

When running from source for the first time in a clean docker container, one of the imports fails, so the except block runs pip install. The install succeeds, but the yaml module never gets imported.

Also, one thing to be aware of is that some modules don't survive this approach - a package I maintain used the same trick, and it worked most of the time, but on some users' machines / configs Python would remember that a module had been unavailable and wouldn't load it on the second import :(
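For reference, a minimal sketch of the pattern being discussed - this is not the actual proxy_server code, and the pip package names are assumptions; the point is simply that every module from the first block has to be re-imported inside the fallback:

```python
# Sketch of the try-import-except-pip-import pattern (illustrative, not litellm's code).
try:
    import yaml
    import backoff
except ImportError:
    import importlib
    import subprocess
    import sys

    # Install the missing dependencies into the current interpreter's environment.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "pyyaml", "backoff"])

    # Clearing the import-system finder caches can help when Python has already
    # "remembered" that a package directory didn't exist (the caveat noted above).
    importlib.invalidate_caches()

    # Re-attempt *every* import from the first block; the bug fixed here was that
    # yaml and backoff were missing from this second block, so they were never
    # bound even though the install succeeded.
    import yaml
    import backoff
```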

#819

…pt fails. not sure which import is missing from requirements

kfsone (Author) commented Nov 15, 2023

To reproduce the issue:

  • bring up a docker container FROM ollama,
  • git clone the repo,
  • pip install -r requirements.txt,
  • ollama serve && sleep 2 && ollama pull llama2,
  • create ollama.yaml (in the failure case you never actually reach parsing, so an empty file is fine),
  • run litellm --config ollama.yaml --model ollama/llama2 or similar,
    -> fails because yaml is not imported (ergo, it reached the second import block)

krrishdholakia (Contributor) commented

lgtm!

@kfsone are you running a docker container with litellm x ollama or is it:

krrishdholakia merged commit 95f9c67 into BerriAI:main on Nov 15, 2023
1 check passed
kfsone (Author) commented Nov 16, 2023

@krrishdholakia Actually, for reasons, it's a single container with both, using my fork of litellm to try adding a few more of the ollama models (mmm, that vicuna 16k, and mistral).

https://gist.github.com/kfsone/988ebccdefd2caa4a0adc58533392605

(The args at the top of the Dockerfile: I run apt-cacher-ng and a devpi instance on my NAS so I never have to pull a package off the wire twice. :)
