422 Unprocessable Entity #32
Hi, can you please let us know which server and model you are using? E.g., LLaMA-3 on text-generation-inference.
One thing to check is which port you are serving on. If that doesn't help, you can enable LiteLLM's verbose logging (https://github.com/stanford-oval/WikiChat/blob/main/llm_config.yaml#L17) and paste the full log here to help us troubleshoot.
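As a hedged sketch (the repo's own toggle is the linked line in llm_config.yaml, which is authoritative): recent LiteLLM versions also read a `LITELLM_LOG` environment variable, so debug output can be enabled in the shell before launching the app:

```shell
# Sketch, assuming a LiteLLM version that honors LITELLM_LOG;
# older versions use litellm.set_verbose = True in Python instead.
export LITELLM_LOG=DEBUG
```

With this set, LiteLLM prints the full request it sends to the backend, which usually makes the cause of a 422 visible.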
I use vLLM to deploy a local LLM. How should I modify the "local: huggingface/local" field in the llm_config.yaml file? I tried changing it to the name set when vLLM was deployed, but it reported an error that the model does not exist. If I don't modify it, Hugging Face reports an error.
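One configuration that may be worth trying, as a hedged sketch (this is LiteLLM's general convention for OpenAI-compatible backends, not a verified schema for WikiChat's llm_config.yaml; `my-served-model` and the port are placeholders for your own vLLM settings):

```yaml
# vLLM exposes an OpenAI-compatible server at /v1, which LiteLLM can
# reach by prefixing the model name with "openai/" and pointing
# api_base at it. "my-served-model" must match the name vLLM serves
# (e.g. the value passed via --served-model-name).
local: openai/my-served-model
api_base: http://localhost:8000/v1
api_key: EMPTY  # vLLM accepts any placeholder unless --api-key was set
```

If the model-does-not-exist error persists, comparing the name here against the output of vLLM's `/v1/models` endpoint is a quick way to catch a mismatch.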
I just tested, and it does not seem to work with vLLM. I will need to look into it more closely.
I get a "422 Unprocessable Entity" when calling a local LLM service and I don't know what's causing it.
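For context on the error itself: a 422 from a FastAPI-based server (vLLM's OpenAI-compatible server is one) means the JSON request body failed schema validation before the model was ever invoked. A minimal sketch of the payload shape that `/v1/chat/completions` expects; `build_chat_request` is a hypothetical helper for illustration, not part of WikiChat or LiteLLM:

```python
import json

def build_chat_request(model, messages):
    """Build a minimal OpenAI-compatible /v1/chat/completions body.

    A 422 Unprocessable Entity typically means a body like this failed
    the server's schema validation, e.g. "messages" is missing, empty,
    or a field has the wrong type.
    """
    if not isinstance(messages, list) or not messages:
        raise ValueError("messages must be a non-empty list of role/content dicts")
    return json.dumps({"model": model, "messages": messages})
```

Capturing the exact body LiteLLM sends (via its verbose logging) and checking it against this shape is usually the fastest way to locate the offending field.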