[InferenceClient / langchain-HuggingFaceEndpoint] impossible to use Langchain HuggingFaceEndpoint with trust_env=True #2522
Labels
bug
Something isn't working
Comments
Hello @Wauplin, what do you think about this PR?
Wauplin added a commit to Wauplin/langchain that referenced this issue on Sep 12, 2024
By default, `HuggingFaceEndpoint` instantiates both the `InferenceClient` and the `AsyncInferenceClient` with the `"server_kwargs"` passed as input. This is an issue as both clients might not support exactly the same kwargs. This has been highlighted in huggingface/huggingface_hub#2522 by @morgandiverrez with the `trust_env` parameter. In order to make `langchain` integration future-proof, I do think it's wiser to forward only the supported parameters to each client. Parameters that are not supported are simply ignored with a warning to the user. From a `huggingface_hub` maintenance perspective, this allows us much more flexibility as we are not constrained to support the exact same kwargs in both clients.
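The fix described above — forwarding only the parameters each client's constructor actually accepts, and warning about the rest — can be sketched as follows. This is a minimal, self-contained illustration of the technique, not the actual langchain-huggingface code: `SyncClient` and `AsyncClient` are hypothetical stand-ins for `InferenceClient` and `AsyncInferenceClient`, with `AsyncClient` alone accepting `trust_env`.

```python
import inspect
import warnings

# Hypothetical stand-ins for InferenceClient / AsyncInferenceClient,
# whose __init__ signatures differ (only the async one takes trust_env).
class SyncClient:
    def __init__(self, model=None, token=None, timeout=None):
        self.kwargs = {"model": model, "token": token, "timeout": timeout}

class AsyncClient:
    def __init__(self, model=None, token=None, timeout=None, trust_env=False):
        self.kwargs = {"model": model, "token": token,
                       "timeout": timeout, "trust_env": trust_env}

def filter_kwargs(cls, kwargs):
    """Keep only the kwargs accepted by cls.__init__; warn about the rest."""
    accepted = set(inspect.signature(cls.__init__).parameters) - {"self"}
    ignored = set(kwargs) - accepted
    if ignored:
        warnings.warn(
            f"Ignoring unsupported kwargs for {cls.__name__}: {sorted(ignored)}"
        )
    return {k: v for k, v in kwargs.items() if k in accepted}

# One server_kwargs dict feeds both clients; each gets only what it supports.
server_kwargs = {"timeout": 30, "trust_env": True}
sync_client = SyncClient(**filter_kwargs(SyncClient, server_kwargs))
async_client = AsyncClient(**filter_kwargs(AsyncClient, server_kwargs))
```

With this approach, `trust_env` is silently dropped (with a warning) for the sync client instead of raising a `TypeError`, so the two clients are free to diverge in the kwargs they support.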
efriis added a commit to langchain-ai/langchain that referenced this issue on Sep 20, 2024
…h supported kwargs) (#26378) — same description as above. Co-authored-by: Erick Friis <erick@langchain.dev>
sfc-gh-nmoiseyev pushed a commit to sfc-gh-nmoiseyev/langchain that referenced this issue on Sep 21, 2024
…h supported kwargs) (langchain-ai#26378) — same description as above.
As explained in #2525, I'll close this issue without merging, in favor of langchain-ai/langchain#26378, which just got merged. Thanks again for reporting, @morgandiverrez 🤗
Sheepsta300 pushed a commit to Sheepsta300/langchain that referenced this issue on Oct 1, 2024
…h supported kwargs) (langchain-ai#26378) — same description as above.
Describe the bug
HuggingFaceEndpoint, in the langchain-huggingface partner package of the LangChain library, cannot be used with trust_env=True/False. When we use server_kwargs to set trust_env on the client, the library creates both an AsyncInferenceClient and an InferenceClient every time, but the two clients do not accept the same parameters (trust_env exists in the async client but not in InferenceClient, for example). So if we use server_kwargs, every parameter must be supported by both clients, and InferenceClient does not have a trust_env parameter. I can fix this in a PR by adding a trust_env parameter.
Reproduction
from langchain_huggingface.llms import HuggingFaceEndpoint

HuggingFaceEndpoint(
    endpoint_url="http://localhost:8010/",
    max_new_tokens=512,
    temperature=0.01,
    repetition_penalty=1.03,
    huggingfacehub_api_token="my-api-key",
    server_kwargs={"trust_env": True},
)
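The call above fails because server_kwargs is forwarded unchanged to both clients, and the sync client's constructor rejects trust_env. A minimal, self-contained sketch of the resulting TypeError, using a hypothetical stand-in class rather than the real huggingface_hub API:

```python
class InferenceClientLike:
    # Hypothetical stand-in: like the sync InferenceClient at the time,
    # its constructor does not accept a trust_env parameter.
    def __init__(self, model=None, token=None):
        self.model = model

try:
    # Forwarding trust_env to a constructor that doesn't declare it
    # raises TypeError, which is what aborts HuggingFaceEndpoint setup.
    InferenceClientLike(model="http://localhost:8010/", trust_env=True)
    failure = ""
except TypeError as err:
    failure = str(err)

print(failure)  # message mentions the unexpected 'trust_env' keyword
```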
Logs
System info