
huggingface: fix huggingface_endpoint.py (initialize clients only with supported kwargs) #26378

Merged · 4 commits · Sep 20, 2024

Conversation

@Wauplin (Contributor) commented Sep 12, 2024

Description

By default, `HuggingFaceEndpoint` instantiates both the `InferenceClient` and the `AsyncInferenceClient` with the `server_kwargs` passed as input. This is a problem because the two clients do not necessarily support exactly the same kwargs, as highlighted in huggingface/huggingface_hub#2522 by @morgandiverrez with the `trust_env` parameter. To make the `langchain` integration future-proof, it is wiser to forward only the parameters that each client supports; unsupported parameters are simply ignored, with a warning to the user. From a `huggingface_hub` maintenance perspective, this also gives us much more flexibility, since we are no longer constrained to support exactly the same kwargs in both clients.
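A minimal sketch of the idea: use `inspect` to discover which parameters each client's `__init__` accepts, forward only those, and warn about the rest. The class definitions below are illustrative stand-ins, not the real `huggingface_hub` clients, and `filter_supported_kwargs` is a hypothetical helper rather than the PR's actual code.

```python
import inspect
import warnings


class InferenceClient:
    """Stand-in for huggingface_hub.InferenceClient (illustrative signature only)."""
    def __init__(self, model=None, timeout=None):
        self.model, self.timeout = model, timeout


class AsyncInferenceClient:
    """Stand-in for huggingface_hub.AsyncInferenceClient (illustrative signature only)."""
    def __init__(self, model=None, trust_env=False):
        self.model, self.trust_env = model, trust_env


def filter_supported_kwargs(client_cls, kwargs):
    """Keep only the kwargs that client_cls.__init__ accepts; warn about the rest."""
    # Introspect the constructor signature; "self" is never a forwardable kwarg.
    supported = set(inspect.signature(client_cls.__init__).parameters) - {"self"}
    ignored = sorted(set(kwargs) - supported)
    if ignored:
        warnings.warn(
            f"Ignoring unsupported kwargs for {client_cls.__name__}: {ignored}"
        )
    return {k: v for k, v in kwargs.items() if k in supported}


server_kwargs = {"model": "gpt2", "trust_env": True}
# The sync stand-in does not accept trust_env, so it is dropped with a warning;
# the async stand-in accepts it, so it is forwarded untouched.
sync_kwargs = filter_supported_kwargs(InferenceClient, server_kwargs)
async_kwargs = filter_supported_kwargs(AsyncInferenceClient, server_kwargs)
```

Each client then receives only kwargs it can actually handle, so the two clients' signatures are free to diverge without breaking the integration.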

Issue

huggingface/huggingface_hub#2522

Dependencies

None

Twitter

https://x.com/Wauplin

@efriis efriis added the partner label Sep 12, 2024
@efriis efriis self-assigned this Sep 12, 2024
@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Sep 12, 2024

vercel bot commented Sep 12, 2024

1 skipped deployment: langchain ⬜️ Ignored (updated Sep 17, 2024 9:36pm)

@dosubot dosubot bot added langchain Related to the langchain package 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Sep 12, 2024
@efriis (Member) commented Sep 17, 2024

Hey @Wauplin! Just tagging in @Jofthomas from your team, who implemented the package. Excited to have more folks from HF helping out with the integration package!

Overall LGTM; just want to make sure this makes sense to him too.

@ccurme ccurme removed the langchain Related to the langchain package label Sep 20, 2024
@efriis efriis merged commit a2023a1 into langchain-ai:master Sep 20, 2024
19 checks passed
sfc-gh-nmoiseyev pushed a commit to sfc-gh-nmoiseyev/langchain that referenced this pull request Sep 21, 2024
…h supported kwargs) (langchain-ai#26378)

Sheepsta300 pushed a commit to Sheepsta300/langchain that referenced this pull request Oct 1, 2024
…h supported kwargs) (langchain-ai#26378)

Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature partner size:S This PR changes 10-29 lines, ignoring generated files.
6 participants