Use inference types dataclasses everywhere #2063

Open
2 of 9 tasks
Wauplin opened this issue Feb 29, 2024 · 0 comments

We now have specs for all inference types, with the goal of unifying APIs across the HF ecosystem, both in the Inference APIs and in transformers/diffusers/etc. pipelines. Since #2036, we have auto-generated Python dataclasses that are meant to be reused as much as possible. At the moment, the output types are used for most InferenceClient outputs (a minimal usage sketch follows the task list below). This issue lists the remaining things to address:

  • use new types for text-generation output (requires specs for stream param)
  • remove all pydantic from our codebase (not in use anymore)
  • better auto-generated docstrings (especially to document arguments)
  • generate input parameters for InferenceClient from input types
  • generate InferenceClient docstrings from input types
  • proper generation script in CI (open PRs automatically on update?)
  • reuse in transformers pipelines
  • reuse in diffusers pipelines
  • ?
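
For context, here is a minimal sketch of how the generated output dataclasses are already consumed through `InferenceClient` (assuming a recent `huggingface_hub` where `TextClassificationOutputElement` is among the generated types; the exact names are illustrative of the pattern rather than a frozen API):

```python
from huggingface_hub import InferenceClient, TextClassificationOutputElement

client = InferenceClient()

# Task methods return instances of the generated dataclasses instead of raw dicts.
results: list[TextClassificationOutputElement] = client.text_classification(
    "I love using auto-generated dataclasses!"
)

for element in results:
    # `label` and `score` are plain dataclass fields defined by the spec.
    print(element.label, element.score)
```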

cc @SBrandeis @julien-c

Related:
