BatchFeature.to() supports non-tensor keys #33918
Conversation
Thank you @Rocketknight1. Two questions: …
@ydshieh good spot, I fixed it for all three! We could consider rewriting …
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Force-pushed from a9ac41e to 4d796f7
cc @ydshieh I changed the PR to update …
cc @ArthurZucker for core maintainer review! This PR is simple: it patches …
Thanks @Rocketknight1!
Are there other areas that should be updated? Arthur mentions Pixtral as an example …
@LysandreJik I checked the codebase for methods overriding … The exceptions were:
In both cases, I updated their methods to make sure they only call …
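The failure mode being discussed here can be reproduced in a few lines. This is an illustrative sketch, not the actual transformers code: FakeTensor is a stand-in for torch.Tensor so the snippet runs without torch installed, and the dict comprehension mimics the unguarded "call .to() on every value" pattern the pipelines effectively relied on.

```python
class FakeTensor:
    """Stand-in for torch.Tensor, just to make the sketch self-contained."""
    def to(self, device):
        return self

# A BatchFeature-like dict mixing a tensor-like value with a plain string,
# as produced e.g. by OneFormer preprocessing (string key name is hypothetical).
features = {"pixel_values": FakeTensor(), "task_inputs": "semantic segmentation"}

# Unguarded pattern: call .to() on every value. Strings have no .to()
# method, so this raises AttributeError.
error = None
try:
    moved = {key: value.to("cpu") for key, value in features.items()}
except AttributeError as exc:
    error = exc

print(error)  # 'str' object has no attribute 'to'
```

Guarding each overriding method with a tensor-type check, as this PR does, avoids exactly this exception.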
perfect, thanks!
* Fix issue in oneformer preprocessing
* [run slow] oneformer
* [run_slow] oneformer
* Make the same fixes in DQA and object detection pipelines
* Fix BatchFeature.to() instead
* Revert pipeline-specific changes
* Add the same check in Pixtral's methods
* Add the same check in BatchEncoding
* make sure torch is imported
This PR fixes a bug in the preprocessing for several pipelines. The pipelines were calling .to() on a BatchFeature that contained a string feature, which caused an error. This update was also requested internally on Slack! cc @ydshieh
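The shape of the fix can be sketched as follows. This is a minimal, self-contained illustration, not the real transformers implementation: the actual BatchFeature.to() checks for torch.Tensor (and handles dtype/device arguments), whereas here FakeTensor stands in for a tensor so the example runs without torch. The key idea is the same: only tensor-like values are moved, and non-tensor values such as strings are passed through unchanged.

```python
class FakeTensor:
    """Stand-in for torch.Tensor so the sketch runs without torch."""
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        # Return a new tensor-like object "moved" to the target device.
        return FakeTensor(device)


class BatchFeature(dict):
    """Toy version of a feature dict with a guarded .to()."""

    def to(self, device):
        new_data = {}
        for key, value in self.items():
            if isinstance(value, FakeTensor):
                # Tensor-like values are moved to the target device...
                new_data[key] = value.to(device)
            else:
                # ...while strings and other non-tensor values are kept
                # as-is instead of raising AttributeError.
                new_data[key] = value
        return BatchFeature(new_data)


batch = BatchFeature({"pixel_values": FakeTensor(), "task_inputs": "semantic"})
moved = batch.to("cuda:0")
print(moved["pixel_values"].device)  # cuda:0
print(moved["task_inputs"])          # semantic (unchanged)
```

With this guard in the base class, pipeline-specific workarounds become unnecessary, which is why the PR reverts the earlier per-pipeline changes.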