
Skip failing test_save_load_low_cpu_mem_usage tests #29110

Conversation


amyeroberts (Collaborator) commented Feb 19, 2024

What does this PR do?

Related to #29043 and #28948

Fixes more failing model tests on main that weren't picked up by the test fetcher. cc @ydshieh @ArthurZucker
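For context on what "skipping" a failing test looks like in practice, here is a minimal sketch using `unittest.skip`, the mechanism commonly used in unittest-style suites. The class name and skip reason below are illustrative placeholders, not taken from the PR:

```python
import unittest

class DummyModelTest(unittest.TestCase):
    # Hypothetical stand-in for a model test class; the name and the
    # reason string are illustrative, not copied from the PR.
    @unittest.skip(reason="Fails on main; skip until the underlying bug is fixed")
    def test_save_load_low_cpu_mem_usage(self):
        self.fail("This body never runs while the skip decorator is applied.")

# Run the suite programmatically: the test is reported as skipped, not failed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DummyModelTest)
result = unittest.TestResult()
suite.run(result)
print(f"skipped={len(result.skipped)}, failures={len(result.failures)}")
# prints: skipped=1, failures=0
```

The skip keeps CI green while making the known breakage visible in the test report, which is also why the reviewers below call a large batch of skips a "heavy burden": each one has to be cleaned up later.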

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.


ArthurZucker (Collaborator) left a comment


I'm actually in favor of reverting the PR, so we have a single PR that tests all of this, and of not including it in the release, as per the review I made on the open PR!

ydshieh (Collaborator) commented Feb 20, 2024

I am also in favor of reverting the original PR #28948, if that PR is not something urgently needed on main. Adding this many skips is already a heavy burden, and cleaning them up later is another one.

(The test fetcher is indeed not ideal, but there is a trade-off and we can decide how to improve it.)

However, I am OK with merging this PR if we decide so - the work is already done (the time is already spent) anyway.

ydshieh (Collaborator) left a comment


Thank you for taking the time Amy!

amyeroberts (Collaborator, Author) commented

@ydshieh @ArthurZucker I agree, let's revert for now.

The main reason is that I don't trust the testing suite at the moment. @hackyon opened another, more complete PR to address the failing models, #29118, which I believe was assembled by explicitly running test_save_load_low_cpu_mem_usage for all models. Importantly, that PR covers more models than this one addresses - highlighting the lack of coverage and the fact that failing tests are not being run.

However, because we're just about to release, let's err on the side of caution.

4 participants