
test_encoder_decoder_model_generate for vision_encoder_decoder is flaky #28841

Closed · amyeroberts opened this issue Feb 2, 2024 · 1 comment · Fixed by #28923

System Info

transformers 4.38.0dev

Who can help?

@gante

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Hard to reproduce, unfortunately.

Running the single test enough times will trigger a failure:

python -m pytest -v tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::ViT2TrOCR::test_encoder_decoder_model_generate

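To repeat it until a failure shows up, a plain shell loop works (pytest-repeat's --count option is an alternative if that plugin is installed):

for i in $(seq 1 100); do python -m pytest -x tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::ViT2TrOCR::test_encoder_decoder_model_generate || break; done
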
Fails with:

FAILED tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::ViT2TrOCR::test_encoder_decoder_model_generate - AssertionError: torch.Size([13, 8]) != (13, 20)

Reference CI run: https://app.circleci.com/pipelines/github/huggingface/transformers/83611/workflows/666b01c9-1be8-4daa-b85d-189e670fc168/jobs/1078635/tests#failed-test-0
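
For context, my assumption (not verified in this issue) is that the assertion compares generate()'s output shape against (batch_size, decoder max_length), while generate() stops early once every sequence in the batch has produced an EOS token, so a randomly initialised model can return a shorter second dimension on some runs. A minimal standalone sketch of that behaviour, using a tiny randomly initialised GPT-2 decoder rather than the actual ViT2TrOCR test model:

import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny randomly initialised decoder; eos/pad ids are kept inside the small vocab.
config = GPT2Config(vocab_size=100, n_layer=1, n_head=2, n_embd=32,
                    bos_token_id=0, eos_token_id=1)
model = GPT2LMHeadModel(config).eval()

input_ids = torch.randint(2, 100, (13, 1))
out = model.generate(input_ids, max_length=20, do_sample=False, pad_token_id=1)

# out.shape[1] is at most 20, but it can be shorter whenever all 13 sequences
# happen to emit EOS early, which is the same kind of mismatch as (13, 8) != (13, 20).
print(out.shape)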

Expected behavior

Non-flaky behaviour for the tests.
