
Got ValueError when output_hidden_states=True with eval_accumulation_steps #11055

Closed
jungwhank opened this issue Apr 5, 2021 · 4 comments · Fixed by #11071

@jungwhank
Contributor
Environment info

  • transformers version: 4.4.2
  • Platform: Colab
  • Python version: 3.7
  • PyTorch version (GPU?): 1.8.1+cu101
  • Tensorflow version (GPU?):
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help

@sgugger @LysandreJik

Information

Model I am using (Bert, XLNet ...):

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Hello,
I'm trying to use the fine-tuning code with my own model, and I get the ValueError below when evaluating with eval_accumulation_steps set in TrainingArguments and output_hidden_states=True in the model config.

If I set output_hidden_states=False (which, as far as I know, is the default), the error disappears.
I don't actually need output_hidden_states, but I'm reporting this because I think evaluation should still work when output_hidden_states=True.

I'm sharing my Colab that reproduces the bug using the official transformers GLUE example (a minimal sketch of the setup also follows the traceback below).

Thanks in advance!

ValueError                                Traceback (most recent call last)
<ipython-input-26-f245b31d31e3> in <module>()
----> 1 trainer.evaluate()

/usr/local/lib/python3.7/dist-packages/transformers/trainer_pt_utils.py in _nested_set_tensors(self, storage, arrays)
    392             else:
    393                 storage[self._offsets[i] : self._offsets[i] + slice_len, : arrays.shape[1]] = arrays[
--> 394                     i * slice_len : (i + 1) * slice_len
    395                 ]
    396         return slice_len

ValueError: could not broadcast input array from shape (16,22,768) into shape (16,19,768)
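
For reference, a minimal sketch of the setup that triggers this; the model name, batch sizes, and the eval_dataset variable here are placeholders standing in for what the Colab actually uses, following the official GLUE example:

from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Asking the model to return hidden states is what triggers the error.
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2, output_hidden_states=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# eval_accumulation_steps makes the Trainer move accumulated predictions
# (including the hidden states) to the CPU every few evaluation steps.
training_args = TrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=16,
    eval_accumulation_steps=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    eval_dataset=eval_dataset,  # tokenized GLUE-style dataset, prepared as in the official example
    tokenizer=tokenizer,
)

trainer.evaluate()  # raises the ValueError above when hidden states are accumulated

Because different evaluation batches are padded to different sequence lengths, the accumulated hidden-state tensors have mismatched shapes (e.g. (16, 22, 768) vs. (16, 19, 768)), which is what the broadcast error above is complaining about.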

Expected behavior

@frankhart2018

@jungwhank Can you change the permissions on the Colab notebook? I'm not able to view it.

@jungwhank
Contributor Author

@frankhart2018 Sorry, I've changed the link.

@sgugger
Collaborator

sgugger commented Apr 5, 2021

I can reproduce it and see where it's coming from. The fix is not particularly easy; I will try to have something ready by the end of the week.

Thanks for flagging this and for the nice reproducer!

@sgugger
Collaborator

sgugger commented Apr 5, 2021

Ok, the PR mentioned above fixes the problem. Note that for the notebook to run, the compute_metrics function needs to be changed a bit: the predictions will now be a tuple, so the argmax will fail. Adding the following lines

if isinstance(predictions, (tuple, list)):
    predictions = predictions[0]

inside compute_metrics solves that problem.
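
In the GLUE example notebook the adjusted function would look roughly like the following sketch; the metric object and label handling are assumptions based on that example and may differ slightly in your notebook:

import numpy as np
from datasets import load_metric

metric = load_metric("glue", "mrpc")  # placeholder task; the notebook loads the metric for its own task

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # With output_hidden_states=True the predictions are a tuple
    # (logits, hidden_states); keep only the logits before the argmax.
    if isinstance(predictions, (tuple, list)):
        predictions = predictions[0]
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)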
