QA Documentation: I got error just copy and pasting documentation #10210
Comments
Pinging @joeddav on this one, since he wrote this tutorial :-)
Figured it out:

```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))

        # if start position is None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
            end_positions[-1] = tokenizer.model_max_length

    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
```
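For reference, a minimal sketch of how this helper is typically driven. The variable names (`train_contexts`, `train_questions`, `train_answers`), the example strings, and the `DistilBertTokenizerFast` checkpoint are illustrative assumptions based on the tutorial, not quoted from this thread:

```python
from transformers import DistilBertTokenizerFast

# Hypothetical call site; names and checkpoint are assumptions, not from this issue.
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')

context = "Architecturally, the school has a Catholic character."
question = "What character does the school have?"
answer_text = "a Catholic character"
start = context.index(answer_text)

train_contexts = [context]
train_questions = [question]
# answer_end is the exclusive end index (answer_start + len(text)), as the tutorial computes it.
train_answers = [{'text': answer_text, 'answer_start': start, 'answer_end': start + len(answer_text)}]

# Contexts go first, so character indices map to sequence 0 by default in char_to_token.
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
add_token_positions(train_encodings, train_answers)
print(train_encodings['start_positions'], train_encodings['end_positions'])
```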
Closed by #10217
Thank you @joeddav, the posted code works perfectly.
Sorry for bothering you again @joeddav, but I have a question related to the code you posted here. Kind regards,
Environment info

- `transformers` version: 4.3.1

Who can help

@sgugger
Information
I am trying to train a QA model following the Hugging Face documentation. I just copied and pasted the code on my machine (and in Colab), but I was not able to proceed to the training phase because I got a None value.
To reproduce
Steps to reproduce the behavior:
File "/home/andrea/PycharmProjects/qa-srl/test.py", line 78, in __getitem__ return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} RuntimeError: Could not infer dtype of NoneType