ReformerForQuestionAnswering : int() argument must be a string, a bytes-like object or a number, not 'NoneType' #10370
Comments
Hey @harikc456, the problem is that the model is not put into training mode. If you run the following code:

```python
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

# swap in standard position embeddings to avoid the axial position embedding error
model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```

you can see that the code runs without error.
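For context on why training mode matters here: Reformer's reversible residual layers do not store intermediate activations; during backward they recompute the forward pass and replay dropout using RNG seeds that are recorded only when the model is in training mode. A minimal illustrative sketch of that idea (my own simplification, not the actual transformers implementation):

```python
import torch
import torch.nn.functional as F

class SeededDropout(torch.nn.Module):
    """Toy illustration of Reformer-style seed recording in reversible layers."""

    def __init__(self, p=0.1):
        super().__init__()
        self.p = p
        self.seed = None  # stays None unless forward() runs in training mode

    def forward(self, x):
        if self.training:
            self.seed = torch.seed()  # record the RNG seed for later replay
        return F.dropout(x, self.p, self.training)

    def replay(self, x):
        # the backward-pass recomputation replays dropout with the saved seed;
        # if forward() ran in eval mode the seed is still None, and
        # torch.manual_seed(None) raises exactly the TypeError in the title
        torch.manual_seed(self.seed)
        return F.dropout(x, self.p, True)
```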
Hello, I've just come across the same issue. I tried the code below:

```python
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

# change to position embeddings to prevent error
model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```

and got the following error message.
I first tried to use:

```python
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/reformer-crime-and-punishment", return_dict=True
)
```

It failed, then I found this issue and added the position-embeddings workaround shown above.
However, the same error occurs.
Maybe the problem is that the version of transformers I am using is old? Thank you in advance.
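(A side note and an assumption on my part, not something stated in the thread: since the snippets here do question answering, the matching auto class would be AutoModelForQuestionAnswering rather than AutoModelForSequenceClassification.)

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = AutoModelForQuestionAnswering.from_pretrained("google/reformer-crime-and-punishment")
```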
The same issue still occurs after updating transformers to the latest stable version via pip. Does the problem depend on the version of some other library?
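A quick way to check which versions are actually in play (a generic sketch, not from the original thread; transformers also ships the `transformers-cli env` command, which prints this information in the format the issue template asks for):

```python
import torch
import transformers

print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
```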
Excuse me for my frequent posting. Instead of overwriting the position embeddings, I put the model into training mode:

```python
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

# # change to position embeddings to prevent error
# model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)
model.train()

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```

A different error message is shown now, but it looks like it can be handled by padding the input. I'm padding the input at the moment, and it seems to work. I apologize if this is not an appropriate solution.
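For reference, a sketch of what the padding fix could look like. It rests on two assumptions of mine: that the new error comes from the axial position embeddings, which in training mode expect the sequence length to equal the product of config.axial_pos_shape, and that the tokenizer needs a pad token assigned before it can pad:

```python
from functools import reduce
from operator import mul

# in training mode the axial position embeddings expect the sequence length
# to equal the product of config.axial_pos_shape (a very large number for
# this checkpoint), so pad up to that length
target_len = reduce(mul, model.config.axial_pos_shape)

# the Reformer tokenizer may not define a pad token; reuse eos if missing
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer(
    question,
    text,
    padding="max_length",
    max_length=target_len,
    return_tensors="pt",
)
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
outputs.loss.backward()
```

Alternatively, config.axial_pos_shape can be set to smaller factors when loading the model, so that far less padding is required.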
We could maybe add a better error message that fires when Reformer is not in training mode but one runs loss.backward().
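A sketch of what such a guard could look like (hypothetical placement and wording on my part, not an actual patch):

```python
# hypothetical check at the start of the reversible layer's backward pass
if self.feed_forward_seed is None:
    raise RuntimeError(
        "Reformer's reversible layers can only backpropagate if the forward "
        "pass ran in training mode; call model.train() before the forward pass."
    )
```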
Environment info

transformers version:

Who can help

@patrickvonplaten

Information

Model I am using (Bert, XLNet ...): Reformer

The problem arises when using: the ReformerForQuestionAnswering model.

The tasks I am working on is:

To reproduce

Steps to reproduce the behavior:

Performing backward on the loss throws an error.

Minimal code to reproduce the error:

Error traceback:

From debugging, I believe that the error is caused because self.feed_forward_seed in the ReformerLayer class is None. I have tried the same code with Longformer and it worked perfectly.
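A quick way to confirm that observation (a debugging sketch, assuming the module layout of current transformers versions, where the reversible layers live under model.reformer.encoder.layers):

```python
# after a forward pass in eval mode, the seed is never set
layer = model.reformer.encoder.layers[0]
print(layer.feed_forward_seed)  # prints: None
```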
Expected behavior

loss.backward() running properly.