cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' #79
Comments
I found out that the correct way to assign a Tensor as a parameter is `w = torch.nn.Parameter(...)`. That means lines 43~46 in weight_drop.py should be:
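The snippet itself did not survive in this comment. A sketch of what the proposed change likely looks like, assuming the stock `_setweights` from weight_drop.py (the variational branch is omitted here); the key change is wrapping the dropped-out tensor in `torch.nn.Parameter` before `setattr`:

```python
import torch

def _setweights(self):
    # WeightDrop._setweights (the lines around the setattr call), with the
    # dropped-out weight wrapped in nn.Parameter so that
    # nn.Module.__setattr__ accepts the assignment.
    # self.weights, self.module and self.dropout are the stock WeightDrop attributes.
    for name_w in self.weights:
        raw_w = getattr(self.module, name_w + '_raw')
        w = torch.nn.functional.dropout(raw_w, p=self.dropout, training=self.training)
        setattr(self.module, name_w, torch.nn.Parameter(w))
```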
I am not closing this yet though.
This should be in a PR.
Hi. Loading cached dataset... Can anybody help me please?
I had the same issue for a moment when my indenting was wrong. (The indenting provided in the previous answer does not match that of the repository.) Perhaps you just need to indent once more for those lines?
@shirishr Thanks, your fix rocks!
I'm using PyTorch 1.1 and @shirishr's fix doesn't give the correct results for me. (The behaviors of nn.Parameter and tensor may have changed.) The fix I ended up with is:
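The code did not survive in this comment either. A sketch of a PyTorch 1.x-friendly `_setweights` (an assumed approach, not necessarily the commenter's exact patch): keep the dropped-out weight as a plain tensor so gradients still flow back to the `*_raw` parameter, and make sure the name is no longer registered in `_parameters` before assigning it.

```python
import torch

def _setweights(self):
    # WeightDrop._setweights for PyTorch 1.x: assign a plain tensor
    # (not an nn.Parameter) so the dropout stays in the autograd graph.
    for name_w in self.weights:
        raw_w = getattr(self.module, name_w + '_raw')
        w = torch.nn.functional.dropout(raw_w, p=self.dropout, training=self.training)
        # Drop any stale Parameter registration so nn.Module.__setattr__
        # does not raise "cannot assign ... as parameter ...".
        if name_w in self.module._parameters:
            del self.module._parameters[name_w]
        setattr(self.module, name_w, w)
```

Wrapping the result in nn.Parameter creates a new leaf tensor, which cuts the connection to the `*_raw` weight in the autograd graph; that is likely why the earlier fix runs but gives different results.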
The root cause of this issue is that if you pass a plain tensor to a module attribute that is registered as an nn.Parameter, recent PyTorch versions reject the assignment with this TypeError.
I am working with PyTorch version 1.0.0.dev20181019 (build channel py3.6_cuda9.0.176_cudnn7.1.2_0).

When I run

```
python -u main.py --epochs 500 --data data/wikitext-2 --clip 0.25 --dropouti 0.4 --dropouth 0.2 --nhid 1550 --nlayers 4 --seed 4002 --model QRNN --wdrop 0.1 --batch_size 20 --save WT2.pt
```

I get this error:

```
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
```

Full trace is as under:
```
Traceback (most recent call last):
  File "main.py", line 240, in <module>
    train()
  File "main.py", line 196, in train
    output, hidden, rnn_hs, dropped_rnn_hs = model(data, hidden, return_h=True)
  File "/home/sam/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sam/Documents/NLP/awd-lstm-lm/model.py", line 81, in forward
    raw_output, new_h = rnn(raw_output, hidden[l])
  File "/home/sam/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sam/anaconda3/envs/py36/lib/python3.6/site-packages/torchqrnn/qrnn.py", line 70, in forward
    Y = self.linear(source)
  File "/home/sam/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sam/Documents/NLP/awd-lstm-lm/weight_drop.py", line 46, in forward
    self._setweights()
  File "/home/sam/Documents/NLP/awd-lstm-lm/weight_drop.py", line 43, in _setweights
    setattr(self.module, name_w, w)
  File "/home/sam/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 537, in __setattr__
    .format(torch.typename(value), name))
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
```
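For context on the error itself: `nn.Module.__setattr__` refuses to assign a plain tensor to an attribute name that is registered as a parameter. A minimal standalone snippet (not from the repository) that triggers the same TypeError:

```python
import torch

lin = torch.nn.Linear(4, 4)

# 'weight' is registered in lin._parameters, so assigning a plain tensor
# (instead of an nn.Parameter) raises the same TypeError as in the trace.
try:
    lin.weight = torch.randn(4, 4)
except TypeError as e:
    print(e)  # cannot assign 'torch.FloatTensor' as parameter 'weight' ...
```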