pytorch LTS support (1.8.2) or stable (1.11.1) #60
AFAIK I did not really use CPU training except for some testing.
Will report back here once I can confirm GPU training still works. Setting up envs for LTS and 1.11.1 this week.
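For reference, a quick sanity check of each env before re-running the tasks; these are plain PyTorch calls, nothing specific to pytorch-dnc:

```python
import torch

# Print the installed PyTorch build and whether its CUDA runtime sees a GPU,
# so LTS vs. stable results can be attributed to the right environment.
print(torch.__version__)          # e.g. "1.8.2" for the LTS build
print(torch.cuda.is_available())  # False means training falls back to CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # which GPU the run would use
```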
Still broken with GPU, I think?

```
(default) [eziegenbalg@localhost-live pytorch-dnc]$ python ./tasks/adding_task.py -cuda 0 -lr 0.0001 -rnn_type lstm -memory_type sam -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 1000 -mem_size 32 -read_heads 1 -sparse_reads 4 -batch_size 20 -optim rmsprop -input_size 3 -sequence_max_length 100
SAM(3, 100, num_hidden_layers=1, nr_cells=1000, read_heads=1, cell_size=32, gpu_id=0)
```
@ixaxaar have you had a chance to see if this works under the new PyTorch LTS version?
Hi, I'm continuing this issue to ask the same thing. Thank you for the repository!
Hello!
I was wondering if someone can confirm that this package still runs under PyTorch LTS or the current stable release (1.11.1)?
I'm getting a curious error. Note this is for CPU training. Maybe someone can confirm this is only broken under CPU training.
Thank you!
```
03:44 $ python ./tasks/adding_task.py -lr 0.0001 -rnn_type lstm -memory_type sam -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 1000 -mem_size 32 -read_heads 1 -sparse_reads 4 -batch_size 20 -optim rmsprop -input_size 3 -sequence_max_length 100
Namespace(batch_size=20, check_freq=100, clip=50, cuda=-1, dropout=0.0, input_size=3, iterations=2000, lr=0.0001, mem_size=32, mem_slot=1000, memory_type='sam', nhid=100, nhlayer=1, nlayer=1, optim='rmsprop', read_heads=1, rnn_type='lstm', sequence_max_length=100, sparse_reads=4, summarize_freq=100, temporal_reads=2, visdom=False)
Using CPU.
SAM(3, 100, num_hidden_layers=1, nr_cells=1000, read_heads=1, cell_size=32)
SAM(
  (lstm_layer_0): LSTM(35, 100, batch_first=True)
  (rnn_layer_memory_shared): SparseMemory(
    (interface_weights): Linear(in_features=100, out_features=70, bias=True)
  )
  (output): Linear(in_features=132, out_features=3, bias=True)
)
Iteration 0/2000
Falling back to FLANN (CPU).
For using faster, GPU based indexes, install FAISS: "conda install faiss-gpu -c pytorch"
Traceback (most recent call last):
  File "./tasks/adding_task.py", line 222, in <module>
    loss.backward()
  File "/home/eziegenbalg/.conda/envs/default/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/eziegenbalg/.conda/envs/default/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 1000]], which is output 0 of AsStridedBackward, is at version 70; expected version 69 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
^C
(default) ✘-INT ~/pytorch-dnc [master|✚ 2]
03:45 $
```
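For context, this RuntimeError appears whenever a tensor that autograd saved during the forward pass is mutated in place before `backward()` runs. Below is a minimal sketch, not taken from pytorch-dnc's code (sigmoid is just an op whose backward reuses its own output), that reproduces the same error class and applies the anomaly-detection hint from the message:

```python
import torch

# Minimal reproduction of the error class: sigmoid's backward needs its own
# output, so editing that output in place bumps its version counter and
# backward() aborts with the "modified by an inplace operation" error.
x = torch.ones(3, requires_grad=True)
y = torch.sigmoid(x)
y.add_(1)  # in-place edit of a tensor saved for the backward pass
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)  # "...has been modified by an inplace operation..."

# Per the traceback's hint: anomaly mode makes the eventual backward error
# point at the forward op that produced the clobbered tensor. It slows
# training noticeably, so enable it only while debugging. `model`,
# `criterion`, `inputs`, `targets` are placeholders for whatever
# adding_task.py builds, not the repo's actual names.
torch.autograd.set_detect_anomaly(True)
# loss = criterion(model(inputs), targets)
# loss.backward()  # now also reports the offending forward operation
```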