
Running the train_ch3 function in section 3.6.7 raises an error #98

Open
cdrj opened this issue Feb 12, 2020 · 0 comments

Comments

cdrj commented Feb 12, 2020

Bug description
num_epochs, lr = 5, 0.1
d2l.train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size, [W, b], lr)


RuntimeError                              Traceback (most recent call last)
in
      2
      3
----> 4 d2l.train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size, [W, b], lr)

~\动手深度学习\d2lzh_pytorch\utils.py in train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr, optimizer)
132 param.grad.data.zero_()
133
--> 134 l.backward()
135 if optimizer is None:
136 sgd(params, lr, batch_size)

C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
193 products. Defaults to False.
194 """
--> 195 torch.autograd.backward(self, gradient, retain_graph, create_graph)
196
197 def register_hook(self, hook):

C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
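This RuntimeError usually means that none of the tensors feeding into the loss track gradients, so `l.backward()` has no graph to traverse. In the section 3.6 softmax-regression code, `W` and `b` must have gradient tracking enabled before they are passed to `train_ch3`. A minimal sketch of the likely fix (shapes and initialization assumed from the book's Fashion-MNIST example, not taken from this issue):

```python
import torch

num_inputs, num_outputs = 784, 10  # Fashion-MNIST: 28*28 pixels in, 10 classes out

# Leaf parameters for softmax regression. If the two requires_grad_ calls
# below are skipped, l.backward() inside train_ch3 fails with
# "element 0 of tensors does not require grad and does not have a grad_fn".
W = torch.normal(0.0, 0.01, size=(num_inputs, num_outputs))
b = torch.zeros(num_outputs)

# Enable gradient tracking on both parameters before training.
W.requires_grad_(True)
b.requires_grad_(True)
```

If the parameters were defined earlier without these calls, re-running the cell that creates them (with the two `requires_grad_` lines included) before calling `d2l.train_ch3` should resolve the error.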

Version information
pytorch:
torchvision:1.4.0
torchtext:
...
