-
This line of code makes it impossible to use this model as a (non-trainable) layer of another model: Line 124 in 7cf5138. Please also check the discussion here: https://discuss.pytorch.org/t/runtimeerror-element-0-of-tensors-does-not-require-grad-and-does-not-have-a-grad-fn-when-training-from-examples/107816/6 . I have to manually re-enable gradient calculation to make it work. I just want to know if there is a specific reason to disable gradient calculation.
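To illustrate the problem being reported, here is a minimal, hypothetical reproduction. A plain `nn.Linear` stands in for the actual loaded model; the freezing loop mimics what happens when gradient calculation is disabled at load time, and re-enabling `requires_grad` is the manual workaround mentioned above:

```python
import torch

# Hypothetical stand-in for the loaded model; the real utils load a
# TorchScript model, but the gradient behavior is the same.
model = torch.nn.Linear(4, 2)
for p in model.parameters():
    p.requires_grad_(False)  # mimics gradients being disabled at load time

x = torch.randn(3, 4)  # plain input tensor, requires_grad=False
try:
    model(x).sum().backward()
except RuntimeError as err:
    # "element 0 of tensors does not require grad and does not have a grad_fn"
    print(err)

# Manually re-enabling gradients makes the model usable as a layer
# of a larger, trainable model again:
for p in model.parameters():
    p.requires_grad_(True)
model(x).sum().backward()
print(model.weight.grad.shape)  # gradients now flow: torch.Size([2, 4])
```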
Replies: 2 comments
-
When GitHub fixes its discussion migration bug I will transfer this issue to a discussion. Will close this for now, as I cannot really add anything to this.
Well, basically these utils were designed mostly with inference in mind. Usually when we write inference code, we either disable gradients altogether or wrap calls in `with torch.no_grad()`. I do not remember why I went with the former instead of the latter, but probably because I just invoke the model simply, like `out = model(input)`, and I did not want to add an extra function and an extra level of indentation to the concise examples. Anyway, you can just override the `init_jit_model` function for your use case and that is it.