Unable to enable grad during validation #13948
Comments
The decorator is working properly; the issue is that you are asserting
Same issue here, after updating to version 1.7.
Same issue. In my case, I perform FGSM/PGD attacks in the validation and test steps. Following #201, gradients work fine with `@torch.enable_grad()`. When the error happens, it produces the message below. Does this indicate
Update: This only occurs in 1.7 and does not happen with 1.6.5.
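(For context, an FGSM-style attack needs input gradients during evaluation. A minimal sketch of such an attack is below; the epsilon value and the loss are illustrative, not the poster's actual code. On 1.6 this works inside a grad-enabled validation step, while under 1.7's inference mode the `requires_grad_()` call is exactly what fails.)

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """One-step FGSM: perturb x along the sign of the input gradient."""
    with torch.enable_grad():  # validation normally runs without grad
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + epsilon * grad.sign()).detach()
```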
Oh. I know the cause. It's because validation now runs under `torch.inference_mode()` (new in 1.7), which `@torch.enable_grad()` cannot override. You will need to do something like:

```python
with torch.inference_mode(False):
    grad_feats = feats.clone().requires_grad_()
    layer = torch.nn.Linear(2, self.out_ch, device=feats.device)
    logits = layer(grad_feats[None])
    assert logits.requires_grad
```
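(Pulled together, a minimal sketch of how this workaround can sit inside a `validation_step`; the `encoder`/`head` modules and sizes are illustrative, not from the thread:)

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Linear(8, 2)
        self.head = torch.nn.Linear(2, 4)

    def validation_step(self, batch, batch_idx):
        x, _ = batch
        feats = self.encoder(x)  # created under torch.inference_mode in Lightning >= 1.7
        # Locally re-enable autograd; clone() turns the inference tensor into a normal one.
        with torch.inference_mode(False), torch.enable_grad():
            grad_feats = feats.clone().requires_grad_()
            logits = self.head(grad_feats)
            assert logits.requires_grad
            logits.sum().backward()  # e.g. to obtain input gradients for an attack
        return logits.detach()
```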
Is there any nice way to update the net (where the parameters are predefined) in inference mode?

```python
@torch.enable_grad()
def forward_and_adapt(self, x):
    grad_x = x.clone().requires_grad_()
    outputs = self.net(grad_x)  # ?
    entropys = softmax_entropy(outputs)
    loss = entropys.mean()
    loss.backward()
    ...
```
Why do you want to update the net during evaluation? You should use `training_step`.
We are working on Test-time Domain Adaptation (link), where the network needs to be updated after each batch is fed, in an unsupervised fashion. Implementing it in `training_step` could be a solution, but would it cause other problems such as augmentation, random shuffling, or dropout? Nevertheless, I am using the following approach, and it seems to work well.

```python
@torch.enable_grad()
@torch.inference_mode(False)
def forward_and_adapt(self, x):
    grad_x = x.clone().requires_grad_()
    outputs = self.net(grad_x)  # ?
    entropys = softmax_entropy(outputs)
    loss = entropys.mean()
    loss.backward()
    ...
```

Thanks for your reply.
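(For completeness, a sketch of how the elided part of `forward_and_adapt` might finish with an actual parameter update. The `softmax_entropy` definition and the SGD optimizer are assumptions here, following the usual Tent-style formulation, not something stated in the thread:)

```python
import torch


def softmax_entropy(x: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax distribution (assumed Tent-style definition)."""
    return -(x.softmax(dim=1) * x.log_softmax(dim=1)).sum(dim=1)


class Adapter(torch.nn.Module):
    def __init__(self, net: torch.nn.Module, lr: float = 1e-3):
        super().__init__()
        self.net = net
        # Illustrative: adapt all parameters; Tent typically restricts this to norm layers.
        self.optimizer = torch.optim.SGD(self.net.parameters(), lr=lr)

    @torch.enable_grad()
    @torch.inference_mode(False)
    def forward_and_adapt(self, x: torch.Tensor) -> torch.Tensor:
        grad_x = x.clone().requires_grad_()
        outputs = self.net(grad_x)
        loss = softmax_entropy(outputs).mean()
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()  # unsupervised update after each test batch
        return outputs.detach()
```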
All of that is meant to be in the `training_step`.
Why would it?
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
Fixed by #15034. You can now set `Trainer(inference_mode=False)`.
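(If I'm reading the linked PR correctly, on 1.8+ the flag is passed directly to the `Trainer`, which then runs the evaluation loops under `torch.no_grad()` instead of `torch.inference_mode()`:)

```python
from pytorch_lightning import Trainer

# Evaluation runs under torch.no_grad() instead of torch.inference_mode(),
# so torch.enable_grad() inside validation_step/test_step works again.
trainer = Trainer(inference_mode=False)
```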
Hi~ When will v1.8 be released?
🐛 Bug
Even with `@torch.enable_grad`, gradients aren't enabled during validation.

To Reproduce
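(The original reproduction code was not captured in this extract; the snippet below is a sketch of the kind of `validation_step` the comments above refer to, with illustrative module names and sizes:)

```python
import torch
import pytorch_lightning as pl


class BugRepro(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.out_ch = 4

    @torch.enable_grad()
    def validation_step(self, batch, batch_idx):
        feats, _ = batch
        grad_feats = feats.clone().requires_grad_()
        layer = torch.nn.Linear(2, self.out_ch, device=feats.device)
        logits = layer(grad_feats[None])
        # On 1.7 this fails: the step runs under torch.inference_mode, so either
        # requires_grad_() errors on an inference tensor or requires_grad stays False.
        assert logits.requires_grad
```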
Expected behavior
Expected to finish without error (`assert logits.requires_grad` should work).

Environment
Additional context