
Error using DynamicSoftMarginLoss #731

Status: Open · Opened by @inakierregueab

Description

Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 128, 1, 1]] is at version 4; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
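For reference, this class of error can be reproduced in plain PyTorch whenever a tensor that autograd saved for the backward pass is later modified in place. The snippet below is an illustrative sketch of the failure mode only, not code from DynamicSoftMarginLoss itself:

```python
import torch

# sigmoid saves its output tensor for use in the backward pass
x = torch.ones(3, requires_grad=True)
y = torch.sigmoid(x)

# In-place edit bumps y's version counter after it was saved
y.add_(1)

try:
    y.sum().backward()
    error_message = None
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation ..."
    error_message = str(e)

print(error_message)
```

As the hint in the traceback suggests, running the same code under `torch.autograd.set_detect_anomaly(True)` points at the forward-pass operation whose saved tensor was clobbered.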

Activity

inakierregueab (Author) commented on Nov 12, 2024

Initial error was: Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
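For context, this first error appears in plain PyTorch whenever `.backward()` is called twice on the same graph without `retain_graph=True`, since the saved intermediate tensors are freed by the first call. A minimal illustration (again a generic sketch, not the DynamicSoftMarginLoss code path):

```python
import torch

x = torch.ones(3, requires_grad=True)
loss = (x * x).sum()

loss.backward()  # first backward frees the graph's saved tensors

try:
    loss.backward()  # second backward over the freed graph
    second_error = None
except RuntimeError as e:
    # "Trying to backward through the graph a second time ..."
    second_error = str(e)

print(second_error)
```

Passing `retain_graph=True` to the first `backward()` silences this error, but it often just masks the real problem, which can then resurface as the in-place version-counter error reported above.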

KevinMusgrave (Owner) commented on Nov 15, 2024

Thanks for the bug report. Can you provide a minimal amount of code that we can run to reproduce the error?


Metadata

Labels: bug (Something isn't working)


Participants: @KevinMusgrave, @inakierregueab


          Error using DynamicSoftMarginLoss · Issue #731 · KevinMusgrave/pytorch-metric-learning