
Accuracy, Precision, Recall raises RuntimeError when device="cuda" #917

Closed
Voleraa opened this issue Mar 28, 2022 · 2 comments
Labels
question Further information is requested

Voleraa commented Mar 28, 2022

🐛 Bug

Hi,

when I run the following code:

import torch
import torchmetrics

preds  = torch.tensor([[0.6,0.9,0.3,0,0,0,0,0.6], 
                       [0.1,0.6,0.1,0,0,0,0,0]])
target = torch.tensor([[0,1,1,0,0,0,0,0],
                       [1,1,0,0,0,0,0,1]])
accuracy = torchmetrics.Accuracy()
accuracy(preds.cuda(), target.cuda())

I get the following RuntimeError:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_54291/197549698.py in <module>
      7                        [1,1,0,0,0,0,0,1]])
      8 accuracy = torchmetrics.Accuracy()
----> 9 accuracy(preds.cuda(), target.cuda())

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~/.local/lib/python3.8/site-packages/torchmetrics/metric.py in forward(self, *args, **kwargs)
    204 
    205         # global accumulation
--> 206         self.update(*args, **kwargs)
    207 
    208         if self.compute_on_step:

~/.local/lib/python3.8/site-packages/torchmetrics/metric.py in wrapped_func(*args, **kwargs)
    265             self._update_called = True
    266             with torch.set_grad_enabled(self._enable_grad):
--> 267                 return update(*args, **kwargs)
    268 
    269         return wrapped_func

~/.local/lib/python3.8/site-packages/torchmetrics/classification/accuracy.py in update(self, preds, target)
    258             # Update states
    259             if self.reduce != "samples" and self.mdmc_reduce != "samplewise":
--> 260                 self.tp += tp
    261                 self.fp += fp
    262                 self.tn += tn

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

This error also occurs when I use Recall and Precision.

Environment

  • OS: Linux
  • Python Version: 3.8
  • PyTorch Version: 1.8.1
@Voleraa Voleraa added bug / fix Something isn't working help wanted Extra attention is needed labels Mar 28, 2022
github-actions commented Mar 28, 2022

Hi! Thanks for your contribution, great first issue!

@Borda Borda added question Further information is requested and removed bug / fix Something isn't working help wanted Extra attention is needed labels Mar 28, 2022
Borda (Member) commented Mar 28, 2022

Hi, in such a case you also need to move the metric itself to the device, since it keeps internal state tensors (tp, fp, tn, fn) that would otherwise stay on the CPU:

accuracy = torchmetrics.Accuracy().cuda()
accuracy(preds.cuda(), target.cuda())

If you want a seamless experience, please use the PyTorch Lightning logging feature:
https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#automatic-logging

@Borda Borda closed this as completed Mar 28, 2022
@Borda Borda added this to the v0.8 milestone May 5, 2022