
GPU support in BootStrapper #455

Closed
frcnt opened this issue Aug 17, 2021 · 3 comments · Fixed by #462
Labels: bug / fix (Something isn't working), help wanted (Extra attention is needed)

Comments

frcnt commented Aug 17, 2021

🐛 Bug

BootStrapper raises a device error when the prediction and target tensors, as well as the metric itself, are on the GPU.

Regardless of the device on which the tensors and the metric are hosted, the call to _bootstrap_sampler in BootStrapper.update returns indices hosted on the CPU. The device error is then thrown by the subsequent call to torch.index_select.
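
For context, the same failure can be triggered outside of BootStrapper; the snippet below is a minimal sketch (assuming a CUDA device is available) of torch.index_select receiving CPU indices for a GPU tensor:

import torch

# Minimal sketch of the root cause: torch.index_select requires the input
# tensor and the index tensor to be on the same device, so CPU-resident
# indices against a GPU tensor raise the error reported below.
x = torch.randn(4, device="cuda:0")
idx = torch.randint(0, 4, (4,))          # indices created on the CPU, as _bootstrap_sampler does
torch.index_select(x, dim=0, index=idx)  # RuntimeError: Input, output and indices must be on the current device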

To Reproduce

Steps to reproduce the behavior:

  1. Run the code sample below

Stack trace produced

Traceback (most recent call last):
  File "/home/fco/sentiment-analysis/sentiment_analysis/training/metrics.py", line 110, in <module>
    mae.update(preds, targets)
  File "/home/fco/anaconda3/envs/sentiment/lib/python3.8/site-packages/torchmetrics/metric.py", line 248, in wrapped_func
    return update(*args, **kwargs)
  File "/home/fco/anaconda3/envs/sentiment/lib/python3.8/site-packages/torchmetrics/wrappers/bootstrapping.py", line 153, in update
    new_args = apply_to_collection(args, Tensor, torch.index_select, dim=0, index=sample_idx)
  File "/home/fco/anaconda3/envs/sentiment/lib/python3.8/site-packages/torchmetrics/utilities/data.py", line 197, in apply_to_collection
    return elem_type([apply_to_collection(d, dtype, function, *args, **kwargs) for d in data])
  File "/home/fco/anaconda3/envs/sentiment/lib/python3.8/site-packages/torchmetrics/utilities/data.py", line 197, in <listcomp>
    return elem_type([apply_to_collection(d, dtype, function, *args, **kwargs) for d in data])
  File "/home/fco/anaconda3/envs/sentiment/lib/python3.8/site-packages/torchmetrics/utilities/data.py", line 187, in apply_to_collection
    return function(data, *args, **kwargs)
RuntimeError: Input, output and indices must be on the current device

Code sample

import torch
import torchmetrics as tm

device = "cuda:0"
mae = tm.BootStrapper(tm.MeanAbsoluteError()).to(device)
preds = torch.tensor([-1.5, -.5, .5, 1.5]).to(device)
targets = torch.tensor([-2.0, -1.0, 1.0, 2.0]).to(device)
mae.update(preds, targets)
v = mae.compute()

Expected behavior

The same behaviour as when the same code is executed on the CPU:

import torch
import torchmetrics as tm

device = "cpu"
mae = tm.BootStrapper(tm.MeanAbsoluteError()).to(device)
preds = torch.tensor([-1.5, -.5, .5, 1.5]).to(device)
targets = torch.tensor([-2.0, -1.0, 1.0, 2.0]).to(device)
mae.update(preds, targets)
v = mae.compute()

Environment

  • PyTorch Version (e.g., 1.0): 1.8.1
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, source): conda
  • Build command you used (if compiling from source): /
  • Python version: 3.8
  • CUDA/cuDNN version: 10.2
  • GPU models and configuration: RTX 2080
  • Any other relevant information: torchmetrics v0.5.0

Suggestion

After calling _bootstrap_sampler, the indices should be moved to the same device as the metric.
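
To make this concrete, here is a minimal sketch of the idea; the _bootstrap_sampler below is a hypothetical stand-in for the torchmetrics internal, not its actual implementation:

import torch

# Hypothetical stand-in: like the torchmetrics sampler, it draws bootstrap
# indices on the CPU regardless of where the inputs live.
def _bootstrap_sampler(size: int) -> torch.Tensor:
    return torch.randint(0, size, (size,))

device = "cuda:0"
preds = torch.tensor([-1.5, -0.5, 0.5, 1.5], device=device)

sample_idx = _bootstrap_sampler(preds.shape[0])
# Proposed change: move the sampled indices to the metric's device before
# indexing, so torch.index_select only sees tensors on a single device.
sample_idx = sample_idx.to(device)
resampled = torch.index_select(preds, dim=0, index=sample_idx)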

frcnt added the bug / fix and help wanted labels on Aug 17, 2021
@github-actions

Hi! Thanks for your contribution, great first issue!

Borda (Member) commented Aug 18, 2021

@frcnt good catch, are you interested in sending a fix? @SkafteNicki may assist 🐰

@SkafteNicki (Member)

I took a look at it yesterday; I can send a fix soon :]
