XLA support #1466
Comments
Hi! Thanks for your contribution, and great first issue!

Hi @mfatih7, I can give some information about the situation on the TPU.
I instantiate an instance using
When the device is a GPU, it works without any problem. Actually, this is expected behavior. Maybe you need to implement your module without these functions. I can gladly give more information.
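For context, here is a minimal, hypothetical sketch of placing tensors on a TPU via `torch_xla` (the `torch_xla` import and `xm.xla_device()` call follow the standard torch_xla API; the sketch falls back to CPU when torch_xla is not installed, so the same code runs in both environments):

```python
import torch

# Hypothetical device selection: use the XLA (TPU) device when torch_xla
# is available, otherwise fall back to CPU.
try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()
except ImportError:
    device = torch.device("cpu")

# Any tensor created on this device is traced and compiled by XLA on TPU.
x = torch.zeros(2, 2, device=device)
```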
To prevent XLA compilations temporarily, I am using a simple helper function.
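The helper itself is not shown in the thread; a plausible sketch is one that moves tensors off the XLA device before a metric update, so the update runs eagerly on CPU instead of being traced and recompiled by XLA (`update_on_cpu` is a hypothetical name, not from the thread):

```python
import torch

def update_on_cpu(metric_update, *tensors):
    """Hypothetical helper: detach tensors and move them to CPU before a
    metric update, so the update does not trigger an XLA recompilation."""
    cpu_tensors = tuple(t.detach().cpu() for t in tensors)
    return metric_update(*cpu_tensors)

# Usage with a plain callable standing in for a metric's update method:
preds = torch.tensor([1, 0, 1, 1])
target = torch.tensor([1, 0, 0, 1])
matches = update_on_cpu(lambda p, t: (p == t).sum().item(), preds, target)
```

The trade-off is an extra device-to-host transfer per update, which is why this is only a temporary workaround.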
Hi @mfatih7,
Hi @SkafteNicki, I could not reproduce the warnings with code written purely in the Colab notebook. I can give more support if needed. Do not forget to select TPU in the Colab runtime settings.
Hi @mfatih7, have you tried `confmat = ConfusionMatrix(task="binary", num_classes=2, validate_args=False).to(device)`?
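Disabling `validate_args` presumably helps because argument validation performs data-dependent checks, which XLA cannot fold into a static graph and must recompile around. As an illustration of that pattern (a simplified re-implementation for clarity, not torchmetrics' actual code):

```python
import torch

def binary_confusion_matrix(preds, target, validate_args=True):
    """Simplified illustration of a binary confusion matrix.

    Not torchmetrics' implementation; it only shows why data-dependent
    validation is problematic under XLA."""
    if validate_args:
        # Data-dependent checks like these force a graph break /
        # recompilation when traced by XLA.
        assert ((preds == 0) | (preds == 1)).all()
        assert ((target == 0) | (target == 1)).all()
    # 2x2 matrix: rows index the target, columns index the prediction.
    idx = target * 2 + preds
    return torch.bincount(idx, minlength=4).reshape(2, 2)

cm = binary_confusion_matrix(torch.tensor([1, 0, 1, 1]),
                             torch.tensor([1, 0, 0, 1]))
```

With `validate_args=False` the computation is a fixed sequence of tensor ops, which XLA can compile once and reuse.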
OK, but how can I install this version in my Colab notebook?
Change
OK, I don't see any recompilations due to torchmetrics now. Will you commit this to the main torchmetrics branch? Do you consider making your library accessible without installation on Colab?
@mfatih7, let me explain what I did:
I think we can include the change, but we are not going to officially support XLA for now.
Thank you. We can close this issue if you want. I hope to hear about any updates in the future.
Hello,

Up to now, I was using torchmetrics in my training scripts running on GPUs. Now I want to use Google Tensor Processing Units (TPUs) in my work. For the last few days, I have been observing that torchmetrics is not compatible with the XLA library; torchmetrics needs to be lowered for TPU support.

Best regards