Describe the bug
I am trying to train an RNN model for a time series problem. On a Google Colab GPU-backed instance, training does not use the GPU by itself, as shown in the training logs:
WARNING:darts.models.forecasting.torch_forecasting_model:DeprecationWarning: kwarg `verbose` is deprecated and will be removed in a future Darts version. Instead, control verbosity with PyTorch Lightning Trainer parameters `enable_progress_bar`, `progress_bar_refresh_rate` and `enable_model_summary` in the `pl_trainer_kwargs` dict at model creation.
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
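For reference, a rough sketch of the setup (the model type and hyperparameters below are placeholders, not my exact notebook code):

```python
from darts.models import RNNModel

# Illustrative configuration only; the real series and hyperparameters differ.
model = RNNModel(
    model="LSTM",
    input_chunk_length=24,
    training_length=36,
)
model.fit(train_series, verbose=True)  # train_series is a placeholder TimeSeries
# Logs then report: "GPU available: True, used: False"
```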
When I explicitly pass the 'cuda:0' string to the torch_device_str argument, I get a ValueError:
ValueError: 'cuda' is not a valid DistributedType
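The failing call looks roughly like this (again a sketch, not the exact code):

```python
# Passing the device string explicitly, as documented for earlier Darts versions.
model = RNNModel(
    model="LSTM",
    input_chunk_length=24,
    training_length=36,
    torch_device_str="cuda:0",
)
# -> ValueError: 'cuda' is not a valid DistributedType
```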
Expected behavior
It should be able to train on the GPU instance.
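A possible workaround (untested here) might be to route GPU selection through `pl_trainer_kwargs`, as the deprecation warning suggests for other kwargs; a sketch, assuming the dict is forwarded unchanged to `pytorch_lightning.Trainer` and that the installed Lightning version accepts the `gpus` argument:

```python
# Assumed workaround sketch: pass GPU selection via pl_trainer_kwargs.
# The exact keys depend on the installed PyTorch Lightning version
# (newer releases use accelerator="gpu" / devices instead of gpus).
model = RNNModel(
    model="LSTM",
    input_chunk_length=24,
    training_length=36,
    pl_trainer_kwargs={"gpus": 1},
)
```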