
[BUG] TypeError in explain function for TFT #1944

Closed
studentangerer opened this issue Aug 9, 2023 · 1 comment · Fixed by #1949
Labels
bug Something isn't working

Comments

@studentangerer

Description
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

The error is raised in tft_explainer.py:211 and can be fixed with:
attention_heads = self.model.model._attn_out_weights.detach().cpu().numpy().sum(axis=-2)

Another TypeError occurs in tft_explainer.py:506.
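For context, a minimal PyTorch sketch (not darts code, assuming a CUDA device is available) of the pattern behind the fix: a CUDA tensor must be copied to host memory with .cpu() before it can be converted to a NumPy array.

import torch

t = torch.randn(2, 4, 8, device="cuda")
# t.numpy() raises the TypeError above; detach and copy to the CPU first
attn = t.detach().cpu().numpy().sum(axis=-2)
print(attn.shape)  # (2, 8)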

To Reproduce
Run darts/examples/13-TFT-examples.ipynb without any modifications.

In particular, the following line raises the error:
explainability_result = explainer.explain()
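A minimal standalone sketch that should trigger the same error on a GPU machine (dataset and hyperparameters are illustrative, not the notebook's exact setup):

from darts.datasets import AirPassengersDataset
from darts.explainability import TFTExplainer
from darts.models import TFTModel

series = AirPassengersDataset().load()

# small TFT trained on the GPU; add_relative_index avoids needing future covariates
model = TFTModel(
    input_chunk_length=12,
    output_chunk_length=6,
    add_relative_index=True,
    n_epochs=1,
    pl_trainer_kwargs={"accelerator": "gpu", "devices": 1},
)
model.fit(series)

explainer = TFTExplainer(model)
explainability_result = explainer.explain()  # raises the TypeError when the model is on GPU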

System:

  • Python version: 3.9.16
  • darts version: 0.25.0
@studentangerer studentangerer added bug Something isn't working triage Issue waiting for triaging labels Aug 9, 2023
@madtoinou madtoinou removed the triage Issue waiting for triaging label Aug 9, 2023
@dennisbader
Collaborator

Hi @studentangerer, this is indeed a bug when running TFTModel on GPU. We'll fix this soon.

As a hotfix, you could save the model and then load it on the CPU:

from darts.explainability import TFTExplainer
from darts.models import TFTModel

# save the GPU-trained model, then reload it mapped to the CPU
model.save("my_model.pt")
model_cpu = TFTModel.load("my_model.pt", map_location="cpu")
model_cpu.to_cpu()

explainer = TFTExplainer(model_cpu)

After that the explainer should work as expected.
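For completeness, a sketch of the explainer calls once the model is on the CPU; plot_variable_selection and plot_attention are the TFTExplainer plotting helpers in recent darts versions:

explainability_result = explainer.explain()

# inspect variable selection weights and the attention over the input chunk
explainer.plot_variable_selection(explainability_result)
explainer.plot_attention(explainability_result)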
