When I set an ipdb breakpoint in gpt.py, I encounter this error: torch._dynamo.exc.InternalTorchDynamoError: example_value needs to be a FakeTensor wrapped by this instance of Dynamo. Found: tensor(..., device='meta', size=(2,))
#58 · Open · BinZhu-ece opened this issue on Aug 24, 2024 · 0 comments
/storage/zhubin/LlamaGen/autoregressive/models/gpt.py(343)forward()
342 if idx is not None and cond_idx is not None: # training or naive inference
--> 343 import ipdb; ipdb.set_trace()
344 cond_embeddings = self.cls_embedding(cond_idx, train=self.training)[:,:self.cls_token_num]
ipdb> n
torch._dynamo.exc.InternalTorchDynamoError: example_value needs to be a FakeTensor wrapped by this instance of Dynamo. Found: tensor(..., device='meta', size=(2,))
from user code:
File "/storage/zhubin/LlamaGen/autoregressive/models/gpt.py", line 344, in torch_dynamo_resume_in_forward_at_343
cond_embeddings = self.cls_embedding(cond_idx, train=self.training)[:,:self.cls_token_num]
File "/storage/miniconda3/envs/motionctrl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/storage/zhubin/LlamaGen/autoregressive/models/gpt.py", line 113, in forward
caption = self.token_drop(caption, force_drop_ids)
File "/storage/zhubin/LlamaGen/autoregressive/models/gpt.py", line 104, in token_drop
drop_ids = torch.rand(caption.shape[0], device=caption.device) < self.uncond_prob
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
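A likely cause: the forward pass is wrapped in torch.compile, and stepping past an ipdb breakpoint forces a graph break, so Dynamo's resume frame receives a real (meta-device) tensor where it expects one of its own FakeTensors. Besides the suppress_errors fallback shown in the traceback, a common workaround is to simply not compile the model while debugging. Below is a minimal sketch of that idea; `maybe_compile` is a hypothetical helper name, not part of LlamaGen or PyTorch:

```python
def maybe_compile(fn, debug=False):
    """Return fn unchanged when debugging (so pdb/ipdb breakpoints work),
    otherwise wrap it with torch.compile.

    Sketch only: `debug` would typically come from a CLI flag or env var.
    """
    if debug:
        # Uncompiled path: ipdb.set_trace() inside fn behaves normally,
        # because Dynamo never traces the function.
        return fn
    # Deferred import so the helper is importable without torch installed.
    import torch
    return torch.compile(fn)


# Usage sketch (assuming a `model` object exists):
#   model.forward = maybe_compile(model.forward, debug=True)
```

Alternatively, recent PyTorch versions expose `torch.compiler.disable`, which can be used as a decorator or context to exclude a region from compilation; either approach avoids planting a breakpoint inside a Dynamo-traced frame.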