TypeError in GPTLMHeadModel #3
Hi, the error comes from Line 371 in e8de564.
Due to the import structure here (Line 52 in e8de564), the options are to
Sorry for the difficulty -- we will fix the install and the instructions for this.
No worries, and thanks for the speedy reply. Your guidance helped me get past the above error by installing the norm from flash-attn, but there seem to be more undocumented dependency issues:
I'm a little baffled, since it seems like
Are there additional subpackages within flash-attn that need to be installed? For reference, here is my updated Dockerfile:
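The "norm from flash-attn" referenced above most likely means the fused layer-norm CUDA extension that ships in the flash-attention repository under csrc/layer_norm. A minimal install sketch, assuming a working CUDA toolchain and PyTorch are already available:

```bash
# Clone the flash-attention repository; the csrc/ tree holds optional
# CUDA extensions that `pip install flash-attn` does not build.
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention/csrc/layer_norm
# Build and install the fused dropout + layer-norm extension.
python setup.py install
```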
That line you pointed out requires this to be installed: https://github.com/Dao-AILab/flash-attention/tree/main/csrc/fused_dense_lib. I would recommend cloning flash-attention and running python setup.py install within that directory.
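Spelled out as shell commands, that recommendation looks roughly like the following; the path matches the linked directory, but build details may vary with your CUDA and PyTorch versions:

```bash
# Skip the clone if you already have the repository checked out.
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention/csrc/fused_dense_lib
# Build and install the fused dense (GEMM + bias + activation) extension.
python setup.py install
```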
After some tweaking, I think I've got it working. I ended up using the HazyResearch/flash-attention fork. For others trying via Docker, this is the Dockerfile I used:
It requires the NVIDIA Container Toolkit to run, with the command:
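As a hedged illustration of such a command, since the original is not reproduced here; the image tag based-dev is a hypothetical placeholder:

```bash
# Build the image from the Dockerfile in the current directory
# (`based-dev` is a hypothetical tag, not one from this thread).
docker build -t based-dev .
# Run with GPU access; `--gpus all` requires the NVIDIA Container Toolkit.
docker run --gpus all -it based-dev
```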
Hi! I got a similar problem while running the sample code:

```python
import torch
from transformers import AutoTokenizer
from based.models.gpt import GPTLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/based-360m").to("cuda", dtype=torch.float16)

input = tokenizer.encode("If I take one more step, it will be", return_tensors="pt").to("cuda")
output = model.generate(input, max_length=20)
print(tokenizer.decode(output[0]))
```

Error:
So I used the Dockerfile given by @axelmagn, and now I get:
Is this due to the code changes from 2 days ago, or am I missing some steps?
Yes, that was due to the changes. Please try again and let me know if you run into issues.
It works now! 🎉 Thank you!
I am having a go at running inference and evaluation for this model, and I am running into a TypeError in GPTLMHeadModel:
For reproducibility, I have been running this in a Docker container:
Any idea what could be going wrong here?