
convert-pth-to-ggml.py how to handle torch.view_as_complex #225

Closed
haolongzhangm opened this issue Mar 17, 2023 · 3 comments
Labels
need more info: The OP should provide more details about the issue
question: Further information is requested

Comments

@haolongzhangm

The llama code includes a call to view_as_real: https://github.com/facebookresearch/llama/blob/main/llama/model.py#L68

How does convert-pth-to-ggml.py handle this part of the weights?

@gjmulder added the "question" and "need more info" labels Mar 17, 2023
@gjmulder
Collaborator

Please improve your question with more text and examples so it is easier to understand what you are asking.

@nullhook

If you are asking about applying rotary embeddings, that is done in llama.cpp at inference time, not during conversion.
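For context, the view_as_complex/view_as_real calls in llama's model.py implement rotary position embeddings (RoPE): adjacent float pairs in each head dimension are treated as complex numbers and rotated by a position-dependent angle, so there is no learned weight to convert. Below is a minimal NumPy sketch of that math (illustrative only; the real code uses torch tensors and a precomputed freqs_cis table, and the function name here is hypothetical):

```python
import numpy as np

def rotary_embed(x, pos, theta=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, dim).

    Adjacent float pairs are viewed as complex numbers (analogous to
    torch.view_as_complex), rotated by position-dependent unit phases,
    then flattened back to floats (analogous to torch.view_as_real).
    """
    seq_len, dim = x.shape
    # One frequency per complex pair, as in the RoPE formulation.
    freqs = 1.0 / (theta ** (np.arange(0, dim, 2) / dim))
    angles = np.outer(pos, freqs)            # (seq_len, dim // 2)
    rot = np.exp(1j * angles)                # unit complex rotations
    xc = x.reshape(seq_len, dim // 2, 2)     # pair up adjacent floats
    xc = xc[..., 0] + 1j * xc[..., 1]        # "view_as_complex"
    out = xc * rot                           # rotate each pair
    # "view_as_real": back to interleaved (real, imag) floats.
    return np.stack([out.real, out.imag], axis=-1).reshape(seq_len, dim)

x = np.random.randn(4, 8)
y = rotary_embed(x, np.arange(4))
```

Because each pair is multiplied by a unit complex number, the rotation preserves vector norms, and position 0 leaves the input unchanged. This is why the conversion script can copy the attention weights verbatim: the rotation is a deterministic function of position applied at runtime.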

@haolongzhangm
Author

@nullhook thanks for the info

Deadsg pushed a commit to Deadsg/llama.cpp that referenced this issue Dec 19, 2023

Fixed CUBLAS DLL load issues on Windows