
Fixing error in running BCQ for model/llama #5

Closed
wants to merge 6 commits

Conversation

viraatdas

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 model/llama.py meta-llama/Llama-2-13b-hf --wbits 3 --groupsize 128 --acc --bcq --bcq_round 50 --load BCQ_ACC_13b_HF  # bcq_round 20 works too; larger values are slower but may give better results

This command works now and properly runs evaluation.
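
For context on why a larger `--bcq_round` is slower but can fit the weights more closely: BCQ approximates each weight group as a sum of scaled binary codes, and the round count controls how many alternating refinement passes are run over those codes and scales. The snippet below is a minimal NumPy sketch of generic alternating multi-bit BCQ (greedy initialization, then alternating least squares), not this repository's actual implementation; the function name `bcq_quantize` and the exact update schedule are illustrative assumptions.

```python
import numpy as np

def bcq_quantize(w, wbits=3, rounds=50):
    """Approximate a weight vector w as B @ alpha, where each column of B
    is a binary code in {-1, +1} and alpha holds per-bit scale factors.
    (Sketch of generic alternating BCQ, not this repo's exact code.)"""
    n = w.shape[0]
    B = np.empty((n, wbits))
    alpha = np.empty(wbits)
    # Greedy initialization: peel off one binary component at a time.
    r = w.astype(np.float64).copy()
    for i in range(wbits):
        B[:, i] = np.where(r >= 0, 1.0, -1.0)
        alpha[i] = np.abs(r).mean()
        r -= alpha[i] * B[:, i]
    # Alternating refinement: each round re-solves scales and codes.
    # More rounds cost more time but tighten the approximation error,
    # which is the trade-off behind the --bcq_round flag.
    for _ in range(rounds):
        # Fix B, solve least squares for the scale factors alpha.
        alpha, *_ = np.linalg.lstsq(B, w, rcond=None)
        # Fix alpha, re-pick each binary column against its residual.
        for i in range(wbits):
            r = w - B @ alpha + alpha[i] * B[:, i]
            B[:, i] = np.where(r * alpha[i] >= 0, 1.0, -1.0)
    return B, alpha

# Example: quantize one 128-element group (matching --groupsize 128).
w = np.random.randn(128)
B, alpha = bcq_quantize(w, wbits=3, rounds=50)
print("reconstruction error:", np.linalg.norm(w - B @ alpha))
```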

@viraatdas

Tied to this issue: #4

@viraatdas

Closing this PR since it hasn't been merged for a while. Feel free to take a look at the changes and let me know if you're interested in reopening it.

viraatdas closed this on Sep 23, 2024