Hi. I am in the process of adding QuIP inference support to ExLlamaV2, and this is the PR.
The problem I am having right now is that my ppl (perplexity) test results are somewhat worse than the results in your blog post, so I am wondering whether something is wrong with my implementation or there is another explanation.
Using dataset: [wikitext-2-v1_validation_0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-v1/validation)
quip-sharp/lib/utils/gptq_data_utils.py, line 12 (commit 6648e56)
This is where we sample wikitext2. You should check fp16 results on your dataset for an accurate comparison on that dataset.
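The advice above comes down to comparing like with like: reported perplexity depends not only on the model but also on how the evaluation text is sampled, tokenized, and windowed, so an fp16 baseline run through the same pipeline is the fair reference point. A minimal sketch of the usual convention (perplexity as the exponential of the mean per-token negative log-likelihood, over fixed non-overlapping windows) is below; the function names are mine and do not come from either codebase, and real harnesses differ in window length, stride, and remainder handling, which is exactly why two implementations can disagree on the same dataset.

```python
import math

def chunk_tokens(tokens, seq_len):
    """Split a token stream into non-overlapping windows of seq_len,
    dropping any remainder. This is one of several conventions; a
    different windowing choice alone changes the reported ppl."""
    return [tokens[i:i + seq_len]
            for i in range(0, len(tokens) - seq_len + 1, seq_len)]

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood),
    where token_nlls are NLLs in nats for every scored token."""
    return math.exp(sum(token_nlls) / len(token_nlls))
```

For example, if every token has NLL `ln 2`, `perplexity` returns 2.0 regardless of sequence length; in practice the NLLs would come from the model's logits over each window, with the same chunking applied to both the quantized and the fp16 run.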