2:4 sparsity + PTQ (int8) model inference #134
Comments
Hi @RanchiZhao, yes, please see #36, which has a benchmark script and the subclasses. It would be a good idea to add a beginner tutorial as well.
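For context on the subclass approach, here is a minimal sketch of what the weight swap looks like using the upstream torch.sparse 2:4 subclass. The subclasses in #36 may be organized differently, so treat this as illustrative only; the layer sizes and the naive magnitude pruning are placeholders.

```python
# Illustrative only: swap a Linear weight for PyTorch's 2:4 semi-structured
# sparse tensor subclass (needs fp16/bf16 weights on an Ampere+ GPU).
import torch
from torch.sparse import to_sparse_semi_structured

linear = torch.nn.Linear(4096, 4096, bias=False).half().cuda()

# Naive magnitude pruning to a 2:4 pattern: zero the 2 smallest entries in
# every aligned group of 4 along the input dimension.
w = linear.weight.detach()
groups = w.view(-1, 4)
_, drop = torch.topk(groups.abs(), k=2, dim=1, largest=False)
pruned = groups.scatter(1, drop, 0).view_as(w)

# The subclass stores compressed values + metadata and dispatches matmuls
# to the sparse kernels.
linear.weight = torch.nn.Parameter(to_sparse_semi_structured(pruned))

x = torch.randn(128, 4096, dtype=torch.half, device="cuda")
y = linear(x)
```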
Hi @jcaip, thanks. I saw this before but didn't get a chance to look it over carefully; I'll do it now.
Oh, another thing: is this method available in LLMs like LLaMA?
It should work for LLMs, but the speedup characteristics depend on the matmul shapes, so depending on what you are trying to do you will see more or less speedup. You will also need the model to be
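To get a feel for how shape-dependent the gains are, here is a rough benchmark sketch built on the upstream 2:4 subclass. The sizes and the fixed [1, 1, 0, 0] mask are placeholders; substitute your model's actual (batch, in, out) dimensions.

```python
# Rough shape sweep: 2:4 speedup over dense is very shape-dependent, so plug
# in your own (batch, in, out) sizes. Requires an Ampere+ GPU.
import torch
from torch.sparse import to_sparse_semi_structured
from torch.utils.benchmark import Timer

def bench(batch, in_f, out_f):
    x = torch.randn(batch, in_f, dtype=torch.half, device="cuda")
    w = torch.randn(out_f, in_f, dtype=torch.half, device="cuda")
    # Simple fixed 2:4 mask just for timing (2 nonzeros per group of 4).
    mask = torch.tensor([1, 1, 0, 0], dtype=torch.bool, device="cuda").tile((out_f, in_f // 4))
    w = w * mask

    linear = torch.nn.Linear(in_f, out_f, bias=False).half().cuda()
    linear.weight = torch.nn.Parameter(w)
    dense = Timer("linear(x)", globals={"linear": linear, "x": x}).blocked_autorange()

    linear.weight = torch.nn.Parameter(to_sparse_semi_structured(w))
    sparse = Timer("linear(x)", globals={"linear": linear, "x": x}).blocked_autorange()

    print(f"batch={batch} in={in_f} out={out_f} speedup={dense.median / sparse.median:.2f}x")

for batch in (64, 256, 1024, 4096):
    bench(batch, 4096, 4096)
```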
Thanks a lot! Another interesting thing: I want to do PEFT (like LoRA) on the sparse & quantized model. Is this possible?
Do you have some reference papers for PEFT + sparsity? I am also interested in that space, but have not been following it actively. It's impossible to say for sure without knowing the exact approach, but theoretically I believe some version of this should be possible, although the accuracy hit is likely prohibitive. In terms of implementation, though, this is not something directly supported by our APIs. You may be able to hack something together, but we do not plan to add this functionality ATM. We may consider it down the line, so for anyone reading who's interested, please react / +1 this comment.
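To make "hack something together" concrete, here is a generic, hedged sketch in plain PyTorch (not a torchao API): freeze the sparse/quantized base linear and add a small dense LoRA branch beside it. Note that merging the adapters back into the base weight would break the 2:4 pattern, which is one reason this is awkward to support properly.

```python
# Generic LoRA-style wrapper around a frozen (sparse/quantized) linear module.
# Sketch only, not a torchao API; training dynamics and accuracy untested.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Module, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0, dtype=torch.float16):
        super().__init__()
        self.base = base  # e.g. a Linear whose weight is 2:4 sparse / int8
        for p in self.base.parameters():
            p.requires_grad_(False)  # base stays frozen
        self.lora_a = nn.Linear(in_features, rank, bias=False, dtype=dtype)
        self.lora_b = nn.Linear(rank, out_features, bias=False, dtype=dtype)
        nn.init.zeros_(self.lora_b.weight)  # adapters start as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Dense low-rank update added on top of the frozen base output.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

You would then swap each target Linear for this wrapper and train only the lora_a / lora_b parameters.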
No, AFAIK there aren't any; I'll keep looking.
Are there any runnable demos of using sparse QAT/PTQ (2:4) to accelerate inference, for example applying PTQ to a 2:4-sparse LLaMA? I am curious about the potential speedup this could achieve.
The overall pipeline might be: compress the weight matrices with 2:4 sparsity and quantize them to INT8 through PTQ/QAT, and quantize the activation matrices to INT8 through PTQ/QAT as well. After such processing, the main computation would be INT8 × INT8 (a rough sketch is below).
I would like to know if there is a tutorial document available, as I am a beginner in the field of quantization.
Thx!
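For anyone landing here later, a hedged sketch of the pipeline described above using torchao's module-swap API. The entry points named below (quantize_, int8_dynamic_activation_int8_semi_sparse_weight) are assumptions that have moved around between torchao versions, so check the sparsity README for the current names; load_pruned_llama is a hypothetical stand-in for your own 2:4-pruned checkpoint.

```python
# Hedged sketch, not a verified recipe: the torchao entry points below are
# assumptions and may be named differently in your version (see the sparsity
# README). Assumes the model is already pruned to a 2:4 pattern and sits on an
# Ampere+ GPU in fp16/bf16.
import torch
from torchao.quantization import quantize_, int8_dynamic_activation_int8_semi_sparse_weight

model = load_pruned_llama().half().cuda()  # hypothetical loader for a 2:4-pruned LLaMA

# Swap Linear weights for int8, 2:4 semi-structured weights with dynamic int8
# activation quantization, so the heavy matmuls run as int8 x int8 on the
# sparse tensor cores.
quantize_(model, int8_dynamic_activation_int8_semi_sparse_weight())

# Compiling is usually needed to see the end-to-end speedup.
model = torch.compile(model, mode="max-autotune")
```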