2:4 sparsity + PTQ(int8) model's inference #134

Open
RanchiZhao opened this issue Apr 13, 2024 · 7 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@RanchiZhao

Are there any runnable demos of using sparse QAT/PTQ (2:4) to accelerate inference, for example applying PTQ to a 2:4-sparse LLaMA? I am curious about the potential speedup ratio this could achieve.
The overall pipeline might be: compress the weight matrix with 2:4 sparsity and quantize it to INT8 through PTQ/QAT, and quantize the activation matrix to INT8 through PTQ/QAT as well. After that, the main computation would be INT8 × INT8 matmuls.
I would like to know if there is a tutorial document available, as I am a beginner in quantization.
Thanks!
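
For reference, the arithmetic of the pipeline described above can be sketched in plain PyTorch. This is not torchao's API, just an illustration of 2:4 magnitude pruning plus naive symmetric per-tensor INT8 quantization; real kernels accumulate in INT32 and also exploit the 2:4 sparsity metadata for speed:

```python
import torch

def prune_2_4(w: torch.Tensor) -> torch.Tensor:
    """Keep the 2 largest-magnitude values in every group of 4 along the last dim."""
    groups = w.reshape(-1, 4)
    drop = groups.abs().argsort(dim=1)[:, :2]           # 2 smallest per group
    mask = torch.ones_like(groups).scatter_(1, drop, 0.0)
    return (groups * mask).reshape(w.shape)

def quant_int8(t: torch.Tensor):
    """Naive symmetric per-tensor INT8 quantization: returns (int8 values, scale)."""
    scale = t.abs().max() / 127.0
    q = torch.clamp((t / scale).round(), -128, 127).to(torch.int8)
    return q, scale

w = torch.randn(256, 512)        # Linear weight
x = torch.randn(8, 512)          # activations

w_q, w_s = quant_int8(prune_2_4(w))
x_q, x_s = quant_int8(x)

# INT8 x INT8 matmul; widened to int64 here only so plain torch.matmul accepts it.
acc = x_q.to(torch.int64) @ w_q.to(torch.int64).t()
y = acc.float() * (x_s * w_s)    # dequantize the result back to float
print(y.shape)                   # torch.Size([8, 256])
```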

@jcaip
Contributor

jcaip commented Apr 17, 2024

Hi @RanchiZhao, yes, please see #36, which has a benchmark script and the subclasses. It would be a good idea to add a beginner tutorial as well.
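
For anyone landing here: PyTorch core also ships prototype 2:4 support (torch.sparse.to_sparse_semi_structured), which the subclasses in #36 relate to. A minimal standalone example of just the sparse half (no quantization; sizes are arbitrary multiples of 64, and it needs an NVIDIA GPU with sparse tensor cores, i.e. Ampere or newer) looks roughly like this:

```python
import torch
from torch.sparse import to_sparse_semi_structured

linear = torch.nn.Linear(512, 256).half().cuda().eval()

# Force an exact 2:4 pattern: keep the 2 largest of every 4 weights along the input dim.
w = linear.weight.detach()
drop = w.reshape(-1, 4).abs().argsort(dim=1)[:, :2]
mask = torch.ones_like(w.reshape(-1, 4)).scatter_(1, drop, 0.0).reshape(w.shape)
linear.weight = torch.nn.Parameter(to_sparse_semi_structured(w * mask))

x = torch.rand(128, 512).half().cuda()
with torch.inference_mode():
    out = linear(x)              # dispatched to the 2:4 sparse kernels
print(out.shape)                 # torch.Size([128, 256])
```

The INT8 + 2:4 combination and the actual benchmark numbers are in #36.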

@RanchiZhao
Author

RanchiZhao commented Apr 17, 2024

Hi @jcaip, thanks. I saw this before but didn't get a chance to look it over carefully; I'll do that now.

@RanchiZhao
Author

RanchiZhao commented Apr 17, 2024

Oh, another thing: does this method work for LLMs like LLaMA?
I also want to do this with Hugging Face's transformers, which may be tough.

@jcaip
Contributor

jcaip commented Apr 17, 2024

It should work for LLMs, but the speedup characteristics depend on the matmul shapes, so depending on what you are trying to do you will see more or less speedup. You will also need the model to be torch.compile traceable, since we use the torchao quantization workflow.
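
A quick way to check the torch.compile-traceability prerequisite on a Hugging Face checkpoint before adding sparsity or quantization; the checkpoint name below is only a placeholder, and fullgraph=True is used so any graph break fails loudly instead of silently falling back to eager:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"   # placeholder; use whatever checkpoint you have access to
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).cuda().eval()

# Error out on graph breaks rather than falling back to eager.
model = torch.compile(model, fullgraph=True)

inputs = tok("The capital of France is", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model(**inputs)        # first call triggers compilation
print(out.logits.shape)
```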

@RanchiZhao
Author

RanchiZhao commented Apr 17, 2024

Thanks a lot. Another interesting thing: I want to do PEFT (e.g. LoRA) on the sparse & quantized model.
Once we have the trained LoRA modules, we can merge them into the original model (the bf16 one) and apply sparsity & quantization to it again.
That gives us a "sparsity & quantization aware training" model that we can use for inference.

Is this possible?
We would need to make sure that:

  • we can put LoRA on the sparse & quantized model, and
  • merging the LoRA modules into the bf16 model (rather than the int8 one, because of the dtype conflict) works, as sketched below.
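
For concreteness, the merge step in the second bullet is, numerically, just folding the low-rank update back into the dense bf16 weight; everything after that is the usual 2:4 prune + INT8 PTQ pass re-run on the merged weight. The shapes and scaling convention below are illustrative, not any torchao or PEFT API:

```python
import torch

out_features, in_features, r, alpha = 256, 512, 16, 32

W_bf16 = torch.randn(out_features, in_features, dtype=torch.bfloat16)  # original dense weight
A = torch.randn(r, in_features, dtype=torch.bfloat16)                  # trained LoRA A factor
B = torch.randn(out_features, r, dtype=torch.bfloat16)                 # trained LoRA B factor

# Standard LoRA merge: W' = W + (alpha / r) * B @ A, done in bf16 so there is
# no dtype conflict with the int8 weights of the deployed model.
W_merged = W_bf16 + (alpha / r) * (B @ A)

# W_merged is dense again, so the 2:4 mask has to be recomputed (and the int8
# scales recalibrated) before the model goes back to sparse + quantized inference.
```

Whether accuracy survives re-pruning and re-quantizing after the merge is exactly the open question raised in the reply below.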

@jcaip
Contributor

jcaip commented Apr 17, 2024

Do you have some reference papers for PEFT + sparsity? I am also interested in that space, but have not been following it actively.

It's impossible to say for sure without knowing the exact approach, but theoretically I believe some version of this should be possible, although the accuracy hit is likely prohibitive. In terms of implementation, this is not something directly supported by our APIs. You may be able to hack something together, but we do not plan to add this functionality at the moment. We may consider it down the line, so for anyone reading who is interested, please react / +1 this comment.

@RanchiZhao
Author

No, AFAIK there aren't any. I'll keep looking.

@msaroufim added the enhancement (New feature or request) and help wanted (Extra attention is needed) labels on May 7, 2024