# Default quantization - True or False in SparseGPT #2357
Thank you for reaching out and opening an issue on SparseML! The `SparseGPTModifier` no longer accepts a `quantize` argument, so you can safely remove it from your recipe. This will ensure that your model remains unquantized without affecting the pruning process. Additionally, I'd recommend considering our latest framework, LLMCompressor, which offers enhanced capabilities for model compression. If you're open to using it, the recipe would look slightly different:

```yaml
oneshot_stage:
  pruning_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      block_size: 128
      sequential_update: true
      percdamp: 0.01
      mask_structure: "16:32"
      targets: ["re:model.layers.\\d+$"]
```
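The `mask_structure: "16:32"` entry requests semi-structured (N:M) sparsity: in every group of 32 consecutive weights, 16 are pruned. A minimal sketch of what such a mask looks like, using plain magnitude to pick the pruned entries (illustrative only; SparseGPT itself selects weights with second-order information, not raw magnitude):

```python
def nm_mask(weights, n=16, m=32):
    """Return a 0/1 mask that zeroes the n smallest-magnitude
    weights in every consecutive group of m (illustrative sketch)."""
    mask = [1] * len(weights)
    for start in range(0, len(weights), m):
        group = weights[start:start + m]
        # indices of the n smallest-magnitude entries in this group
        order = sorted(range(len(group)), key=lambda i: abs(group[i]))
        for i in order[:n]:
            mask[start + i] = 0
    return mask

# Tiny 2:4 pattern for readability; "16:32" works the same way
w = [0.9, -0.1, 0.05, -0.7, 0.3, 0.2, -0.8, 0.01]
print(nm_mask(w, n=2, m=4))  # → [1, 0, 0, 1, 1, 0, 1, 0]
```

Because the pattern is enforced per group, every block of 32 weights ends up exactly 50% sparse, which is what hardware-friendly N:M kernels rely on.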
Thank you, @rahul-tuli, will try.
Also, will llm-compressor run on an AMD machine?
Hi @sriyachakravarthy, I'd like to clarify a bit more about this. Our LLM Compressor flows are currently for vLLM / our compression pathways for GPUs and specifically for Transformers models. SparseML is still used to create compressed ONNX models that can run in DeepSparse and ONNX Runtime for NLP, NLG, and CV models. For AMD, SparseML will work for AMD CPUs, and LLM Compressor will work for AMD GPUs. Hope this helps!
Yes, thanks!!
Hi! I do not see a model size reduction after pruning using the llm-compressor framework. Kindly help.
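One general point worth noting here (not an answer from the maintainers, just an illustration): pruning zeroes weights in place, but a dense checkpoint stores every zero at full width, so the file size does not change until the zeros are exploited by a sparse storage format or generic compression. A small sketch of that effect:

```python
import struct
import zlib

def dense_bytes(weights):
    """Serialize every weight as a 4-byte float (dense layout)."""
    return struct.pack(f"{len(weights)}f", *weights)

w = [0.1 * (i % 7 - 3) for i in range(1000)]               # original weights
pruned = [x if i % 2 else 0.0 for i, x in enumerate(w)]    # 50% zeroed

# A dense checkpoint keeps the zeros: same size before and after pruning.
print(len(dense_bytes(w)) == len(dense_bytes(pruned)))     # → True

# The size only drops once the zeros are exploited, e.g. by compression
# or a dedicated sparse/compressed weight format.
print(len(zlib.compress(dense_bytes(pruned))) < len(dense_bytes(pruned)))  # → True
```

So a 50%-sparse model saved as an ordinary dense checkpoint will be exactly the same size as the unpruned one; the saving appears only when saving in a compressed representation.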
Hi! In the recipe, if I do not want to quantize and want to perform structured pruning, is it okay to set `quantize: false` as below and not provide a `QuantizationModifier` in the recipe?