I downloaded the Llama-2-7b-hf-2bit-32rank and Llama-2-7b-hf-4bit-32rank models from Hugging Face and ran fine-tuning using train_clm.py. However, both models consumed the same amount of GPU memory and took the same time to fine-tune. Can you tell me why this is happening?
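One way to narrow this down is to check whether the downloaded weights are actually materialized in a low-bit format at load time, or dequantized to a common dtype (in which case both checkpoints would occupy the same GPU memory). Below is a minimal sketch of a helper that sums the in-memory size of a PyTorch module's parameters and buffers; the `param_memory_mb` name and the small `nn.Linear` stand-in are illustrative, not part of the repo. Applied to the two loaded models, identical footprints would indicate both are held in the same dtype regardless of the checkpoint's nominal bit width.

```python
import torch
from torch import nn

def param_memory_mb(model: nn.Module) -> float:
    """Return the total size of a module's parameters and buffers in MiB."""
    total = sum(p.numel() * p.element_size() for p in model.parameters())
    total += sum(b.numel() * b.element_size() for b in model.buffers())
    return total / (1024 ** 2)

# Demo on a small stand-in module: casting to fp16 halves the footprint,
# but two checkpoints materialized in the *same* dtype report identical
# sizes -- which is what you would observe if the "2-bit" and "4-bit"
# weights were both dequantized to fp16/fp32 when loaded.
fp32_model = nn.Linear(1024, 1024)
fp16_model = nn.Linear(1024, 1024).half()
print(f"fp32: {param_memory_mb(fp32_model):.2f} MiB")
print(f"fp16: {param_memory_mb(fp16_model):.2f} MiB")
```

Running the same check on each loaded Llama-2 model (and printing `model.get_memory_footprint()` if using `transformers`) should reveal whether the 2-bit and 4-bit variants differ in memory at all before training starts.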