Hello, @licj15 @ranery
I recently downloaded your model from Hugging Face (Llama-2-7b-wbits2-lat) and tried to run inference with ShiftAddLLM. However, I found that instead of using the shift-and-add method, the packed weights are unpacked straight back to FP16 and then used in the traditional way:
# Loop over the decoder layers and load the packed low-bit checkpoint for each linear submodule.
for i in tqdm(range(len(layers)), desc="Loading shiftaddllm low-bit weights", leave=False):
    layer = layers[i]
    subset = find_layers(layer)
    for name in subset:
        layer_name = f"{i}.{name}"
        temp_storage_pt = os.path.join(weights_dir, f"{model_name}_{layer_name}_{wbits}bit.pt")
        if os.path.exists(temp_storage_pt):
            print(f"load from {temp_storage_pt}")
            checkpoint = torch.load(temp_storage_pt)
            BinaryWeight = checkpoint["bWeight"]            # packed binary codes
            alpha = checkpoint["alpha"]                     # scaling factors
            alpha = alpha.repeat_interleave(8, dim=0)       # expand scales to match the unpacked rows
            W = unpack_weight(BinaryWeight, alpha)          # reconstruct a dense weight matrix
            W = W.transpose(0, 1).contiguous()
            # The dense matrix is written back into the FP16 nn.Linear weight,
            # so the forward pass is a standard FP16 matmul.
            subset[name].weight.data = W.to(subset[name].weight.data.dtype)
        else:
            print(f"WARNING: no such file {temp_storage_pt}")
This seems to contradict the ShiftAddLLM methodology illustrated in Figure 1 of your paper: after unpacking, the forward pass is still an ordinary FP16 matrix multiplication rather than shift-and-add operations. I could not find the code that implements the shift-and-add inference path. Could you please advise what modifications are needed so that ShiftAdd is actually applied during inference?
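Concretely, what I expected from Figure 1 is something along these lines (again only my own conceptual sketch with invented names, not your kernel): the scaling factors, once quantized to powers of two, are applied via exponent shifts, and the binary codes contribute only sign flips and additions, so no dense FP16 weight matrix is ever materialized.

    import torch

    def shiftadd_linear(x, binary_weights, alpha_exponents):
        # x:               (batch, in_features) activations
        # binary_weights:  (num_bits, out_features, in_features), entries in {-1, +1}
        # alpha_exponents: (num_bits, out_features) integers, i.e. alpha = 2 ** e
        y = torch.zeros(x.shape[0], binary_weights.shape[1], dtype=x.dtype, device=x.device)
        for B, e in zip(binary_weights, alpha_exponents):
            partial = x @ B.t()              # "add": only +/- accumulations of activations
            y += torch.ldexp(partial, e)     # "shift": scale by 2**e via the exponent, not an FP16 multiply
        return y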
Thank you very much! Looking forward to your response.
Best regards,
Lucas