
How to Use ShiftAddLLM During Inference? Unable to Find Relevant Code #8

Open
xlim1996 opened this issue Oct 11, 2024 · 0 comments
xlim1996 commented Oct 11, 2024

Hello, @licj15 @ranery

I recently downloaded your model from Hugging Face (Llama-2-7b-wbits2-lat) and tried to use ShiftAddLLM during inference. However, I found that instead of using the shift-and-add method, the loading code unpacks the weights back to FP16 and runs a conventional forward pass:

for i in tqdm(range(len(layers)), desc="Loading shiftaddllm low-bit weights", leave=False):
    layer = layers[i]
    subset = find_layers(layer)
    for name in subset:
        layer_name = f"{i}.{name}"
        temp_storage_pt = os.path.join(weights_dir, f"{model_name}_{layer_name}_{wbits}bit.pt")
        if os.path.exists(temp_storage_pt):
            print(f"load from {temp_storage_pt}")
            checkpoint = torch.load(temp_storage_pt)
            BinaryWeight = checkpoint["bWeight"]
            alpha = checkpoint["alpha"]

            # Expand the per-group scales, then unpack the binary weights
            # back into a dense matrix for a standard nn.Linear forward.
            alpha = alpha.repeat_interleave(8, dim=0)
            W = unpack_weight(BinaryWeight, alpha)
            W = W.transpose(0, 1).contiguous()
            subset[name].weight.data = W.to(subset[name].weight.data.dtype)
        else:
            print(f"WARNING: no such file {temp_storage_pt}")

This seems to contradict the ShiftAddLLM methodology as illustrated in Figure 1 of your paper. I couldn’t find the relevant code for the ShiftAdd inference process. Could you please advise on what modifications are needed to ensure that ShiftAdd is correctly applied during inference?
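For reference, my understanding of the difference is roughly the following. This is an illustrative plain-Python sketch, not code from this repo: the single-bit layout, the {0, 1} → {−1, +1} mapping, and the names `bweight`/`shift` are my own assumptions for a minimal example.

```python
import math

def dequant_matvec(x, bweight, alpha):
    """FP16-style path (what the loading code above amounts to):
    reconstruct W = alpha * (2*b - 1), then do an ordinary dot product."""
    out = []
    for row, a in zip(bweight, alpha):
        w_row = [a * (2 * b - 1) for b in row]  # {0,1} -> signed {-1,+1}, then scale
        out.append(sum(w * xv for w, xv in zip(w_row, x)))
    return out

def shift_add_matvec(x, bweight, shift):
    """Shift-add path: if alpha is constrained to a power of two
    (alpha = 2**shift), the scale becomes a bit shift and the
    'multiplication' reduces to sign flips and additions."""
    out = []
    for row, s in zip(bweight, shift):
        acc = sum(xv if b else -xv for b, xv in zip(row, x))  # adds/subtracts only
        out.append(math.ldexp(acc, s))                        # multiply by 2**s via a shift
    return out
```

Both paths give the same result whenever `alpha == 2**shift`, but only the second avoids multiplications, which I understood to be the point of Figure 1.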

Thank you very much! Looking forward to your response.

Best regards,
Lucas
