fix cpu bnb path #34647

Merged: 10 commits merged into huggingface:main on Nov 19, 2024

Conversation

@jiqing-feng (Contributor) commented Nov 8, 2024

Hi @Rocketknight1 @ArthurZucker @SunMarc @gante

We have "cpu" in hf_device_map when using a bnb model on CPU. A bnb model on the cpu device should be accepted by transformers, because the CPU backend has been enabled in BNB. Please take a look, thanks!
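
For readers following along, this is roughly what the device map looks like in that situation (a minimal sketch; the module names are illustrative, not taken from a specific model):

```python
# On a CPU-only machine, device_map="auto" resolves every module to "cpu",
# so hf_device_map contains only "cpu" (and possibly "disk") entries:
hf_device_map = {
    "model.embed_tokens": "cpu",
    "model.layers.0": "cpu",
    "lm_head": "cpu",
}
```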

@Rocketknight1 (Member):

Looks good, but do you have an example of code that failed before, that is fixed by this PR?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@jiqing-feng (Contributor, Author):

> Looks good, but do you have an example of code that failed before, that is fixed by this PR?

I ran this case on a CPU-only device:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Felladrin/Llama-68M-Chat-v1"

text = ["I am happy because", "This is"]
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
input_ids = tokenizer(text, return_tensors="pt", padding=True)

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

# On a CPU-only machine, device_map="auto" places every module on "cpu".
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=quantization_config)
model.generation_config.cache_implementation = "static"
model.generate(**input_ids)
```

Error: IndexError: list index out of range

@jiqing-feng (Contributor, Author):

Hi @Rocketknight1. The script reproduces this error easily on both AWQ and BNB. Besides, it's not safe to take an index from the list without checking its length.
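
To make the failure mode concrete, here is a sketch of the unsafe pattern (the shape of the bug, not the literal transformers source): picking a "main device" by indexing the first non-CPU entry raises IndexError when the map is CPU-only.

```python
# CPU-only device map, as produced on a machine without accelerators (illustrative names).
hf_device_map = {"model.embed_tokens": "cpu", "model.layers.0": "cpu", "lm_head": "cpu"}

# Filtering out "cpu"/"disk" leaves an empty list, so [0] blows up:
main_device = [d for d in hf_device_map.values() if d not in ["cpu", "disk"]][0]
# IndexError: list index out of range
```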

@jiqing-feng (Contributor, Author):

Hi @Rocketknight1 @ArthurZucker @SunMarc @gante @zucchini-nlp, do you mind reviewing this change? Thanks!

@jiqing-feng (Contributor, Author):

Hi @Titus-von-Koeller. This change is needed for the bitsandbytes CPU path; could you help review it?

Besides, it's also needed for the AWQ CPU path, which was already enabled here: #33460

@jiqing-feng (Contributor, Author) commented Nov 14, 2024

The following script reproduces the AWQ error. I ran this case on a CPU-only device:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, AwqConfig

model_id = "PrunaAI/JackFram-llama-68m-AWQ-4bit-smashed"

text = ["I am happy because", "This is"]
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
input_ids = tokenizer(text, return_tensors="pt", padding=True)

# quantization_config = BitsAndBytesConfig(load_in_8bit=True)
quantization_config = AwqConfig(version="ipex")

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=quantization_config)
model.generation_config.cache_implementation = "static"
model.generate(**input_ids)
```

@jiqing-feng (Contributor, Author):

Hi @aymeric-roucher @LysandreJik , do you have time to review this bug fix? Thanks!

@zucchini-nlp (Member):

@jiqing-feng sorry for the late reply, the transformers team was off last week and someone will review the PR soon. It seems you have tagged all the relevant people already.

@SunMarc (Member) left a comment:

Thanks for the fix! I've suggested a similar fix that follows more closely what we have in accelerate. LMK if this works for you!
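
For illustration, a sketch of an accelerate-style guard along the lines being suggested (an assumption based on this discussion, not the literal diff of the PR): treat a map that contains only "cpu"/"disk" entries as a CPU model instead of indexing unconditionally.

```python
def get_main_device(hf_device_map: dict) -> str:
    # If every module sits on "cpu" (or is offloaded to "disk"), the main device is "cpu".
    if set(hf_device_map.values()) <= {"cpu", "disk"}:
        return "cpu"
    # Otherwise at least one accelerator entry exists, so indexing is safe.
    return [d for d in hf_device_map.values() if d not in ["cpu", "disk"]][0]
```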

@jiqing-feng (Contributor, Author):

Hi @SunMarc , I have applied your changes, thanks!

@SunMarc requested a review from LysandreJik, November 18, 2024 14:45
@SunMarc (Member) left a comment:

Nice, thanks!

@ArthurZucker (Collaborator) left a comment:

LGTM 🤗

@ArthurZucker merged commit 5de58d5 into huggingface:main on Nov 19, 2024
20 of 22 checks passed
BernardZach pushed a commit to BernardZach/transformers that referenced this pull request Dec 5, 2024
* fix cpu bnb path

* Update src/transformers/generation/utils.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* fix awq quantizer env check

* fix awq quantizer device check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

---------

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
@jiqing-feng deleted the bnb branch, December 19, 2024 02:02