
Fix future deprecate prepare_model_for_int8_training #143

Conversation

@NanoCode012 (Collaborator) commented Jun 2, 2023

Closes #91
Closes #133

It seems correct (following the source code). GPTQ does not need this, so I moved the call into the else: block.

https://github.com/huggingface/peft/blob/42a184f7423fc0bbc102a085851a8fb6e40132ad/src/peft/utils/other.py#L75-L80

  • Test
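The change tracks peft's rename of prepare_model_for_int8_training to prepare_model_for_kbit_training. A minimal compatibility shim for downstream code could look like the sketch below; resolve_prepare_fn is a hypothetical helper for illustration, not part of peft or axolotl, and the demo uses a stand-in module so it runs without peft installed.

```python
import importlib
import sys
import types

def resolve_prepare_fn(module_name: str = "peft"):
    """Return the k-bit training prep function from the given module,
    preferring the new prepare_model_for_kbit_training name and falling
    back to the deprecated prepare_model_for_int8_training."""
    mod = importlib.import_module(module_name)
    fn = getattr(mod, "prepare_model_for_kbit_training", None)
    if fn is None:
        fn = getattr(mod, "prepare_model_for_int8_training")
    return fn

# Demo with a stand-in module exposing only the old name
# (no peft install needed to run this sketch):
fake_old = types.ModuleType("fake_peft_old")
fake_old.prepare_model_for_int8_training = lambda model: ("int8", model)
sys.modules["fake_peft_old"] = fake_old

fn = resolve_prepare_fn("fake_peft_old")
print(fn("m"))  # prints ('int8', 'm') -- fell back to the old name
```

With a real, up-to-date peft install, resolve_prepare_fn() would return the new function directly.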

@NanoCode012 force-pushed the fix/deprecate-prepare-8bit-training branch from edf0810 to df9528f on June 8, 2023 at 12:42
@NanoCode012 (Collaborator, Author) commented Jun 8, 2023

Rebased!

Tested by running an openllama 3B LoRA fine-tune.

Requires peft newer than huggingface/peft@3714aa2

@NanoCode012 NanoCode012 marked this pull request as ready for review June 8, 2023 13:21
@NanoCode012 NanoCode012 requested a review from winglian June 8, 2023 13:58
@NanoCode012 (Collaborator, Author) commented:
If anyone has issues with prepare_model_for_kbit_training not being found, please install the latest peft following the README.
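As a quick sanity check before training, one can probe the installed peft for the new function. This is a hedged sketch; has_kbit_prepare is an illustrative helper, not part of peft or axolotl, and it returns False when peft is absent rather than raising.

```python
import importlib.util

def has_kbit_prepare() -> bool:
    """Return True if the installed peft exposes prepare_model_for_kbit_training."""
    if importlib.util.find_spec("peft") is None:
        return False  # peft is not installed at all
    import peft
    return hasattr(peft, "prepare_model_for_kbit_training")

print(has_kbit_prepare())
```

If this prints False, upgrading peft per the README should resolve the "not found" error.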

@NanoCode012 NanoCode012 merged commit 73e9ea4 into axolotl-ai-cloud:main Jun 8, 2023
@NanoCode012 NanoCode012 deleted the fix/deprecate-prepare-8bit-training branch June 8, 2023 14:07
mkeoliya pushed a commit to mkeoliya/axolotl that referenced this pull request Dec 15, 2023
…e-prepare-8bit-training

Fix future deprecate prepare_model_for_int8_training