From 4f98b14465562e4a8f855f9488ba79a4350d2909 Mon Sep 17 00:00:00 2001
From: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Date: Tue, 28 May 2024 15:04:43 +0200
Subject: [PATCH] Docs / PEFT: Add PEFT API documentation (#31078)

* add peft references

* add peft references

* Update docs/source/en/peft.md

* Update docs/source/en/peft.md

---
 docs/source/en/peft.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/docs/source/en/peft.md b/docs/source/en/peft.md
index d86a36e62487dc..9e2ac805b288af 100644
--- a/docs/source/en/peft.md
+++ b/docs/source/en/peft.md
@@ -81,6 +81,8 @@ model = AutoModelForCausalLM.from_pretrained(model_id)
 model.load_adapter(peft_model_id)
 ```
 
+Check out the [API documentation](#transformers.integrations.PeftAdapterMixin) section below for more details.
+
 ## Load in 8bit or 4bit
 
 The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because it saves memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:
@@ -227,6 +229,19 @@ lora_config = LoraConfig(
 model.add_adapter(lora_config)
 ```
 
+## API docs
+
+[[autodoc]] integrations.PeftAdapterMixin
+    - load_adapter
+    - add_adapter
+    - set_adapter
+    - disable_adapters
+    - enable_adapters
+    - active_adapters
+    - get_adapter_state_dict
+
+