Docs / PEFT: Add PEFT API documentation (huggingface#31078)
* add peft references

* add peft references

* Update docs/source/en/peft.md

* Update docs/source/en/peft.md
younesbelkada committed May 28, 2024
1 parent 779bc36 commit 4f98b14
Showing 1 changed file with 15 additions and 0 deletions.
15 changes: 15 additions & 0 deletions docs/source/en/peft.md
@@ -81,6 +81,8 @@ model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```

Check out the [API documentation](#transformers.integrations.PeftAdapterMixin) section below for more details.

## Load in 8bit or 4bit

The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:
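
The quantized-loading pattern this paragraph describes looks roughly like this (a minimal sketch, not necessarily the guide's exact snippet; the adapter id `ybelkada/opt-350m-lora` is a stand-in for any PEFT adapter checkpoint):

```py
from transformers import AutoModelForCausalLM

# Stand-in adapter repo id; substitute any PEFT adapter checkpoint
peft_model_id = "ybelkada/opt-350m-lora"

# load_in_8bit quantizes the base weights via bitsandbytes, and
# device_map="auto" distributes the layers across the available devices
model = AutoModelForCausalLM.from_pretrained(
    peft_model_id, device_map="auto", load_in_8bit=True
)
```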
@@ -227,6 +229,19 @@ lora_config = LoraConfig(
model.add_adapter(lora_config)
```

## API docs

[[autodoc]] integrations.PeftAdapterMixin
- load_adapter
- add_adapter
- set_adapter
- disable_adapters
- enable_adapters
- active_adapters
- get_adapter_state_dict
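
To see how these mixin methods compose, here is a hedged sketch (the base model `facebook/opt-350m`, the adapter name `lora_1`, and the `target_modules` choice are illustrative assumptions, not values from the docs):

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Attach a fresh LoRA adapter, then toggle and inspect it
lora_config = LoraConfig(target_modules=["q_proj", "k_proj"])
model.add_adapter(lora_config, adapter_name="lora_1")

model.set_adapter("lora_1")     # make it the active adapter
print(model.active_adapters())  # -> ["lora_1"]

model.disable_adapters()        # run the base model only
model.enable_adapters()         # re-enable the attached adapter

state_dict = model.get_adapter_state_dict()  # adapter-only weights
```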




<!--
TODO: (@younesbelkada @stevhliu)
