Update CPT documentation #2229
Merged
All 40 commits were authored by tsachiblau:

- `92b9e1a` added CPT model to peft
- `e54d380` Merge branch 'huggingface:main' into main
- `023f071` Merge branch 'huggingface:main' into main
- `54cddaf` Merge branch 'huggingface:main' into main
- `2dfe70f` Added arXiv link to the paper, integrated CPT into testing framework,…
- `ba4b115` Merge branch 'huggingface:main' into main
- `f8c8317` Merge branch 'huggingface:main' into main
- `bd2fc70` config: Added config check in __post_init__. Removed redundant initia…
- `b01b214` Merge branch 'main' of https://github.com/tsachiblau/peft_CPT
- `6ed1723` Merge branch 'huggingface:main' into main
- `77bb0b9` tests: Updated test_cpt and testing_common as per the PR requirements.
- `dbcdedf` Created cpt.md in package_regerence. Updated the prompting.md file. a…
- `f7138d4` Merge branch 'huggingface:main' into main
- `0a5fb20` verifying that the model is causal LM
- `7206db5` Changed CPTModel to CPTEmbedding
- `24b0af9` merge with main branch
- `81ffa09` make style
- `130ec76` make style
- `70067d8` make style
- `9397314` make doc
- `249713c` Merge branch 'huggingface:main' into main
- `0a43473` Removed redundant checks
- `144f042` Fixed errors
- `97449da` merge with peft
- `dacb400` Minor code updates.
- `cc348a4` Minor code updates.
- `79959d1` Merge branch 'huggingface:main' into main
- `7eea892` Minor code updates.
- `6d625c0` Merge branch 'huggingface:main' into main
- `d120d13` Merge branch 'huggingface:main' into main
- `9ae9939` Update Doc
- `2fada31` Update Doc
- `43260c7` Merge remote-tracking branch 'origin/main'
- `ebf5aaa` Update Doc
- `b3b5f6e` Update notebook (works on colab)
- `e7de80e` Merge branch 'huggingface:main' into main
- `41b382d` update doc
- `604da6c` update doc
- `122567c` update doc
- `9ab5078` update doc
# Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods

## Introduction ([Paper](https://arxiv.org/abs/2410.17222), [Code](https://github.com/tsachiblau/Context-aware-Prompt-Tuning-Advancing-In-Context-Learning-with-Adversarial-Methods), [Notebook](cpt_train_and_inference.ipynb), [Colab](https://colab.research.google.com/drive/1UhQDVhZ9bDlSk1551SuJV8tIUmlIayta?usp=sharing))

> Large Language Models (LLMs) can perform few-shot learning using either optimization-based approaches or In-Context Learning (ICL). Optimization-based methods often suffer from overfitting, as they require updating a large number of parameters with limited data. In contrast, ICL avoids overfitting but typically underperforms compared to optimization-based methods and is highly sensitive to the selection, order, and format of demonstration examples. To overcome these challenges, we introduce Context-aware Prompt Tuning (CPT), a method inspired by ICL, Prompt Tuning (PT), and adversarial attacks. CPT builds on the ICL strategy of concatenating examples before the input, extending it by incorporating PT-like learning to refine the context embedding through iterative optimization, extracting deeper insights from the training examples. Our approach carefully modifies specific context tokens, considering the unique structure of the examples within the context. In addition to updating the context with PT-like optimization, CPT draws inspiration from adversarial attacks, adjusting the input based on the labels present in the context while preserving the inherent value of the user-provided data. To ensure robustness and stability during optimization, we employ a projected gradient descent algorithm, constraining token embeddings to remain close to their original values and safeguarding the quality of the context. Our method has demonstrated superior accuracy across multiple classification tasks using various LLM models, outperforming existing baselines and effectively addressing the overfitting challenge in few-shot learning.
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/cpt.png"/>
</div>
<small>CPT optimizing only specific token embeddings while keeping the rest of the model frozen <a href="https://huggingface.co/papers/2410.17222">(image source)</a>.</small>

---
## Dataset Creation and Collation for CPT

This document explains how to prepare datasets for CPT, linking the dataset preparation processes in the code to the methods and principles described in the CPT paper, specifically in **Sections 3.1**, **3.2**, and **3.3**.

---
### Template-Based Tokenization

#### The Role of Templates
Templates define the structure of the input-output pairs, enabling the model to interpret the task within a unified context.

- **Input Templates**:
  Templates like `"input: {sentence}"` structure raw input sentences. The `{sentence}` placeholder is replaced with the actual input text.

- **Output Templates**:
  Templates such as `"output: {label}"` format the labels (e.g., `positive`, `negative`).

- **Separator Tokens**:
  Separators distinguish different parts of an example, such as the input text and the label, and also separate examples within the context.
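The template roles above can be sketched in a few lines of plain Python. This is a minimal illustration, not PEFT's actual implementation; the template strings and separator choices are assumptions made for the example.

```python
# Hypothetical sketch of template-based context construction for CPT.
# Template strings and separators below are illustrative assumptions.

INPUT_TEMPLATE = "input: {sentence}"
OUTPUT_TEMPLATE = "output: {label}"
EXAMPLE_SEPARATOR = "\n\n"  # separates examples within the context
PAIR_SEPARATOR = "\n"       # separates the input from its label

def build_context(examples):
    """Concatenate few-shot (sentence, label) pairs into one context string."""
    parts = []
    for sentence, label in examples:
        pair = (INPUT_TEMPLATE.format(sentence=sentence)
                + PAIR_SEPARATOR
                + OUTPUT_TEMPLATE.format(label=label))
        parts.append(pair)
    return EXAMPLE_SEPARATOR.join(parts)

context = build_context([
    ("the movie was great", "positive"),
    ("the plot dragged on", "negative"),
])
print(context)
```

The resulting string is then tokenized as a single sequence, with each token's role (template text, user input, or label) recorded for use during optimization.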
#### How CPT Utilizes Context Structure

CPT leverages the context structure, encoded within the `cpt_tokens_type_mask`, to optimize the context effectively. By treating different token types according to their roles, the model updates some tokens while using others solely for optimization:

1. **Refrain from Updating Label Tokens**:
   Some context tokens represent label tokens, which contain valuable, unmodifiable information. By excluding these tokens from updates during training, CPT ensures that the labels remain fixed, preserving their integrity.
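A token-type mask of this kind can be sketched as follows. The numeric type codes here are assumptions chosen for illustration, not the exact values used in PEFT's `cpt_tokens_type_mask`; the point is only that label tokens are marked so they can be excluded from updates.

```python
# Illustrative token-type codes (assumed, not PEFT's actual values).
INPUT_TEMPLATE_TOKEN = 1   # tokens from the "input:" template text
INPUT_TOKEN = 2            # tokens from the user-provided sentence
OUTPUT_TEMPLATE_TOKEN = 3  # tokens from the "output:" template text
LABEL_TOKEN = 4            # tokens from the label itself

def updatable_mask(tokens_type_mask):
    """Return True for tokens whose embeddings may be updated.

    Label tokens stay frozen so the information they carry is preserved."""
    return [t != LABEL_TOKEN for t in tokens_type_mask]

type_mask = [1, 2, 2, 3, 4]       # e.g. "input: great movie output: positive"
print(updatable_mask(type_mask))  # the label token (type 4) is frozen
```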
2. **Apply Type-Specific Projection Norms**:
   CPT employs Projected Gradient Descent (PGD) to update context embeddings, applying tailored norms to different context parts. This approach reduces overfitting while maintaining robustness and generalization by preserving the integrity of user-provided examples.
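The projection step of PGD can be sketched as clipping each token embedding's delta back into an L2 ball around its original value, with a radius that depends on the token type. The per-type radii below are illustrative assumptions, not values from the paper.

```python
import math

# Sketch of a PGD projection step: after a gradient update, each context
# token embedding is projected back into an L2 ball around its original
# value. Radii per token type are illustrative assumptions; a radius of
# zero keeps label tokens (type 4) exactly at their original embeddings.
EPS_BY_TYPE = {1: 0.1, 2: 0.05, 3: 0.1, 4: 0.0}

def project(updated, original, token_type):
    """Clip the embedding delta to the L2 ball of the token type's radius."""
    eps = EPS_BY_TYPE[token_type]
    delta = [u - o for u, o in zip(updated, original)]
    norm = math.sqrt(sum(d * d for d in delta))
    if norm <= eps or norm == 0.0:
        return updated  # already inside the ball
    scale = eps / norm
    return [o + d * scale for o, d in zip(original, delta)]

orig = [0.0, 0.0]
upd = [0.3, 0.4]              # delta norm = 0.5
print(project(upd, orig, 2))  # rescaled so the delta norm equals 0.05
```

In practice this projection runs over full embedding tensors after each optimizer step, but the per-token logic is the same.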
#### Limitations
CPT is designed for few-shot scenarios, as concatenating more examples increases memory usage due to the self-attention mechanism and additional loss terms. For larger datasets, users can limit the number of context examples and use the remaining samples solely for optimization to manage memory efficiently.
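The split suggested above can be expressed as a one-line partition of the dataset. The cap of 8 context examples is an arbitrary illustrative choice, not a recommendation from the paper.

```python
# Sketch of the dataset split described above: keep a small number of
# examples in the context, and use the rest only as optimization targets.
MAX_CONTEXT_EXAMPLES = 8  # arbitrary illustrative cap

def split_for_cpt(dataset):
    """Partition a dataset into context examples and optimization-only samples."""
    context = dataset[:MAX_CONTEXT_EXAMPLES]
    optimization_only = dataset[MAX_CONTEXT_EXAMPLES:]
    return context, optimization_only

ctx, opt = split_for_cpt(list(range(20)))
print(len(ctx), len(opt))  # 8 12
```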
## Citation
```bibtex
@article{blau2025cpt,
  title={Context-Aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods},
  author={Blau, Tsachi and Kimhi, Moshe and Belinkov, Yonatan and Bronstein, Alexander and Baskin, Chaim},
  journal={arXiv preprint arXiv:2410.17222},
  year={2025}
}
```