Showing 1 changed file with 160 additions and 0 deletions.
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from peft.utils.save_and_load import load_peft_weights"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
"Below, we show how to load a model and flatten the weights to create a datapoint for the dataset of model weights. You will have to loop this over all the fine-tuned model weights and concatenate them along the first dimension to obtain an mxn tensor, where there are m models and n is the dimensionality of the flattened LoRA weights. In order to have a one-to-one correspondence for each model with the dataset of weights we provide, you will have to first sort the models in increasing alphanumeric ordering based on the folder name it was trained on, then concatenate them in that order. You can find the labeled attributes for each set of images using the ``identity_df.pt`` that we provide, and indexing into the ``file`` attribute, which should correspond to the number in the folder name." | ||
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the LoRA adapter weights for one fine-tuned model onto the GPU.\n",
    "adapters_weights = load_peft_weights(\"models/0/unet\", device=\"cuda:0\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([1, 99648])\n"
     ]
    }
   ],
   "source": [
    "# Flatten every LoRA tensor and concatenate into a single row vector:\n",
    "# this is one datapoint in the dataset of model weights.\n",
    "w = []\n",
    "for key in adapters_weights:\n",
    "    w.append(adapters_weights[key].flatten())\n",
    "w = torch.cat(w, 0).unsqueeze(0)\n",
    "print(w.shape)"
   ]
  },
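  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sketch below illustrates the full loop described above. It assumes the fine-tuned models live in numbered subfolders of a ``models/`` directory, as in the single-model example; the folder layout is an assumption, so adjust the paths to your setup. It sorts the folders in increasing alphanumeric order and stacks the flattened weights into the m x n tensor ``flat_weights`` used below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Assumed root holding one numbered subfolder per fine-tuned model.\n",
    "model_root = \"models\"\n",
    "folders = sorted(os.listdir(model_root))  # increasing alphanumeric order\n",
    "\n",
    "rows = []\n",
    "for folder in folders:\n",
    "    weights = load_peft_weights(os.path.join(model_root, folder, \"unet\"), device=\"cuda:0\")\n",
    "    rows.append(torch.cat([weights[k].flatten() for k in weights], 0))\n",
    "\n",
    "# m x n tensor: one row of flattened LoRA weights per model.\n",
    "flat_weights = torch.stack(rows, 0)\n",
    "print(flat_weights.shape)"
   ]
  },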
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# PCA Utils"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def pca(a):\n",
    "    # Standardize each dimension, then take the SVD of the scaled data;\n",
    "    # the columns of v are the principal components.\n",
    "    mean = torch.mean(a, dim=0)\n",
    "    std = torch.std(a, dim=0)\n",
    "    scaled_data = (a - mean) / std\n",
    "    u, s, v = torch.svd(scaled_data)\n",
    "    return u, s, v, mean, std"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Example use of PCA function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "``flat_weights`` is an m x n tensor, where m is the number of models and n is the dimensionality of the flattened LoRA weights. To obtain ``flat_weights``, run the flattening code from above over all the models and concatenate the results along the first dimension. If you run out of memory during PCA, replace ``u,s,v = torch.svd(scaled_data)`` in the pca function with: <br>\n",
    "``u,s,v = torch.svd_lowrank(scaled_data, q=10000, niter=20, M=None)``."
   ]
  },
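  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is a sketch of a low-rank variant of the ``pca`` function that applies the ``torch.svd_lowrank`` substitution described above. The name ``pca_lowrank`` and its default arguments are our own choices, not part of the provided code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def pca_lowrank(a, q=10000, niter=20):\n",
    "    # Same standardization as pca(), but with a randomized low-rank SVD\n",
    "    # that only computes the top q singular vectors, saving memory.\n",
    "    mean = torch.mean(a, dim=0)\n",
    "    std = torch.std(a, dim=0)\n",
    "    scaled_data = (a - mean) / std\n",
    "    u, s, v = torch.svd_lowrank(scaled_data, q=q, niter=niter, M=None)\n",
    "    return u, s, v, mean, std"
   ]
  },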
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "u, s, v, mean, std = pca(flat_weights)"
   ]
  },
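  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The singular values returned by ``pca`` can be used to estimate how much variance the first k principal components capture, which can help in choosing k. The snippet below is an illustrative addition, not part of the provided code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# For standardized data, the variance explained by each component is\n",
    "# proportional to the squared singular value.\n",
    "explained = s ** 2 / torch.sum(s ** 2)\n",
    "k = 10000\n",
    "print(f\"Variance captured by first {k} components: {explained[:k].sum().item():.4f}\")"
   ]
  },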
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below, we provide example code for projecting the original weights onto the first k principal components and unprojecting back to the original LoRA weight space."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def project(a, v, k, mean, std):\n",
    "    # Standardize with the PCA statistics, then project onto the\n",
    "    # first k principal components (columns of v).\n",
    "    data = (a - mean) / std\n",
    "    new = torch.matmul(data, v[:, :k])\n",
    "    return new"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def unproject(projected, v, k, mean, std):\n",
    "    # Map back from PCA space to the original LoRA weight space\n",
    "    # and undo the standardization.\n",
    "    new = torch.matmul(projected, v[:, :k].T)\n",
    "    new = new * std + mean\n",
    "    return new"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "projection = project(w, v, 10000, mean, std)\n",
    "unprojection = unproject(projection, v, 10000, mean, std)"
   ]
  },
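  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative addition, not part of the provided code), the round trip of ``project`` followed by ``unproject`` should reconstruct ``w`` closely when k captures most of the variance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Mean absolute round-trip error between the original flattened\n",
    "# weights and their PCA reconstruction.\n",
    "error = torch.mean(torch.abs(w - unprojection))\n",
    "print(f\"Mean reconstruction error: {error.item():.6f}\")"
   ]
  }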
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}