[Paper]
Hang Guo¹ | Yawei Li² | Tao Dai³ | Shu-Tao Xia¹,⁴ | Luca Benini²
¹Tsinghua University, ²ETH Zurich, ³Shenzhen University, ⁴Pengcheng Laboratory
⭐ If our IntLoRA is helpful to your work or projects, please consider starring this repo. Thanks! 🤗
Our IntLoRA offers three key advantages: (i) for fine-tuning, the pre-trained weights are quantized, reducing memory usage; (ii) for storage, both pre-trained and low-rank weights are in INT which consumes less disk space; (iii) for inference, IntLoRA weights can be naturally merged into quantized pre-trained weights through efficient integer multiplication or bit-shifting, eliminating additional post-training quantization.
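To make advantage (iii) concrete, here is a minimal pure-Python sketch, not the actual IntLoRA implementation, of why an integer low-rank update can be folded into integer pre-trained weights by integer multiplication or bit-shifting without ever leaving the integer domain (all function names, shapes, and values below are illustrative assumptions):

```python
# Toy illustration (NOT the repository's code): because both the
# pre-trained weights and the low-rank update are integers, the merged
# result is itself an integer tensor, so no extra post-training
# quantization pass is needed after merging.

def merge_mul(w_q, lora_update, scale):
    """'MUL'-style merge: rescale by an integer multiplication."""
    return [[w * scale + u for w, u in zip(wr, ur)]
            for wr, ur in zip(w_q, lora_update)]

def merge_shift(w_q, lora_update, shift):
    """'SHIFT'-style merge: rescale by a power of two via bit-shift."""
    return [[(w << shift) + u for w, u in zip(wr, ur)]
            for wr, ur in zip(w_q, lora_update)]

# tiny INT "weights" and an integer low-rank update
W_q = [[10, -3], [7, 2]]
U = [[1, 0], [-2, 4]]

print(merge_mul(W_q, U, 4))    # integer multiply by 4
print(merge_shift(W_q, U, 2))  # bit-shift by 2, i.e. multiply by 2**2
```

Shifting by 2 bits is the same as multiplying by 4, so the two merges agree here; the SHIFT variant simply restricts the scale to powers of two so the multiplication becomes a cheap bit operation.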
```bash
# git clone this repository
git clone https://github.com/csguoh/IntLoRA.git
cd ./IntLoRA

# create a conda environment
conda env create -f environment.yaml
conda activate intlora
```
- This code repository contains DreamBooth fine-tuning using our IntLoRA. One can download the subject-driven generation datasets here.
- You also need to download the pre-trained model weights, which will be fine-tuned with our IntLoRA. Here, we use Stable Diffusion-1.5 as an example.
- The main fine-tuning logic is defined in `train_dreambooth_quant.py`. We have also provided an off-the-shelf configuration bash file, so one can directly train customized diffusion models with the following command: `bash ./train_dreambooth_quant.sh`
-
The following are some key parameters that you may want to modify:

- `rank`: the inner rank of the LoRA adapter
- `intlora`: choose `'MUL'` to use our IntLoRA-MUL or `'SHIFT'` to use our IntLoRA-SHIFT
- `nbits`: the number of bits for weight quantization
- `use_activation_quant`: whether to use activation quantization
- `act_nbits`: the number of bits for activation quantization
- `gradient_checkpointing`: whether to use gradient checkpointing to further reduce GPU memory cost
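For intuition on what `use_activation_quant` and `act_nbits` control, below is a generic uniform fake-quantization sketch. This is only an illustrative assumption of what an activation quantizer does; the quantizer actually used in this repository may differ:

```python
def quantize_activation(x, nbits):
    """Uniform symmetric fake-quantization of a list of activations to
    `nbits` bits: snap each value to a grid of 2**(nbits-1)-1 positive
    levels, then map it back to float. Illustrative only."""
    qmax = 2 ** (nbits - 1) - 1
    scale = max(abs(v) for v in x) / qmax or 1.0  # avoid div-by-zero
    return [round(v / scale) * scale for v in x]

acts = [0.5, -1.0, 0.25, 0.9]
print(quantize_activation(acts, 4))  # 4-bit grid: visibly coarse steps
print(quantize_activation(acts, 8))  # 8-bit grid: nearly lossless
```

Lower `act_nbits` means a coarser grid and larger rounding error, which is the usual accuracy/efficiency trade-off behind these two flags.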
- After running the fine-tuning command above, you can find the generated results in the `./log_quant` folder.
- After generating the images, you can test the quality of each generated image with the following command: `python evaluation.py`
- This produces a `.json` file containing the IQA results for each subject. The overall evaluation result can then be obtained with: `python get_results.py`
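Conceptually, an aggregation step like this can be sketched as follows. The JSON layout, metric names, and scores below are hypothetical placeholders for illustration, not the repository's actual file format:

```python
import json
import os
import statistics
import tempfile

def aggregate(json_path):
    """Load per-subject IQA scores ({subject: {metric: score}}) and
    return the mean of each metric across subjects. Illustrative only;
    the repository's actual result format may differ."""
    with open(json_path) as f:
        per_subject = json.load(f)
    metrics = {}
    for scores in per_subject.values():
        for name, value in scores.items():
            metrics.setdefault(name, []).append(value)
    return {name: statistics.mean(vals) for name, vals in metrics.items()}

# demo with fabricated scores written to a temporary file
demo = {"dog": {"CLIP-I": 1.0, "DINO": 0.5},
        "backpack": {"CLIP-I": 0.5, "DINO": 0.25}}
path = os.path.join(tempfile.mkdtemp(), "results.json")
with open(path, "w") as f:
    json.dump(demo, f)

print(aggregate(path))  # {'CLIP-I': 0.75, 'DINO': 0.375}
```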
If our code helps your research or work, please consider citing our paper. The following is the BibTeX reference:
```
@article{guo2024intlora,
  title={IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models},
  author={Guo, Hang and Li, Yawei and Dai, Tao and Xia, Shu-Tao and Benini, Luca},
  journal={arXiv preprint arXiv:2410.21759},
  year={2024}
}
```