Pixel-level and Semantic-level Adjustable Super-resolution: A Dual-LoRA Approach

Lingchen Sun¹,² | Rongyuan Wu¹,² | Zhiyuan Ma¹ | Shuaizheng Liu¹,² | Qiaosi Yi¹,² | Lei Zhang¹,²

¹The Hong Kong Polytechnic University, ²OPPO Research Institute

⏰ Update

  • 2025.1.2: Code and models are released.
  • 2024.12.4: The paper and this repo are released.

⭐ If PiSA-SR is helpful to your images or projects, please help star this repo. Thanks! 🤗

🌟 Overview Framework

[Figure: PiSA-SR overview framework]

(a) Training procedure of PiSA-SR. During training, two LoRA modules are optimized separately for pixel-level and semantic-level enhancement.

(b) Inference procedure of PiSA-SR. During inference, users can either keep the default setting to reconstruct the high-quality image in a single diffusion step, or adjust λpix and λsem to control the strengths of pixel-level and semantic-level enhancement.
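One way to read the adjustable inference is as a linear scaling of the two LoRA residuals in latent space. The formula below is a schematic sketch of this idea; r_pix and r_pisa are our own shorthand for the residuals predicted with only the pixel-level LoRA active and with both LoRA modules active, respectively, and are not the paper's exact notation:

ẑ_HQ ≈ z_LQ − λpix · r_pix(z_LQ) − λsem · ( r_pisa(z_LQ) − r_pix(z_LQ) )

Under this reading, λpix = λsem = 1 reduces to the default one-step result, while raising either scale strengthens the corresponding enhancement.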

😍 Visual Results

Demo on Real-world SR

Demo on AIGC Enhancement

Adjustable SR Results

[Figure: adjustable SR results of PiSA-SR]

By increasing the guidance scale λpix on the pixel-level LoRA module, image degradations such as noise and compression artifacts are gradually removed; however, an overly strong λpix will over-smooth the SR image. By increasing the guidance scale λsem on the semantic-level LoRA module, the SR images gain richer semantic details; nonetheless, an overly high λsem will introduce visual artifacts.

Comparisons with Other DM-Based SR Methods

[Figure: visual comparisons of PiSA-SR with other DM-based SR methods]

⚙ Dependencies and Installation

# git clone this repository
git clone https://github.com/csslc/PiSA-SR
cd PiSA-SR


# create an environment
conda create -n PiSA-SR python=3.10
conda activate PiSA-SR
pip install --upgrade pip
pip install -r requirements.txt
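After installation, an optional sanity check can confirm that PyTorch and CUDA are visible before running inference (illustrative; assumes requirements.txt installs torch):

# optional: verify that PyTorch is installed and the GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"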

🍭 Quick Inference

Step 1: Download the pretrained models (the Stable Diffusion 2.1-base weights and the PiSA-SR checkpoint) and place them under preset/models, as shown below.
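The test commands below expect the base model and the PiSA-SR checkpoint (pisa_sr.pkl) at the paths used in their arguments. One possible way to set this up, inferred from those arguments rather than from an official script, is:

# fetch the SD 2.1-base weights from Hugging Face (one possible source; requires huggingface_hub)
huggingface-cli download stabilityai/stable-diffusion-2-1-base --local-dir preset/models/stable-diffusion-2-1-base

# place the PiSA-SR checkpoint released by the authors at:
# preset/models/pisa_sr.pkl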

Step 2: Prepare testing data

You can put the testing images in the preset/test_datasets folder.
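For example (illustrative paths; any common low-quality input images will do):

mkdir -p preset/test_datasets
cp /path/to/your/low_quality_images/*.png preset/test_datasets/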

Step 3: Run the testing command

For the default setting:

python test_pisasr.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--pretrained_path preset/models/pisa_sr.pkl \
--process_size 512 \
--upscale 4 \
--input_image preset/test_datasets \
--output_dir experiments/test \
--default

For the adjustable setting:

python test_pisasr.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--pretrained_path preset/models/pisa_sr.pkl \
--process_size 512 \
--upscale 4 \
--input_image preset/test_datasets \
--output_dir experiments/test \
--lambda_pix 1.0 \
--lambda_sem 1.0

🛠️ You can adjust lambda_pix and lambda_sem to control the strengths of pixel-level fidelity and semantic-level details.
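For instance, to compare different strengths of semantic enhancement side by side, you can sweep lambda_sem while keeping lambda_pix fixed (an illustrative loop; the output directory names are our own):

for sem in 0.5 1.0 1.5; do
python test_pisasr.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--pretrained_path preset/models/pisa_sr.pkl \
--process_size 512 \
--upscale 4 \
--input_image preset/test_datasets \
--output_dir experiments/test_sem_${sem} \
--lambda_pix 1.0 \
--lambda_sem ${sem}
done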

We integrate tile_diffusion and tile_vae into test_pisasr.py to save GPU memory during inference. You can change the tile size and stride according to the VRAM of your device.

python test_pisasr.py \
--pretrained_model_path preset/models/stable-diffusion-2-1-base \
--pretrained_path preset/models/pisa_sr.pkl \
--process_size 512 \
--upscale 4 \
--input_image preset/test_datasets \
--output_dir experiments/test \
--latent_tiled_size 96 \
--latent_tiled_overlap 32 \
--vae_encoder_tiled_size 1024 \
--vae_decoder_tiled_size 224 \
--default

Citations

If our code helps your research or work, please consider citing our paper. The BibTeX reference is:

@article{sun2024pisasr,
  title={Pixel-level and Semantic-level Adjustable Super-resolution: A Dual-LoRA Approach},
  author={Sun, Lingchen and Wu, Rongyuan and Ma, Zhiyuan and Liu, Shuaizheng and Yi, Qiaosi and Zhang, Lei},
  journal={arXiv preprint arXiv:2412.03017},
  year={2024}
}

License

This project is released under the Apache 2.0 license.

Acknowledgement

Contact

If you have any questions, please contact: ling-chen.sun@connect.polyu.hk
