Chenyang Zhu, Kai Li, Yue Ma, Longxiang Tang, Chengyu Fang, Chubin Chen, Qifeng Chen and Xiu Li
arXiv 2024
Recent advances in Customized Concept Swapping (CCS) enable a text-to-image model to swap a concept in a source image with a customized target concept. However, existing methods still suffer from inconsistency and inefficiency: they struggle to keep both the foreground and the background consistent during swapping, especially when the two objects differ greatly in shape, and they either require time-consuming training or perform redundant computation at inference. To tackle these issues, we introduce InstantSwap, a new CCS method that handles sharp shape disparities efficiently. Specifically, we first automatically extract the bounding box (bbox) of the object in the source image via attention-map analysis and use it to enforce both background and foreground consistency. For background consistency, we remove gradients outside the bbox during the swapping process so that the background is left unmodified. For foreground consistency, we employ a cross-attention mechanism to inject semantic information into both the source and target concepts inside the bbox, yielding semantically enhanced representations that focus the swapping process on the foreground objects. To improve swapping speed, instead of computing gradients at every timestep we compute them periodically, reducing the number of forward passes and substantially improving efficiency with only a minor loss in performance. Finally, we establish a benchmark dataset to enable comprehensive evaluation. Extensive experiments demonstrate the superiority and versatility of InstantSwap.
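To make the two consistency and efficiency mechanisms more concrete, below is a minimal, hypothetical PyTorch-style sketch of a latent update loop that (a) masks gradients outside the bbox so the background stays untouched and (b) recomputes the gradient only every few steps to save forward passes. The names `compute_swap_loss`, `bbox_mask`, and the hyperparameters are illustrative placeholders under these assumptions, not the released implementation.

```python
import torch

def swap_latents(latents, bbox_mask, compute_swap_loss,
                 num_steps=50, grad_interval=5, lr=0.1):
    """Hypothetical sketch of a bbox-masked, periodically updated latent optimization.

    latents:           (1, C, H, W) tensor being optimized
    bbox_mask:         (1, 1, H, W) binary mask, 1 inside the object's bbox
    compute_swap_loss: callable returning a scalar loss that pulls the
                       foreground toward the customized target concept
    """
    cached_grad = None
    for step in range(num_steps):
        # Efficiency: recompute the gradient only every `grad_interval` steps
        # and reuse the cached (masked) gradient in between.
        if cached_grad is None or step % grad_interval == 0:
            latents = latents.detach().requires_grad_(True)
            loss = compute_swap_loss(latents)
            grad = torch.autograd.grad(loss, latents)[0]
            # Background consistency: zero out gradients outside the bbox so
            # background regions are never modified by the swapping objective.
            cached_grad = grad * bbox_mask
        # Foreground-only update; background latents receive zero gradient.
        latents = (latents - lr * cached_grad).detach()
    return latents
```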
- [2024.12.2] 🔥 Released the paper and project page!
We are working hard to clean up the code and will open-source it as soon as we receive permission.
- Inference Code
- Evaluation Code
More results can be found on our project page.
@misc{zhu2024instantswapfastcustomizedconcept,
      title={InstantSwap: Fast Customized Concept Swapping across Sharp Shape Differences},
      author={Chenyang Zhu and Kai Li and Yue Ma and Longxiang Tang and Chengyu Fang and Chubin Chen and Qifeng Chen and Xiu Li},
      year={2024},
      eprint={2412.01197},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.01197},
}
This repository borrows heavily from Prompt-to-Prompt, PnP Inversion, and 🤗 Diffusers. Thanks to the authors for sharing their code and models.
This is the codebase for our research work. We are still actively updating this repo, and more details will follow in the coming days. If you have any questions or ideas to discuss, feel free to contact Chenyang Zhu.